Examples#
This section presents the examples currently supported by FluxVLA, illustrating how to apply the framework to various scenarios and demonstrating its efficiency and scalability in practice. The example library is continuously updated to cover more scenarios and tasks, showcasing FluxVLA's versatility and extensibility.
The contents of this section are organized as follows:
LIBERO Simulation Training: A complete workflow for fine-tuning vision-language-action models via reinforcement learning in the LIBERO environment, covering environment setup, training, and evaluation. See LIBERO Simulation Data Training for details.
Pi0.5 Model Training: An example of reinforcement learning fine-tuning of the π0.5 model in the LIBERO environment, encompassing the full pipeline from data ingestion to evaluation visualization. See π₀.₅ Model Training for details.
GR00T-N1.5 Model Training: An example of reinforcement learning fine-tuning for GR00T-N1.5 in the LIBERO environment, covering environment setup, algorithm configuration, and visualization. See GR00T-N1.5 Model Training for details.
VLM Fine-Tuning: A minimal VLM training and evaluation workflow using a Qwen3-VL 0.6B model (a Qwen3 0.6B LLM paired with a Qwen3 2B vision encoder), fine-tuned on the Cambrian-7M dataset. See VLM Training and Evaluation Example for details.
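The simulation examples above share a common shape: set up an environment, run a training loop that updates the policy from rollout returns, then evaluate the resulting policy. The toy sketch below illustrates that pattern only; `DummyEnv`, the scalar policy, and the update rule are illustrative stand-ins, not FluxVLA or LIBERO APIs — each subsection documents the real entry points and configuration.

```python
# Schematic train-then-evaluate loop. Every name here is a placeholder
# standing in for the real simulator, policy, and RL algorithm.

class DummyEnv:
    """Stand-in for a simulator such as LIBERO: reset/step with a reward."""

    def __init__(self, horizon=5):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return 0.0  # observation placeholder

    def step(self, action):
        self.t += 1
        reward = 1.0 if action > 0 else 0.0  # toy reward: positive actions succeed
        done = self.t >= self.horizon
        return 0.0, reward, done


def run_episode(env, policy):
    """Roll out one episode and return the total reward."""
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


# "Training": nudge a scalar policy parameter until rollouts succeed.
theta = -1.0
for _ in range(20):
    ret = run_episode(DummyEnv(), lambda obs: theta)
    if ret < 5.0:  # below max return: adjust the policy
        theta += 0.1

# "Evaluation": count successful rollouts with the trained policy.
success = sum(run_episode(DummyEnv(), lambda obs: theta) == 5.0 for _ in range(3))
print("evaluation successes:", success)
```

In the actual examples, the environment is a LIBERO simulation, the policy is a vision-language-action model, and the update step is the configured RL algorithm; the surrounding loop structure is the same.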
Note:
Please complete the environment installation and data preparation before running the examples (refer to “Quick Start”).
The example library will be expanded with additional scenarios in future releases (e.g., more state-of-the-art models, simulators such as RoboCasa, and real-robot reinforcement learning examples).
For detailed steps and configuration of a specific example, please navigate to the corresponding subsection via the left sidebar.