# Frequently Asked Questions
Below is a compilation of frequently asked questions about FluxVLA. This section is updated continuously; we welcome your questions, which help us improve!
You are also welcome to ask questions via the 🦞 OpenClaw Assistant in the upper-right corner. The "Lobster Assistant" will answer your queries and collect them for future reference.
## How to perform Libero evaluation on devices without ray tracing capabilities (e.g., A100)?

To support Libero evaluation on devices without ray tracing capabilities (such as the A100), please refer to *GPU rendering on EGL devices*.
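In practice, headless EGL rendering is usually enabled through environment variables before launching the evaluation. The exact variables depend on your rendering stack; the following is a sketch assuming a MuJoCo-based simulator backend (as Libero uses), not a FluxVLA-specific setting:

```shell
# Render off-screen via EGL instead of GLX/X11 (assumes a MuJoCo-based backend).
# Set these before launching the evaluation script.
export MUJOCO_GL=egl
export PYOPENGL_PLATFORM=egl
```

If rendering still fails, verify that the EGL driver libraries for your GPU are installed on the machine.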
## How to debug with VSCode?
FluxVLA’s training and evaluation scripts use torchrun to launch distributed training, which is incompatible with VSCode’s default Python debugging method. Breakpoint debugging can be achieved by configuring .vscode/launch.json to use torchrun as the debug entry program.
### 1. Create the debug configuration file
Create the .vscode/launch.json file in the project root directory (or edit it directly if it already exists).
### 2. Configuration template
The following is a general configuration template for debugging a training script:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Train Debug",
      "type": "debugpy",
      "request": "launch",
      "program": "/root/miniconda3/bin/torchrun",
      "args": [
        "--nnodes", "1",
        "--nproc_per_node", "2",
        "scripts/train.py",
        "--config", "configs/<model>/<your_config>.py",
        "--work-dir", "work_dirs/<your_work_dir>",
        "--cfg-options",
        "train_dataloader.batch_size=4",
        "train_dataloader.per_device_batch_size=2",
        "runner.max_epochs=None",
        "runner.max_steps=100",
        "runner.save_iter_interval=10"
      ],
      "console": "integratedTerminal",
      "justMyCode": false,
      "env": {
        "CUDA_VISIBLE_DEVICES": "0,1",
        "HF_ENDPOINT": "https://hf-mirror.com",
        "WANDB_MODE": "disabled"
      }
    }
  ]
}
```
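Because this file is hand-edited, a stray trailing comma is a common failure mode. As a quick sanity check, the file can be parsed with the standard-library `json` module; note that VSCode itself accepts JSONC (comments and trailing commas), which strict JSON parsing rejects, so this check only applies if your file contains no comments:

```python
import json

# Minimal sanity check: a stray trailing comma makes json.loads raise,
# surfacing the same syntax error VSCode would flag in launch.json.
launch = """
{
  "version": "0.2.0",
  "configurations": [
    {"name": "Train Debug", "type": "debugpy", "request": "launch"}
  ]
}
"""
cfg = json.loads(launch)
names = [c["name"] for c in cfg["configurations"]]
print(names)  # ['Train Debug']
```

To check your actual file, replace the inline string with `open(".vscode/launch.json").read()`.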
The configuration for the evaluation script is similar: replace `scripts/train.py` with `scripts/eval.py`, and substitute `--work-dir` and related arguments with `--ckpt-path`:
```json
{
  "name": "Eval Debug",
  "type": "debugpy",
  "request": "launch",
  "program": "/root/miniconda3/bin/torchrun",
  "args": [
    "--nnodes", "1",
    "--nproc_per_node", "2",
    "scripts/eval.py",
    "--config", "configs/<model>/<your_config>.py",
    "--ckpt-path", "work_dirs/<your_work_dir>/checkpoints/latest-checkpoint.pt"
  ],
  "console": "integratedTerminal",
  "justMyCode": false,
  "env": {
    "CUDA_VISIBLE_DEVICES": "0,1",
    "HF_ENDPOINT": "https://hf-mirror.com",
    "WANDB_MODE": "disabled"
  }
}
```
### 3. Key parameter descriptions

| Parameter | Description |
|---|---|
| `type` | Must be set to `debugpy` |
| `program` | Absolute path to `torchrun` |
| `--nproc_per_node` | Number of GPUs per node; it is recommended to set this to a small value (e.g., 1 or 2) during debugging |
| `justMyCode` | Set to `false` so the debugger can step into library code |
| `CUDA_VISIBLE_DEVICES` | Controls visible GPUs; it is recommended to limit this to a small number of GPUs during debugging |
| `WANDB_MODE` | Set to `disabled` to turn off wandb logging during debugging |
| `--cfg-options` | During debugging, it is recommended to reduce parameters such as `batch_size` and `max_steps` to shorten iteration time |
### 4. Usage

1. Open the FluxVLA project in VSCode
2. Press `F5` or click the run button in the debug panel on the left
3. Select the corresponding debug configuration from the dropdown menu
4. Set breakpoints in the code and begin step-by-step debugging

> Tip: When debugging distributed training, breakpoints are triggered in every process. To debug only a single process, set `--nproc_per_node` to `1` and adjust `CUDA_VISIBLE_DEVICES` accordingly.
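Alternatively, the tip above can be handled in code: gate the breakpoint on the process rank so only one worker stops while the others continue. A minimal sketch, relying on the fact that torchrun exports a `RANK` environment variable for each worker (the helper name is illustrative, not a FluxVLA API):

```python
import os

def is_rank_zero() -> bool:
    # torchrun sets RANK for every worker process;
    # default to 0 so the check also works for single-process runs.
    return int(os.environ.get("RANK", "0")) == 0

if is_rank_zero():
    # Only the rank-0 worker reaches this point.
    # breakpoint()  # uncomment while debugging
    pass
```

Keep in mind that pausing only one rank can stall collective operations (e.g., gradient all-reduce) on the other ranks, so this is best used before distributed communication begins.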
## Common issues with Transformers installation

FluxVLA depends on the Hugging Face `transformers` library, but this dependency is not included in `requirements.txt` and must be installed manually. Since different models have different `transformers` version requirements, version conflicts are frequently encountered during installation.
### 1. Recommended installation method

Following the instructions in the README, install it separately after installing FluxVLA:

```shell
pip install transformers==4.53.0
```
### 2. Version requirements for different models

Different models expect different versions of `transformers` in their code or configurations:

| Model | Recommended Version | Notes |
|---|---|---|
| OpenVLA / dinosiglip-qwen2_5 | `transformers==4.40.1` | Explicit version check in the code; also requires `tokenizers==0.19.1` |
| Pi0 / Pi0.5 / Gr00t / LlavaVLA etc. | `transformers==4.53.0` | Use the version recommended in the README |
| Tron2 deployment | — | See the Tron2 inference deployment documentation |
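To fail fast on a mismatched version instead of hitting obscure errors mid-run, a lightweight check can be added at startup. A sketch using only the standard library (the version strings are examples; this is not FluxVLA's actual mechanism):

```python
def parse_version(v: str) -> tuple:
    """Parse a version string like '4.53.0' into (4, 53, 0) for comparison."""
    return tuple(int(p) for p in v.split(".")[:3])

def check_transformers(installed: str, required: str) -> bool:
    """Return True when the installed version matches the required one exactly."""
    return parse_version(installed) == parse_version(required)

# In a real script, obtain the installed version with:
#   import transformers; installed = transformers.__version__
print(check_transformers("4.53.0", "4.53.0"))  # True
print(check_transformers("4.40.1", "4.53.0"))  # False
```

An exact match is deliberately strict here; for ranges you would compare the parsed tuples with `<=` bounds instead.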
### 3. Common issues and solutions

**Issue 1: Version warning when using OpenVLA**

```text
Expected `transformers==4.40.1` and `tokenizers==0.19.1` but got ...
there might be inference-time regressions due to dependency changes.
```

This occurs because the OpenVLA pretrained model was built with `transformers==4.40.1`. If you primarily use OpenVLA, it is recommended to downgrade:

```shell
pip install transformers==4.40.1 tokenizers==0.19.1
```
> Note: After downgrading to 4.40.1, other models (such as Pi0, Gr00t, etc.) may not function correctly. If you need to use multiple models simultaneously, it is recommended to create a separate Conda environment for each model.
**Issue 2: `pip install transformers` causes other dependencies to be upgraded**

When installing `transformers`, pip may automatically upgrade packages such as numpy, tokenizers, and huggingface-hub, leading to conflicts with other FluxVLA dependencies. The recommended installation order is:

```shell
# 1. First install FluxVLA and its dependencies
pip install -r requirements.txt
python setup.py develop

# 2. Then install transformers
pip install transformers==4.53.0

# 3. Finally, pin numpy back to the required version
pip install numpy==1.26.4
```
**Issue 3: ImportError or AttributeError**

If you encounter errors similar to the following:

```text
ImportError: cannot import name 'XXX' from 'transformers'
AttributeError: module 'transformers' has no attribute 'XXX'
```

This is typically caused by a `transformers` version that is either too old or too new, resulting in API incompatibility. Verify the current version and reinstall the target version:

```shell
python -c "import transformers; print(transformers.__version__)"
pip install transformers==<target_version>
```
**Issue 4: Installing transformers from source**

When the pip-installed version does not include the latest fixes, you can install from source:

```shell
pip install git+https://github.com/huggingface/transformers.git@v4.53.0
```
## Errors encountered during mixed training with multiple datasets

If you encounter an error similar to the following:

```text
ValueError: The features can't be aligned because the key observation.state of features {'observation.state': List(Value('float32')), 'observation.states.ee_state': List(Value('float32')), 'observation.states.joint_state': List(Value('float32')), 'observation.states.gripper_state': List(Value('float32')), 'action': List(Value('float32')), 'timestamp': Value('float32'), 'frame_index': Value('int64'), 'episode_index': Value('int64'), 'index': Value('int64'), 'task_index': Value('int64')} has unexpected type - List(Value('float32')) (expected either List(Value('float32'), length=8) or Value("null").
```
This may be because you are using different versions of LeRobot data. Please ensure that all data conforms to `LeRobotDataset` v2.1. It is recommended to use a specific commit ID to ensure format consistency:

```shell
git clone https://github.com/huggingface/lerobot.git
cd lerobot
git checkout 55198de096f46a8e0447a8795129dd9ee84c088c
pip install -e .
```
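Before launching a long mixed-dataset run, it can also help to pre-check that all datasets agree on their feature schemas, so mismatches like the one above surface immediately. A hypothetical sketch over plain feature-name/type mappings (the dicts mirror what the error message prints; `find_schema_mismatches` is illustrative, not a LeRobot API):

```python
def find_schema_mismatches(features_by_dataset: dict) -> list:
    """Compare feature schemas across datasets and list conflicting keys.

    features_by_dataset maps dataset name -> {feature key: type string}.
    Returns a list of (feature key, {dataset name: type string}) tuples
    for every key whose type differs between datasets.
    """
    # Collect every type string observed for each feature key.
    seen: dict = {}
    for name, features in features_by_dataset.items():
        for key, ftype in features.items():
            seen.setdefault(key, {})[name] = ftype
    # A key conflicts when more than one distinct type was observed.
    return [(key, per_ds) for key, per_ds in seen.items()
            if len(set(per_ds.values())) > 1]

datasets = {
    "ds_a": {"observation.state": "List(float32, length=8)", "action": "List(float32)"},
    "ds_b": {"observation.state": "List(float32)", "action": "List(float32)"},
}
for key, detail in find_schema_mismatches(datasets):
    print(f"schema conflict on {key}: {detail}")
```

Running the check on the two example schemas reports a single conflict on `observation.state`, which is exactly the key the `ValueError` above complains about.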