Libero - Train Your Own Model
=============================
This section describes how to train your own model using Libero data and configurations.
Prerequisites
-------------

- Complete the environment installation (refer to :doc:`installation/index`).
- Complete data preparation (refer to :doc:`data/index`).
- Prepare a training configuration file and pre-trained weights (optional).
Quick Start
-----------
This example uses single-node, multi-GPU fine-tuning of the Pi0.5 model. First, prepare the Pi0.5 pre-trained model weights, or download them from HuggingFace.
You also need to prepare the LIBERO-10 dataset in LeRobot format. Refer to the data preparation documentation for download instructions.
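Before launching training, it can help to verify that both artifacts are in place. The sketch below assumes the default local layout used in this guide's config (the paths are illustrative; adjust them to your setup):

```python
from pathlib import Path

def check_ready(weights: Path, dataset: Path) -> list:
    """Return human-readable problems; an empty list means ready to train."""
    problems = []
    if not weights.is_file():
        problems.append(f"missing pre-trained weights: {weights}")
    if not dataset.is_dir():
        problems.append(f"missing dataset directory: {dataset}")
    return problems

# Assumed local layout from the steps above; adjust to your setup.
for problem in check_ready(
    Path("./checkpoints/pi05_libero/model.safetensors"),
    Path("./datasets/libero_10_no_noops_lerobotv2.1"),
):
    print(problem)
```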
Then set the environment variables. Adjust ``MLP_WORKER_GPU`` according to the actual number of GPUs:

.. code-block:: bash

   export MLP_WORKER_GPU=8
   export MLP_WORKER_NUM=1
   export MLP_ROLE_INDEX=0
   export MLP_WORKER_0_HOST=localhost
   export MLP_WORKER_0_PORT=29500
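These variables typically describe the distributed topology: GPUs per node, number of nodes, this node's rank, and the rendezvous address of node 0. As an illustration only (the framework's launcher may consume them differently), here is how they usually combine, assuming one process per GPU:

```python
import os

def topology(env=os.environ):
    """Derive the global training topology from MLP_* variables (sketch)."""
    gpus_per_node = int(env.get("MLP_WORKER_GPU", "8"))
    num_nodes = int(env.get("MLP_WORKER_NUM", "1"))
    return {
        "world_size": gpus_per_node * num_nodes,   # total processes, one per GPU
        "node_rank": int(env.get("MLP_ROLE_INDEX", "0")),
        "master_addr": env.get("MLP_WORKER_0_HOST", "localhost"),
        "master_port": int(env.get("MLP_WORKER_0_PORT", "29500")),
    }

print(topology({}))
```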
Modify the pre-trained weight path and dataset path in ``configs/pi05/pi05_paligemma_libero10_full_finetune.py``:

.. code-block:: python

   ...
   freeze_vision_backbone=False,
   pretrained_name_or_path=  # noqa: E251
   './checkpoints/pi05_libero/model.safetensors',  # Replace with the actual pre-trained weight path
   name_mapping={
   ...
   ...
   datasets=dict(
       type='ParquetDataset',
       data_root_path=  # noqa: E251
       './datasets/libero_10_no_noops_lerobotv2.1',  # Replace with the actual dataset path
       transforms=[
   ...
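If you would rather not edit the config file in place, a dict-style config can also be patched at load time. The helper below is hypothetical (the framework may provide its own override mechanism, e.g. command-line options); it only illustrates the idea for a plain-dict config:

```python
def patch_paths(cfg: dict, weights: str, data_root: str) -> dict:
    """Return a copy of a dict-style config with both paths replaced (sketch)."""
    cfg = dict(cfg)  # shallow copy so the original config is untouched
    cfg["pretrained_name_or_path"] = weights
    datasets = dict(cfg.get("datasets", {}))
    datasets["data_root_path"] = data_root
    cfg["datasets"] = datasets
    return cfg
```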
Launch training using ``scripts/train.sh``:

.. code-block:: bash

   bash scripts/train.sh \
       configs/pi05/pi05_paligemma_libero10_full_finetune.py \
       work_dirs/pi05_paligemma_libero10_full_finetune  # Replace with your desired output directory
Training Artifacts
------------------
After training, the ``work_dirs/...`` directory typically contains:

- ``checkpoint_*.pt`` model files
- Logs and configuration backup files
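To resume from or evaluate the newest checkpoint, you can pick it by modification time. A minimal sketch, assuming checkpoint files follow the ``checkpoint_*.pt`` naming pattern above:

```python
from pathlib import Path
from typing import Optional

def latest_checkpoint(work_dir: str) -> Optional[Path]:
    """Return the most recently written checkpoint_*.pt file, or None."""
    candidates = sorted(
        Path(work_dir).glob("checkpoint_*.pt"),
        key=lambda p: p.stat().st_mtime,
    )
    return candidates[-1] if candidates else None
```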
For more training parameter details, refer to :doc:`vla`.