Libero - Inference with a Pre-trained Model#

This section describes how to use a pre-trained model to perform inference and evaluation on Libero tasks.

Prerequisites#

  • Complete the environment installation (refer to :doc:`installation/index`).

Quick Start#

First, prepare model weights for evaluation, or download the pre-trained Pi0.5 weights from HuggingFace. Then set the following environment variables, adjusting MLP_WORKER_GPU to match the actual number of GPUs on your machine:

export MLP_WORKER_GPU=8             # GPUs per worker node
export MLP_WORKER_NUM=1             # total number of worker nodes
export MLP_ROLE_INDEX=0             # rank of this node
export MLP_WORKER_0_HOST=localhost  # rendezvous host (node 0)
export MLP_WORKER_0_PORT=29500      # rendezvous port (node 0)
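If you prefer not to hard-code the GPU count, a small sketch like the following can derive MLP_WORKER_GPU from nvidia-smi. This is a hypothetical convenience, not part of the repository; it falls back to 1 when no GPU is visible:

```shell
# Count the GPUs nvidia-smi reports (hypothetical helper; falls back to 1
# when nvidia-smi is unavailable or reports no GPUs).
MLP_WORKER_GPU=$( { nvidia-smi -L 2>/dev/null || true; } | grep -c '^GPU' )
[ "$MLP_WORKER_GPU" -ge 1 ] || MLP_WORKER_GPU=1
export MLP_WORKER_GPU
echo "Using ${MLP_WORKER_GPU} GPU(s)"
```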

Run scripts/eval.sh to perform LIBERO-10 evaluation:

bash scripts/eval.sh \
    configs/pi05/pi05_paligemma_libero10_full_finetune.py \
    work_dirs/pi05_paligemma_libero10_full_finetune/checkpoint_step_10000.pt # Replace with the path to your downloaded checkpoint
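Because the two positional arguments are a config path and a checkpoint path, a wrong checkpoint path would otherwise surface only after startup. The wrapper below (run_eval, a hypothetical convenience, not part of the repository) checks the checkpoint file exists before launching scripts/eval.sh:

```shell
# Hypothetical wrapper around scripts/eval.sh: verify the checkpoint file
# exists before launching, so path typos fail fast with a clear message.
run_eval() {
    config="$1"
    ckpt="$2"
    if [ ! -f "$ckpt" ]; then
        echo "Checkpoint not found: $ckpt" >&2
        return 1
    fi
    bash scripts/eval.sh "$config" "$ckpt"
}

# Usage (same arguments as the quick-start command above):
# run_eval configs/pi05/pi05_paligemma_libero10_full_finetune.py \
#     work_dirs/pi05_paligemma_libero10_full_finetune/checkpoint_step_10000.pt
```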

Viewing Results#

After evaluation, the following can be found in the checkpoint directory:

  • Evaluation logs (success rate, completion rate, and other metrics)

  • rollouts/: videos of the evaluation rollouts
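To quickly enumerate the saved rollout videos, a small helper like the following can be used. Only the rollouts/ directory name comes from the layout above; the helper itself is hypothetical:

```shell
# Hypothetical helper: list the files saved under a checkpoint directory's
# rollouts/ subdirectory (the one described above).
list_rollouts() {
    find "$1/rollouts" -type f 2>/dev/null | sort
}

# Example, using the checkpoint directory from the quick start:
list_rollouts work_dirs/pi05_paligemma_libero10_full_finetune
```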

For more parameter details, refer to :doc:`vla-eval`.