# Aloha Physical Deployment

## Overview
This document describes the complete workflow for deploying the FluxVLA model on the AgileX Aloha dual-arm platform for autonomous manipulation.
## 1. Power-On
Turn on the robot power supply and power on all four robotic arms.
Verify that all four robotic arms display green indicator lights, confirming successful power-on.
## 2. Software Environment Preparation

### 2.1 Create a Conda Environment

```bash
conda create -n aloha_sdk3.0 python=3.8.19
conda activate aloha_sdk3.0
```

### 2.2 Install Piper SDK

```bash
git clone https://github.com/agilexrobotics/piper_sdk.git
cd piper_sdk
pip3 install .
```
## 3. Launch the Robotic Arms

Launch the robotic arm enable and control nodes (`mode:=1` for follower mode, `auto_enable:=true` for automatic enabling):

```bash
conda activate aloha_sdk3.0
roslaunch piper start_ms_piper.launch mode:=1 auto_enable:=true
```
## 4. Launch the Cameras

Launch the three depth cameras:

```bash
roslaunch astra_camera multi_camera.launch
```
After launching, verify that the following topics are being published correctly using `rostopic list`:

| Camera | Color Topic | Depth Topic | Point Cloud Topic |
|---|---|---|---|
| Front (f) | | | |
| Hand (h) | | | |
| Left (l) | | | |
| Right (r) | | | |
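A quick way to confirm the camera topics is to compare the output of `rostopic list` against the expected set. The sketch below is a minimal, ROS-free helper; the topic names it generates follow a hypothetical `/camera_<id>/<stream>` convention and must be replaced with the names your launch file actually publishes:

```python
# Sketch: check captured `rostopic list` output for the expected camera topics.
# The topic names below are hypothetical placeholders -- substitute the actual
# names published by your camera launch configuration.

# Assumed naming convention: one namespace per camera id (f/h/l/r),
# with color, depth, and point-cloud streams each.
EXPECTED_TOPICS = [
    f"/camera_{cam}/{stream}"
    for cam in ("f", "h", "l", "r")
    for stream in ("color/image_raw", "depth/image_raw", "depth/points")
]


def missing_topics(published, expected=EXPECTED_TOPICS):
    """Return the expected topics absent from `published`.

    `published` is an iterable of topic names, e.g. the lines of
    `rostopic list` output.
    """
    have = {line.strip() for line in published}
    return [t for t in expected if t not in have]
```

For example, `missing_topics(open("topics.txt"))` on a file captured with `rostopic list > topics.txt` returns an empty list when every expected stream is up.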
Robotic arm joint state topics:
| Topic | Description |
|---|---|
| | Left master arm joint states |
| | Right master arm joint states |
| | Left follower arm joint states |
| | Right follower arm joint states |
| | Left follower arm end-effector pose |
| | Right follower arm end-effector pose |
| | Left arm trajectory command |
| | Right arm trajectory command |
| | Left arm model output trajectory |
| | Right arm model output trajectory |
## 5. Run FluxVLA Model Inference
Launch via "Run and Debug" in the IDE, or run directly from the command line:

```bash
WANDB_MODE=disabled HF_ENDPOINT=https://hf-mirror.com \
python scripts/inference_real_robot.py \
    --config configs/gr00t/gr00t_eagle_3b_aloha_fold_towel_3cam_4090_full_train.py \
    --ckpt-path ./work_dirs/work_dirs/419acaf40_gr00t_eagle_3b_aloha_fold_towel_3cam_4090_full_train_bs256/checkpoints/step-037356-epoch-06-loss=0.0617.pt
```
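The checkpoint filename encodes the training step, epoch, and loss. When several checkpoints are available, a small parser makes it easy to pick one programmatically, e.g. the lowest-loss file. This is an illustrative sketch whose pattern is inferred from the example path above, not an official utility:

```python
# Sketch: extract training metadata from a checkpoint filename.
# Pattern assumed from the example path: step-NNNNNN-epoch-NN-loss=X.XXXX.pt
import re


def parse_ckpt_name(path):
    """Return (step, epoch, loss) parsed from a checkpoint path, or None."""
    m = re.search(r"step-(\d+)-epoch-(\d+)-loss=([\d.]+)\.pt$", path)
    if m is None:
        return None
    return int(m.group(1)), int(m.group(2)), float(m.group(3))
```

For instance, `min(paths, key=lambda p: parse_ckpt_name(p)[2])` selects the checkpoint with the smallest recorded loss.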
Corresponding VSCode `launch.json` configuration:

```json
{
    "name": "Inference gr00t eagle",
    "type": "debugpy",
    "request": "launch",
    "program": "scripts/inference_real_robot.py",
    "console": "integratedTerminal",
    "args": [
        "--config", "configs/gr00t/gr00t_eagle_3b_aloha_fold_towel_3cam_4090_full_train.py",
        "--ckpt-path", "./work_dirs/work_dirs/419acaf40_gr00t_eagle_3b_aloha_fold_towel_3cam_4090_full_train_bs256/checkpoints/step-037356-epoch-06-loss=0.0617.pt"
    ],
    "env": {
        "WANDB_MODE": "disabled",
        "HF_ENDPOINT": "https://hf-mirror.com"
    }
}
```
## 6. Robotic Arm Reset

Upon task completion, or when a reset is required, send the reset command to return both arms to the zero position (with the end-effector gripper maintaining a slight opening of `0.1`):
Left arm reset:
```bash
rostopic pub /master/joint_left sensor_msgs/JointState "header:
  seq: 0
  stamp: {secs: 0, nsecs: 0}
  frame_id: ''
name: ['']
position: [0,0,0,0,0,0,0.1]
velocity: [0]
effort: [0]"
```
Right arm reset:
```bash
rostopic pub /master/joint_right sensor_msgs/JointState "header:
  seq: 0
  stamp: {secs: 0, nsecs: 0}
  frame_id: ''
name: ['']
position: [0,0,0,0,0,0,0.1]
velocity: [0]
effort: [0]"
```
The 7 values in `position` correspond to 6 joint angles and 1 gripper opening value. Setting all joints to zero returns the arm to its initial pose, while `0.1` keeps the gripper in a slightly open state.
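If you script the reset instead of publishing by hand (for example from a rospy node), the `position` payload can be assembled as below. This is a minimal ROS-free sketch of the array layout only; the constant names are illustrative, and the `0.1` gripper value comes from the reset command above:

```python
# Sketch: build the 7-element `position` array used by the reset command:
# 6 joint angles (zeroed) followed by 1 gripper opening value.
NUM_JOINTS = 6
GRIPPER_SLIGHTLY_OPEN = 0.1  # value used in the rostopic reset command


def reset_position(gripper=GRIPPER_SLIGHTLY_OPEN):
    """Return the joint-state position list for the zero pose."""
    return [0.0] * NUM_JOINTS + [gripper]
```

The returned list can be assigned to `JointState.position` before publishing to `/master/joint_left` or `/master/joint_right`.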