11. π₀ (Pi-Zero) Model#

11.1 Overview#

π₀ is a Vision-Language-Action (VLA) model developed by Physical Intelligence for general-purpose robot control. Built on large-scale pre-training and flow-matching action generation, it outputs smooth 50 Hz motor trajectories, enabling robots with diverse morphologies to execute dexterous manipulation tasks.

Unlike traditional policies, π₀ frames control as a flow-matching denoising process: starting from random noise, it progressively refines the sample into feasible motor commands, delivering high efficiency, precision, and real-world adaptability.
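
To make the idea concrete, the sketch below shows flow-matching action generation in miniature: a learned velocity field is integrated with a few Euler steps, carrying a noise sample toward an action chunk. This is only an illustration of the technique, not the openpi implementation; velocity_field, the step count, and the shapes are placeholders.

import numpy as np

def velocity_field(a_t, t, observation):
    # Placeholder for the learned network v_theta(a_t, t | observation);
    # it simply pulls the sample toward zero so the sketch runs end to end.
    return -a_t

def generate_actions(observation, horizon=50, action_dim=7, num_steps=10, seed=0):
    # Start from Gaussian noise over a whole action chunk (horizon x action_dim).
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((horizon, action_dim))
    dt = 1.0 / num_steps
    # Euler integration of the flow from t = 0 (noise) to t = 1 (actions).
    for k in range(num_steps):
        a = a + dt * velocity_field(a, k * dt, observation)
    return a

actions = generate_actions(observation=None)
print(actions.shape)  # (50, 7) – one 50-step chunk of 7-dimensional commands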

11.2 GPU Requirements#

To run the models in this repository, you need an NVIDIA GPU that meets or exceeds the specifications below. The figures assume a single GPU; you can reduce per-GPU memory by enabling model parallelism (set FSDP_DEVICES in the training config).

Note: the current training scripts do not support multi-node training.

| Mode | Min. VRAM | Example Cards |
| ------------------ | --------- | ------------------- |
| Inference | > 8 GB | RTX 3090/4090 |
| Fine-Tuning (LoRA) | > 22.5 GB | RTX 3090/4090 |
| Fine-Tuning (Full) | > 70 GB | A100 (80GB) / H100 |
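
If full fine-tuning does not fit on a single card, shard the training state across several local GPUs through the training config. A minimal sketch, assuming FSDP_DEVICES is a plain constant in your task's config.py (the config file itself is covered in the fine-tuning section below):

# data/<task>/config.py (illustrative excerpt)
FSDP_DEVICES = 2  # shard model/optimizer state across 2 local GPUs (1 = no sharding)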

11.3 Environment Setup#

Contact after-sales support to obtain the correct openpi archive. Extract it and enter the folder:

cd openpi

Install basic dependencies:

sudo apt install python3-venv clang
python3 -m pip install --user pipx
pipx install uv -i http://mirrors.aliyun.com/pypi/simple
# Tip: Configure a local proxy if your network requires it.
# export {HTTP_PROXY,HTTPS_PROXY,ALL_PROXY,http_proxy,https_proxy,all_proxy}=http://127.0.0.1:7890
# Do not use a conda environment—uv compiles some packages from source.
GIT_LFS_SKIP_SMUDGE=1 uv sync
GIT_LFS_SKIP_SMUDGE=1 uv pip install -e .
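
After the sync finishes, it can be worth confirming that the environment actually sees your GPU before moving on. A minimal check, saved as an arbitrary file (here check_env.py) and run with uv run python check_env.py:

# check_env.py – verify that JAX was installed with CUDA support
import jax

print("jax version:", jax.__version__)
print("devices:", jax.devices())  # expect a GPU/CUDA device, not only CpuDevice entries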

Install the AIRBOT Play Python SDK (run the command from inside the openpi directory):

uv pip install /path/to/your/airbot_py-5.1.4-py3-none-any.whl
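
To confirm the wheel landed in the uv environment, query its installed version; the distribution name airbot_py is inferred from the wheel filename above:

# Run with: uv run python check_sdk.py (the filename is arbitrary)
from importlib.metadata import version

print("airbot_py version:", version("airbot_py"))  # expect 5.1.4 for the wheel above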

Install the inference dependencies (for the data-collection scripts and instructions, please contact after-sales support):

sudo apt-get install -y libturbojpeg gcc python3-dev v4l-utils
uv pip install -e /path/to/your/data-collection/package"[all]" -i https://pypi.mirrors.ustc.edu.cn/simple

11.4 Fine-Tuning#

11.4.1 Prepare Data#

Copy the collected data folder to the openpi root directory. Data for each task are normally stored in a single folder named after the task; the conversion script, however, allows multiple folders per task so that acquisitions made at different sites or times can be kept separate. Assume the task is named 1-1-example. Create the following directory tree inside openpi:

mkdir -p data/1-1-example/station0

Copy all .mcap files from the acquisition folder into station0:

cp path/to/your/data/1-1-example/*.mcap data/1-1-example/station0
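
Before converting, it can help to confirm the layout and count the recorded episodes. A small stdlib-only check, following the example paths above:

from pathlib import Path

task_dir = Path("data/1-1-example")
for station in sorted(p for p in task_dir.iterdir() if p.is_dir()):
    episodes = sorted(station.glob("*.mcap"))
    print(f"{station.name}: {len(episodes)} .mcap episodes")
# Expect one line per sub-folder (e.g. station0) with a non-zero episode count.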

11.4.2 Configure Training Parameters#

Example configuration files are located in examples/airbot, for instance:

  • config_1-1_example.py – single-arm task
  • config_ptk_example.py – dual-arm task
  • config_mmk_example.py – multi-modal keypoint task

Copy the file that matches your task into the data folder and rename it to config.py, e.g. data/1-1-example/config.py. Edit the parameters to match your dataset, ensuring that:

  • TASK_NAME is identical to the task name recorded in the data;
  • FOLDERS matches the names of the data sub-folders.

Descriptions of all other parameters are provided in the inline comments.
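
For orientation, the relevant lines of such a config.py might look like the sketch below. Only TASK_NAME and FOLDERS are named by this guide; every other parameter should be taken from the inline comments of the example file you copied:

# data/1-1-example/config.py (illustrative excerpt)
TASK_NAME = "1-1-example"   # must match the task name recorded during data collection
FOLDERS = ["station0"]      # one entry per data sub-folder under data/1-1-example/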

11.4.3 Convert AIRBOT Mcap Data to LeRobot Format#

Run the converter, specifying the data-folder path; for the previously mentioned task 1-1-example:

uv run examples/airbot/convert_mcap_data_to_lerobot.py --data-dir data/1-1-example

After conversion, the dataset is saved under:

~/.cache/huggingface/lerobot/

Note: if TASK_NAME in config.py differs from the one used during data collection, the conversion will fail.
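
To confirm the conversion succeeded, list the cache directory; the sub-folder name depends on the repository id the converter assigns, so the check below simply enumerates whatever is there:

from pathlib import Path

cache = Path.home() / ".cache" / "huggingface" / "lerobot"
if cache.exists():
    for entry in sorted(p for p in cache.iterdir() if p.is_dir()):
        print("converted dataset:", entry.name)
else:
    print("no converted datasets found under", cache)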

11.4.4 Compute Dataset Statistics#

Training requires normalization statistics; compute them with:

CUDA_VISIBLE_DEVICES=0 uv run examples/airbot/compute_norm_stats.py --config-path data/1-1-example/config.py

The generated statistics are stored in the assets directory.
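
The statistics are per-dimension normalization values: states and actions are normalized before training, and the model's outputs are de-normalized at inference time. Conceptually (a simplified numpy sketch with fake data, not the project's actual implementation):

import numpy as np

# Stand-in for every action vector in the dataset, stacked row-wise.
actions = np.random.default_rng(0).normal(loc=0.3, scale=2.0, size=(10_000, 7))

mean = actions.mean(axis=0)
std = actions.std(axis=0) + 1e-6      # small epsilon avoids division by zero

normalized = (actions - mean) / std   # what the model sees during training
restored = normalized * std + mean    # applied to model outputs at inference

print(np.allclose(restored, actions))  # True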

11.4.5 Model Training#

Launch training with:

XLA_PYTHON_CLIENT_MEM_FRACTION=0.9 uv run examples/airbot/airbot_train.py --config-path data/1-1-example/

Arguments:

  • XLA_PYTHON_CLIENT_MEM_FRACTION=0.9 – allows JAX to use 90% of GPU memory (the default is 75%)
  • --config-path – directory or file path of the configuration
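
XLA_PYTHON_CLIENT_MEM_FRACTION is read when JAX initializes, so it must be set before any JAX code runs. If you start training from your own Python wrapper instead of the shell line above, set it via os.environ first (a sketch under that assumption):

import os

# Must be set before `import jax` (or anything that imports jax),
# otherwise the default memory fraction is used.
os.environ["XLA_PYTHON_CLIENT_MEM_FRACTION"] = "0.9"

import jax  # imported after setting the flag on purpose
print(jax.devices())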

Logs are streamed to the terminal and saved in the checkpoints folder. You will be prompted to log in to Weights & Biases (wandb) for live monitoring; follow the prompts to register and log in. Do not select “3 - no visualization,” or an error may occur; if the login page fails to load, enable a VPN.

11.5 Inference#

Start the robot arms (make sure they have been bound; see the data-collection doc):

airbot_server -i can_left -p 50051
airbot_server -i can_right -p 50053

For single-arm tasks run only one of the commands above.

Before launching the inference script, make sure no other program (e.g. the data-collection tools) is using the cameras and that no proxy is set; otherwise the arm connection will fail. Clear the proxy with unset HTTP_PROXY HTTPS_PROXY ALL_PROXY http_proxy https_proxy all_proxy (a minimal pre-flight check is sketched after the parameter list below). Then launch the inference script:

  • Single-arm task:
uv run examples/airbot/airbot_inference_sync.py policy-config:local-policy-config \
    --policy-config.config-path data/1-1-example \
    --policy-config.checkpoint-dir checkpoints/1-1-example/9000 \
    --robot-config.robot_groups "" \
    --robot-config.robot_ports 50051 \
    --robot-config.camera-index 2 4 \
    --reset-action 0 0 0 0 0 0 0
  • Dual-arm task:
uv run examples/airbot/airbot_inference_sync.py policy-config:local-policy-config \
    --policy-config.config-path data/ptk_example \
    --policy-config.checkpoint-dir checkpoints/ptk_example/9000 \
    --robot-config.camera-index 2 4 6 \
    --reset-action 0 0 0 0 0 0 0 0 0 0 0 0 0 0

Where:

  • robot_ports: port numbers of the robots; change them to match the airbot_server ports used above
  • checkpoint-dir: path to the trained checkpoint directory; change it to your actual location
  • camera-index: camera indices in the order: environment camera, left-arm camera, right-arm camera
  • reset-action: initial robot pose; set to the same pose used during data collection
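
As noted above, a leftover proxy variable or a busy/missing camera is the most common reason the script fails to connect. A minimal pre-flight check (stdlib only; it assumes Linux /dev/video* device nodes, and the index list is just the single-arm example above):

import os
from pathlib import Path

# 1. Proxy variables must be unset, otherwise the arm connection will fail.
proxy_vars = ["HTTP_PROXY", "HTTPS_PROXY", "ALL_PROXY",
              "http_proxy", "https_proxy", "all_proxy"]
leftover = [v for v in proxy_vars if os.environ.get(v)]
print("proxy variables still set:", leftover or "none")

# 2. Camera indices passed via --robot-config.camera-index must exist as /dev/video* nodes.
for index in (2, 4):  # indices from the single-arm example above
    node = Path(f"/dev/video{index}")
    print(node, "exists" if node.exists() else "MISSING")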

Wait for the inference script to start. When it is ready, the terminal shows Press 'Enter' to continue or 'q' and 'Enter' to quit...; press Enter to begin inference.

After execution, press q and Enter to exit.