Train robotics AI models
A guide to training AI models that control robots
The phospho starter pack makes it easy to train robotics AI models by integrating with LeRobot from Hugging Face.
In this guide, we’ll show you how to train the ACT (Action Chunking Transformer) model using the phospho starter pack and LeRobot.
What is LeRobot?
LeRobot is a platform designed to make real-world robotics more accessible for everyone. It provides pre-trained models, datasets, and tools in PyTorch.
It focuses on state-of-the-art approaches in imitation learning and reinforcement learning.
With LeRobot, you get access to:
- Pretrained models for robotics applications
- Human-collected demonstration datasets
- Simulated environments to test and refine AI models
Step by step guide
In this guide, we will use the phospho starter pack to record a dataset and upload it to Hugging Face.
Prerequisites
- You need an assembled SO-100 robot arm and cameras. Get the phospho starter pack here.
- Install the phosphobot software
- Connect your cameras to the computer. Start the phosphobot server.
- Complete the quickstart and check that you can control your robot.
- You have the phosphobot teleoperation app installed on your Meta Quest 2, Pro, 3, or 3S.
- You have a device to train your model. We recommend using a GPU for faster training.
1. Set up your Hugging Face token
To sync datasets, you need a Hugging Face token with write access. Follow these steps to generate one:
- Log in to your Hugging Face account. You can create one here for free.
- Go to Profile and click Access Tokens in the sidebar.
- Select the Write option to grant write access to your account. This is necessary for creating new datasets and uploading files. Name your token and click Create token.
- Copy the token and save it in a secure place. You will need it later.
- Make sure the phosphobot server is running. Open a browser and access `localhost`, or `phosphobot.local` if you’re using the control module. Then go to the Admin Configuration.
- Paste the Hugging Face token and save it.
2. Set your dataset name and parameters
Go to the Admin Configuration page of your phosphobot dashboard. Here you can adjust the recording settings. The most important are:
- Dataset Name: The name of the dataset you want to record.
- Task: A text description of the task you’re about to record. For example: “Pick up the lego brick and put it in the box”. This helps you remember what you recorded and is used by some AI models to understand the task.
- Camera: The cameras you want to record. By default, all cameras are recorded. You can select the cameras to record in the Admin Configuration.
- Video Codec: The video codec used to record the videos. The default is `AVC1`, which is the most efficient codec. If you’re having compatibility issues due to unavailable codecs (e.g. on Linux), switch to `mp4v`, which is more widely compatible.
3. Control the robot in the Meta Quest app
The easiest way to record a dataset is to use the Meta Quest app.
- In the Meta Quest, open the phospho teleop application. Wait a moment, then you should see a row displaying phosphobot or your computer name. Click the Connect button using the `Trigger` button.
  - Make sure you’re connected to the same WiFi as the phosphobot server.
  - If you don’t see the server, check the IP address of the server in the phosphobot dashboard and enter it manually.
- After connecting, you’ll see the list of connected cameras and recording options.
- Move the windows with the `Grip` button to organize your space.
- Enable preview to see the camera feed. Check the camera angles and adjust their positions if needed.
- Press `A` once to start teleoperation and begin moving your controller.
  - The robot will naturally follow the movement of your controller. Press the `Trigger` button to close the gripper.
  - Press `A` again to stop the teleoperation. The robot will stop.
- Press `B` to start recording. You can leave the default settings for your first attempt.
  - Press `B` again to stop the recording.
  - Press `Y` (left controller) to discard the recording.
- Continue teleoperating and stop the recording by pressing `B` when you’re done.
- The recording is automatically saved in LeRobot v2 format and uploaded to your Hugging Face account. Go to your Hugging Face profile to see the uploaded datasets. You can view them using the LeRobot Dataset Visualizer.

The dataset visualizer only works with the `AVC1` video codec. If you used another codec, you may see black screens in the video preview. You can preview the video files directly in a video player by opening your recording locally: `~/phosphobot/recordings/lerobot_v2/DATASET_NAME/video`.
4. Train your first model
Train GR00T-N1-2B in one click with phosphobot cloud
To train a model, you can use the phosphobot cloud. This is the quickest way to train a model.
- Enter the name of your dataset on Hugging Face (example: `PLB/simple-lego-pickup-mono-2`) in the AI Training and Control section.
- Click on Train AI Model. Your model starts training. Training can take up to 3 hours. Follow the training progress using the View trained models button. Your model is uploaded to Hugging Face on the phospho-app account.
- To control your robot with the trained model, go to the Control your robot section and enter the name of your model.
Train with phosphobot cloud
Learn how to train a model with phosphobot cloud
Control your robot with GR00T-N1-2B
Learn about controlling your robot with GR00T-N1-2B and phosphobot cloud
Train an ACT model on Replicate
Training ACT on your own machine can be challenging: video codecs, GPU acceleration, long training times, and other factors can make local training difficult.
To help you, we provide a training script that you can run on Replicate, a cloud platform that offers GPU instances for training AI models and running inference.
You’ll need to provide the Hugging Face dataset ID to train the policy on, along with your Hugging Face token.
Train your ACT model on Replicate
Train an ACT model locally with LeRobot
You need a GPU with at least 16GB of memory to train the model.
This guide will show you how to train the ACT model locally using LeRobot for your SO-100 robot.
- Install uv, the modern Python package manager.
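For reference, the one-line installer from the uv documentation:

```bash
# Install uv on macOS/Linux (see the uv docs for Windows)
curl -LsSf https://astral.sh/uv/install.sh | sh
```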
- Set up training environment.
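A minimal sketch of the environment setup, assuming you train from a LeRobot source checkout; the exact install steps may differ across LeRobot versions:

```bash
# Clone LeRobot and install it into a fresh virtual environment
git clone https://github.com/huggingface/lerobot.git
cd lerobot
uv venv
source .venv/bin/activate
uv pip install -e .
```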
- (MacOS only) Set environment variables for torch compatibility:
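For example, to let PyTorch fall back to CPU for operators that are not yet supported on Apple’s MPS backend:

```bash
# Fall back to CPU for ops missing from the MPS backend
export PYTORCH_ENABLE_MPS_FALLBACK=1
```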
- (Optional) Add the Weights & Biases integration for training metrics tracking:
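For example:

```bash
# Install Weights & Biases and authenticate with your account
uv pip install wandb
wandb login
```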
- Run the training script, adjusting parameters based on your hardware:
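A sketch of the training command. The config names (`act_so100_real`, `so100_real`) and flag syntax are assumptions that vary between LeRobot releases, so check `python lerobot/scripts/train.py --help` for your version:

```bash
# Train ACT on your recorded dataset (run from the lerobot repo root).
# Lower the batch size if you run out of GPU memory; use device=mps on Apple Silicon.
python lerobot/scripts/train.py \
  dataset_repo_id=YOUR_HF_USERNAME/DATASET_NAME \
  policy=act_so100_real \
  env=so100_real \
  device=cuda \
  wandb.enable=true
```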
Trained models will be saved in `lerobot/outputs/train/`.
- (Optional) Upload the model to Hugging Face. Log in to the Hugging Face CLI:
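For example:

```bash
# Authenticate the CLI with your Hugging Face token
huggingface-cli login
```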
The Hugging Face model hub uses Git LFS under the hood. Push the model to Hugging Face:
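A sketch using `huggingface-cli upload`; the repository name and checkpoint path are assumptions based on the output directory above:

```bash
# Upload the final checkpoint to a model repo under your account
huggingface-cli upload YOUR_HF_USERNAME/act_so100 \
  outputs/train/act_so100/checkpoints/last/pretrained_model
```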
5. Control your robot with the ACT model
- Launch the ACT inference server (run this on your GPU machine):
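The original server script is not reproduced here; below is a minimal sketch of what such a server can look like, wrapping a trained ACT policy behind an HTTP endpoint with FastAPI. The import path, checkpoint location, and observation keys are assumptions that depend on your LeRobot version and camera setup:

```python
# Minimal ACT inference server sketch (not the official phosphobot server).
# Assumes lerobot, fastapi, and uvicorn are installed, and a trained
# checkpoint sits in outputs/train/act_so100/checkpoints/last/pretrained_model.
import torch
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel
from lerobot.common.policies.act.modeling_act import ACTPolicy  # path may differ per version

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"
policy = ACTPolicy.from_pretrained(
    "outputs/train/act_so100/checkpoints/last/pretrained_model"
)
policy.to(DEVICE)
policy.eval()

app = FastAPI()

class Observation(BaseModel):
    state: list[float]  # joint positions read from the robot
    image: list         # camera frame as a nested [H][W][3] list of floats in [0, 1]

@app.post("/act")
def act(obs: Observation):
    # Build the batch with the observation keys the policy was trained on.
    # "observation.images.main" is a placeholder; use your dataset's camera key.
    batch = {
        "observation.state": torch.tensor([obs.state], device=DEVICE),
        "observation.images.main": torch.tensor([obs.image], device=DEVICE).permute(0, 3, 1, 2),
    }
    with torch.no_grad():
        action = policy.select_action(batch)
    return {"action": action.squeeze(0).tolist()}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8080)
```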
- Make sure the phosphobot server is running to control your robot:
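Assuming the phosphobot CLI is installed, something like:

```bash
# Start the phosphobot server (the exact command may depend on how you installed it)
phosphobot run
```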
- Create the inference client script (copy the content into `my_model/client.py`):
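The original client script is not reproduced here; below is a hedged sketch of the control loop it implements: read the robot state from the phosphobot HTTP API, query the inference server, and send the returned action back to the robot. The endpoint paths and payload fields are assumptions, so check the phosphobot API reference for the actual routes:

```python
# my_model/client.py -- sketch of an inference control loop.
# Endpoint paths and payload fields below are assumptions; consult the
# phosphobot API reference for the real routes on your version.
import time

import requests

PHOSPHOBOT_URL = "http://localhost:80"        # phosphobot server controlling the robot
INFERENCE_URL = "http://GPU_MACHINE_IP:8080"  # ACT inference server from the previous step

while True:
    # 1. Read the current joint positions (hypothetical endpoint)
    state = requests.post(f"{PHOSPHOBOT_URL}/joints/read").json()

    # 2. Ask the model for the next action (matches the server sketch above)
    action = requests.post(
        f"{INFERENCE_URL}/act",
        json={"state": state["angles"], "image": []},  # replace [] with a real camera frame
    ).json()["action"]

    # 3. Apply the action to the robot (hypothetical endpoint)
    requests.post(f"{PHOSPHOBOT_URL}/joints/write", json={"angles": action})

    time.sleep(1 / 30)  # ~30 Hz control loop
```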
- Run the inference script:
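For example (the sketch above needs the `requests` package installed):

```bash
python my_model/client.py
```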
Stop the script by pressing `Ctrl + C`.
What’s next?
Next, you can use the trained model to control your robot. Head to our guide to get started!