phosphobot

A community-driven platform for robotics enthusiasts to share and explore creative projects built with the phospho starter pack.


Overview

This repository contains demo code and community projects developed using the phospho starter pack. Whether you're a beginner or an experienced developer, you can explore existing projects or contribute your own creations.

Getting started

  1. Get Your Dev Kit: Purchase your Phospho starter pack at robots.phospho.ai. Unbox it and set it up following the instructions in the box.

  2. Install the phosphobot server and run it:

# Install it this way
curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash
# Start it this way
phosphobot run
# Upgrade it with brew or with apt
# sudo apt update && sudo apt install phosphobot
# brew update && brew upgrade phosphobot
  3. Use the phosphobot python client to interact with the phosphobot server API (see the minimal example below).
pip install --upgrade phosphobot
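As a quick sanity check, here is a minimal sketch that connects to a running phosphobot server and reads the robot's joint positions. It uses the same PhosphoApi client and read_joints call as the full control example later in this README:

from phosphobot.api.client import PhosphoApi

# Connect to the local phosphobot server
client = PhosphoApi(base_url="http://localhost:80")

# Read the current joint positions (in radians)
state = client.control.read_joints()
print(state.angles_rad)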

We release new versions very often.

How to train ACT with LeRobot?

  1. Record a Dataset with phosphobot: record a dataset using the app, repeating the same gesture 30-50 times (depending on the task complexity). See the docs for details.

  2. Install LeRobot. LeRobot by Hugging Face is a research-oriented AI training library that is still a work in progress. We made a few workarounds to make sure it works reliably. On macOS, here is a step-by-step guide.

2.1 Install uv, a Python package and environment manager.

# On macOS and Linux.
curl -LsSf https://astral.sh/uv/install.sh | sh

2.2 Create a new directory and install requirements.

mkdir my_model
cd my_model
uv init
# Add the phosphobot client and phospho's LeRobot fork to the environment
uv add phosphobot git+https://github.com/phospho-app/lerobot
# Clone the fork locally to get the training scripts
git clone https://github.com/phospho-app/lerobot
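To check that the environment resolved correctly, you can run a quick import test (a minimal sketch; it assumes the fork keeps the upstream import name lerobot):

# save as check_env.py and run with: uv run check_env.py
import lerobot
import phosphobot

print("lerobot and phosphobot imported successfully")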

2.3 On macOS with Apple Silicon, you need to set this variable for torchcodec to work.

export DYLD_LIBRARY_PATH="/opt/homebrew/lib:/usr/local/lib:$DYLD_LIBRARY_PATH"

2.4 Run the LeRobot training script. For example, on a Mac with Apple Silicon:

uv run lerobot/lerobot/scripts/train.py \
 --dataset.repo_id=PLB/simple-lego-pickup-mono-2 \
 --policy.type=act \
 --output_dir=outputs/train/phosphobot_test \
 --job_name=phosphobot_test \
 --policy.device=mps

Change dataset.repo_id to the ID of your dataset on Hugging Face.

Change the --policy.device flag based on your hardware: cuda if you have an NVIDIA GPU, mps if you use an Apple Silicon Mac, and cpu otherwise.
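If you prefer to pick the device programmatically instead of hard-coding the flag, here is a small sketch using PyTorch's built-in detection helpers (torch is installed as a LeRobot dependency):

import torch

# Mirror the --policy.device choice: prefer CUDA, then Apple Silicon (MPS), then CPU
if torch.cuda.is_available():
    device = "cuda"
elif torch.backends.mps.is_available():
    device = "mps"
else:
    device = "cpu"

print(f"--policy.device={device}")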

  3. Use the ACT model to control your robot:

3.1 Launch the ACT server to run inference. This should run on a machine with a powerful GPU. Check out the /inference folder for more details.

curl -o server.py https://raw.githubusercontent.com/phospho-app/phosphobot/refs/heads/main/inference/ACT/server.py
uv run server.py --model_id LegrandFrederic/Orange-brick-in-black-box # Replace with <YOUR_HF_MODEL_ID>

3.2 Make sure the phosphobot server is running to control your robot:

# Install it this way
curl -fsSL https://raw.githubusercontent.com/phospho-app/phosphobot/main/install.sh | bash
# Start it this way
phosphobot run

3.3 Create a script called my_model/client.py and copy-paste the content below.

# /// script
# requires-python = ">=3.11"
# dependencies = [
#     "phosphobot",
# ]
# ///
from phosphobot.camera import AllCameras
from phosphobot.api.client import PhosphoApi
from phosphobot.am import ACT

import time
import numpy as np

# Connect to the phosphobot server
client = PhosphoApi(base_url="http://localhost:80")

# Connect to all available cameras
allcameras = AllCameras()

# Need to wait for the cameras to initialize
time.sleep(1)

# Instantiate the model
model = ACT()

while True:
    images = [
        allcameras.get_rgb_frame(camera_id=0, resize=(240, 320)),
        allcameras.get_rgb_frame(camera_id=1, resize=(240, 320)),
        allcameras.get_rgb_frame(camera_id=2, resize=(240, 320)),
    ]

    # Get the robot state
    state = client.control.read_joints()

    inputs = {"state": np.array(state.angles_rad), "images": np.array(images)}

    # Go through the model
    actions = model(inputs)

    for action in actions:
        # Send the new joint position to the robot
        client.control.write_joints(angles=action.tolist())
        # Wait to respect frequency control (30 Hz)
        time.sleep(1 / 30)

3.4 Run this script to control your robot using the model:

uv run client.py

For the full detailed instructions and other models (Pi0, OpenVLA, ...), refer to the docs.

Join the Community

Connect with other developers and share your experience in our Discord community.

Community Projects

Explore projects created by our community members in the code_examples directory. Each project includes its own documentation and setup instructions.

Support

Need help? Reach out on our Discord community.

License

MIT License


Made with 💚 by the Phospho community
