
📸 olvx_playground_camera

A straightforward ROS 2 package written in Python, featuring multiple nodes to facilitate working with the Olive Camera, enhanced with TPU acceleration.

The camera uses the default internal calibration file. If you want to recalibrate your camera, please follow the steps outlined in the documentation.

https://olive-robotics.com/docs2/olixvision-camera/#camera-calibration

Supported Embedded Libraries for the Olive AI Camera

Coral, OpenCV, ROS 2, Python 3


Installation

git clone --recurse-submodules git@github.com:olive-robotics/olvx_playground_camera.git

This command ensures that all example submodules are fetched as well.

Each project has its own set of dependencies. To run a project, first navigate into its folder, and run:

pip install -r requirements.txt

Apps

0. Hello World App (TPU Embedded App)

This example is a simple parrot detector that lets you test the hardware and verify that the Coral TPU is enabled.

Sample output:

Olive TPU Hello World v0.1
step1
step2
step3
step4
Ara macao (Scarlet Macaw): 0.75781

1. Object Recognition (TPU Embedded App)

This example demonstrates object detection on a ROS 2 image topic, drawing a bounding box around each detected object.

Object Detection Image

cd examples/01-ObjectDetection/src
python3 app_node_object_detection.py

📋 Object List

person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush.
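The per-detection drawing step can be sketched in plain NumPy. This is an illustrative sketch only, not the node's actual code; the function name, box format, and color are assumptions:

```python
import numpy as np

def draw_box(img: np.ndarray, box, color=(0, 255, 0), thickness: int = 2) -> np.ndarray:
    """Paint a rectangular border for one detection onto a copy of an RGB frame.

    `box` is (x0, y0, x1, y1) in pixel coordinates.
    """
    x0, y0, x1, y1 = box
    out = img.copy()
    out[y0:y0 + thickness, x0:x1] = color   # top edge
    out[y1 - thickness:y1, x0:x1] = color   # bottom edge
    out[y0:y1, x0:x0 + thickness] = color   # left edge
    out[y0:y1, x1 - thickness:x1] = color   # right edge
    return out

frame = np.zeros((120, 160, 3), dtype=np.uint8)   # stand-in for a camera frame
framed = draw_box(frame, (20, 30, 100, 90))
```

In the real node, the box coordinates would come from the TPU detection results before the annotated frame is republished.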

🔗 More Information: ObjectDetection.md

2. Skeleton Detection (TPU Embedded App)

This example uses the PoseNet model to detect human poses from a ROS 2 image topic, pinpointing the location of body parts such as elbows, shoulders, and feet.

Skeleton Detection Image

cd examples/02-SkeletonDetection/src
python3 app_node_skeleton_posenet.py

πŸšΆβ€β™‚οΈ Body Point List

nose, leftEye, rightEye, leftEar, rightEar, leftShoulder, rightShoulder, leftElbow, rightElbow, leftWrist, rightWrist, leftHip, rightHip, leftKnee, rightKnee, leftAnkle, rightAnkle.
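Turning those keypoints into a drawable skeleton amounts to connecting confident joint pairs. A hedged sketch follows; the keypoint names match the list above, but the edge list, dict format, and score threshold are illustrative assumptions, not the node's actual data structures:

```python
# Illustrative subset of limb connections between PoseNet body points.
SKELETON_EDGES = [
    ("leftShoulder", "rightShoulder"),
    ("leftShoulder", "leftElbow"), ("leftElbow", "leftWrist"),
    ("rightShoulder", "rightElbow"), ("rightElbow", "rightWrist"),
    ("leftShoulder", "leftHip"), ("rightShoulder", "rightHip"),
    ("leftHip", "rightHip"),
    ("leftHip", "leftKnee"), ("leftKnee", "leftAnkle"),
    ("rightHip", "rightKnee"), ("rightKnee", "rightAnkle"),
]

def skeleton_segments(keypoints: dict, min_score: float = 0.3):
    """Return (x0, y0, x1, y1) line segments joining confidently detected joints.

    `keypoints` maps a body-point name to an (x, y, score) tuple.
    """
    segments = []
    for a, b in SKELETON_EDGES:
        if a in keypoints and b in keypoints:
            xa, ya, sa = keypoints[a]
            xb, yb, sb = keypoints[b]
            if sa >= min_score and sb >= min_score:
                segments.append((xa, ya, xb, yb))
    return segments

example_pose = {
    "leftShoulder": (10, 10, 0.9),
    "leftElbow": (12, 30, 0.8),
    "leftWrist": (14, 50, 0.1),   # low confidence: this joint is dropped
}
segments = skeleton_segments(example_pose)
```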

🔗 More Information: SkeletonDetection.md

3. Gesture Recognition (TPU Embedded App)

An example showcasing an MLP neural network model trained to recognize gesture classes.

Gesture Recognition Image

📡 ROS 2 Topic

The detection results are published on the topic /gesturerecognition. Use ros2 topic list or ros2 topic echo /gesturerecognition to check whether messages are being published and verify that the device is operational.

🤏 Gestures

Both hands down, both hands up, left down / right up, right down / left up, left down / right side, right down / left side, hands on hip.
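The classification step can be sketched as a small NumPy forward pass. The real model's architecture, input features, weights, and class order are not documented here, so everything below is an assumption for illustration (random weights, 17 keypoints flattened to a 34-dimensional input):

```python
import numpy as np

GESTURES = [
    "both hands down", "both hands up", "left down / right up",
    "right down / left up", "left down / right side",
    "right down / left side", "hands on hip",
]

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden layer with ReLU, softmax output over the gesture classes."""
    h = np.maximum(0.0, x @ w1 + b1)       # hidden activations
    logits = h @ w2 + b2
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.normal(size=34)                    # 17 pose keypoints x (x, y) coordinates
w1, b1 = rng.normal(size=(34, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, len(GESTURES))), np.zeros(len(GESTURES))
probs = mlp_forward(x, w1, b1, w2, b2)
predicted = GESTURES[int(np.argmax(probs))]
```

In the actual app, `x` would be built from the PoseNet keypoints of the previous example rather than random numbers.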

🔗 More Information: GestureRecognition.md

4. April Tag Detection (CPU Embedded App)

Method 1: Manual

April Tag Detection Image

Example forked from: https://github.com/ros-misc-utilities/apriltag_detector

A4 Tag Dataset: https://github.com/rgov/apriltag-pdfs

Download the OpenCV 4 build compiled for the Olive Camera: Download

Place the folder in the home directory, following this naming:

/home/olive/opencv_install/opencv-4.x/

Update your .bashrc and add these lines to it:

source /opt/olive/script/env.sh
export PATH="/home/olive/.local/bin:$PATH"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/olive/opencv_install/opencv-4.x/build/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/olive/lib
export OpenCV_DIR=/home/olive/opencv_install/opencv-4.x/build
export PYTHONPATH=~/opencv_install/opencv-4.x/build/lib/python3:$PYTHONPATH

Then install the AprilTag 3 library from examples/04-AprilTag/lib/apriltag:

rm -r build
cmake -B build -DCMAKE_BUILD_TYPE=Release
sudo cmake --build build --target install

Then build the ROS 2 project in examples/04-AprilTag/lib/workspace. You can also skip building if you don't want to change the code.

source install/setup.bash
colcon build

Then run it with:

ros2 launch apriltag_detector node.launch.py

Method 2: Preinstalled (patch > 1214)

Recent software updates ship with the AprilTag detector preinstalled; you can auto-run it by uncommenting a line in olive-app-loader.sh.

cd /usr/bin
nano olive-app-loader.sh

Change the last line from

# ros2 launch apriltag_detector node.launch.py

to

ros2 launch apriltag_detector node.launch.py

To apply the change and run the node without rebooting:

sudo systemctl restart olive-app-loader.service

5. OpenCV Examples (Edge Detector, Optical Flow, Rectify, IMShow) (Host Computer App)

Run this example on your host computer. Compatible with CPU and GPU.

cd examples/05-OpenCV
python3 edge_detector.py
python3 optical_flow.py

OpenCV Examples Image

6-1. Monocular Depth Estimation (Host Computer App)

Run this example on your host computer. Compatible with CPU and GPU.

cd examples/06-1-DepthEstimation
python3 depth_estimation.py

Depth Estimation Image

6-2. Monocular Depth Estimation (TPU Embedded App – FastDepth)

Runs directly on the Olive camera using the Coral EdgeTPU (no host needed).
Preview of the on-device output:

Monocular FastDepth

Run on the camera:

ssh olive@<camera-ip>
cd ~/examples/06-2-DepthEstimation
python3 depth_tpu_fastdepth.py

What it publishes (by default):

  • /olive/camera/id001/depth_color/compressed (JPEG stream)
  • (Enable raw publishing with --publish both; raw goes to /olive/camera/id001/depth_color.)

All arguments (optional) and their defaults, from depth_tpu_fastdepth.py:

  • --model <path> (default: fastdepth_256x320_edgetpu.tflite, located next to the script)
  • --in <topic> (default: /olive/camera/id001/image/compressed)
  • --out <base_topic> (default: /olive/camera/id001/depth_color; /compressed is appended automatically)
  • --status <topic> (default: /olive/camera/id001/tpu_status)
  • --every-n <int> (default: 1, run inference on every frame)
  • --publish {compressed,raw,both} (default: compressed)
  • --jpeg-quality <1..100> (default: 60)
  • --rotate {0,90,180,270} (default: 0)
  • --resample {nearest,bilinear} (default: nearest)

Python deps (documented for completeness; typically preinstalled on the camera):
See examples/06-2-DepthEstimation/requirements.txt (minimal: numpy, Pillow, tflite-runtime, pycoral).
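The depth_color topic name suggests the raw FastDepth output is colorized before JPEG encoding. A minimal NumPy sketch of that step follows; the actual colormap used by depth_tpu_fastdepth.py is not documented here, so the near=red / far=blue ramp below is an assumption:

```python
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    """Normalize a float depth map to [0, 1] and map it to an 8-bit RGB image
    (near pixels red, far pixels blue), ready for JPEG encoding and publishing."""
    lo, hi = float(depth.min()), float(depth.max())
    norm = (depth - lo) / (hi - lo) if hi > lo else np.zeros_like(depth)
    gray = np.round(norm * 255).astype(np.uint8)
    # Red fades out and blue fades in as depth increases; green stays zero.
    return np.stack([255 - gray, np.zeros_like(gray), gray], axis=-1)

# FastDepth's 256x320 output resolution; values in meters (illustrative).
depth = np.linspace(0.5, 4.0, 256 * 320, dtype=np.float32).reshape(256, 320)
rgb = colorize_depth(depth)
```

On the camera, the resulting RGB frame would then be JPEG-compressed at --jpeg-quality and published on the /compressed topic.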


πŸ” Visualisation

To view the output depth stream from the camera:

  1. Install the image viewers (host):

    sudo apt update
    sudo apt install ros-humble-rqt-image-view ros-humble-image-view
  2. Start image transport in one terminal (host):

    ros2 run image_transport republish compressed raw

    (This converts the camera's compressed JPEG stream so viewers can subscribe.)

  3. Open an image viewer in another terminal (host).
    Use one of the following commands:

    • Recommended (rqt_image_view):

      QT_QPA_PLATFORM=xcb ros2 run rqt_image_view rqt_image_view --ros-args -p image_transport:=compressed

      (Using QT_QPA_PLATFORM=xcb ensures compatibility on Wayland, SSH, or remote setups.)

    • Alternative (image_view):

      ros2 run image_view image_view --ros-args -r image:=/olive/camera/id001/depth_color -p image_transport:=compressed

Then select the topic /olive/camera/id001/depth_color/compressed in the viewer.

You can now see the real-time monocular depth image stream published by the camera.


Additional checks

  • Check publish rate:
    ros2 topic hz /olive/camera/id001/depth_color/compressed
  • Check TPU timing/status messages:
    ros2 topic echo /olive/camera/id001/tpu_status

7. Semantic Segmentation

This example runs a semantic segmentation model to generate image masks of what the camera can see.

For our testing, we ran this on a Jetson Orin NX computer, and took advantage of its GPU for fast CUDA-optimized execution.

segmentation.gif

For more information, please check out the README.
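The mask-overlay effect shown in the GIF can be sketched with a simple alpha blend. This is a generic illustration; the example's actual model and rendering code live in its README, and the color and alpha values here are assumptions:

```python
import numpy as np

def overlay_mask(image: np.ndarray, mask: np.ndarray,
                 color=(255, 0, 0), alpha: float = 0.5) -> np.ndarray:
    """Alpha-blend a per-class segmentation mask onto an RGB frame.

    `mask` is a boolean array marking the pixels of one segmented class.
    """
    out = image.astype(np.float32).copy()
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=np.float32)
    return out.astype(np.uint8)

frame = np.full((4, 4, 3), 100, dtype=np.uint8)  # stand-in for a camera frame
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True                              # top-left quadrant is one class
blended = overlay_mask(frame, mask)
```

One such blend per detected class, each with its own color, produces the multi-colored masks seen in the GIF.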

8. Facial Landmark Detection

An example showing the MediaPipe facial landmark system running on an Olive camera.

python3 src/app_facial_recognition.py

Facial Landmark Demonstration

9. Fruit Recognition

This project was developed in coordination with EkumenLabs.

Navigate to this README to learn more.

fruit_detection_hardware.png
