A straightforward ROS 2 package written in Python, featuring multiple nodes to facilitate working with the Olive Camera, enhanced with TPU acceleration.
The camera uses the default internal calibration file. If you want to recalibrate your camera, please follow the steps outlined in the documentation.
https://olive-robotics.com/docs2/olixvision-camera/#camera-calibration
| Coral | OpenCV | ROS 2 | Python 3 |
|---|---|---|---|
- 📸 olvx_playground_camera
- Supported Embedded Libraries for the Olive AI Camera
- Table of Contents
- Installation
- Apps
- 0. Hello World App (TPU Embedded App)
- 1. Object Recognition (TPU Embedded App)
- 2. Skeleton Detection (TPU Embedded App)
- 3. Gesture Recognition (TPU Embedded App)
- 4. April Tag Detection (CPU Embedded App)
- 5. OpenCV Examples (Edge Detector, Optical Flow, Rectify, IMShow) (Host Computer App)
- 6-1. Monocular Depth Estimation (Host Computer App)
- 6-2. Monocular Depth Estimation (TPU Embedded App – FastDepth)
- 7. Semantic Segmentation
- 8. Facial Landmark Detection
- 9. Fruit Recognition
```bash
git clone --recurse-submodules git@github.com:olive-robotics/olvx_playground_camera.git
```
The --recurse-submodules flag ensures that all examples and their submodules are fetched.
Each project has its own set of dependencies. To run a project, first navigate into its folder, and run:
```bash
pip install -r requirements.txt
```
This example is a simple parrot detector with which you can test the hardware and make sure the Coral TPU is enabled.
Sample output:
```
Olive TPU Hello World v0.1
step1
step2
step3
step4
Ara macao (Scarlet Macaw): 0.75781
```
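For reference, the core of an EdgeTPU classifier like this one follows the standard pycoral pattern. The sketch below is illustrative, not the shipped script; the model, image, and label file names are placeholders:

```python
# Minimal EdgeTPU classification sketch (file names are placeholders).
from PIL import Image
from pycoral.adapters import classify, common
from pycoral.utils.dataset import read_label_file
from pycoral.utils.edgetpu import make_interpreter

interpreter = make_interpreter('model_edgetpu.tflite')  # placeholder model
interpreter.allocate_tensors()

image = Image.open('parrot.jpg').resize(common.input_size(interpreter))
common.set_input(interpreter, image)
interpreter.invoke()

labels = read_label_file('labels.txt')  # placeholder label map
for c in classify.get_classes(interpreter, top_k=1):
    print(f'{labels.get(c.id, c.id)}: {c.score:.5f}')
```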
This example demonstrates object detection on a ROS 2 image topic, drawing a bounding box around each detected object.
```bash
cd examples/01-ObjectDetection/src
python3 app_node_object_detection.py
```
person, bicycle, car, motorcycle, airplane, bus, train, truck, boat, traffic light, fire hydrant, stop sign, parking meter, bench, bird, cat, dog, horse, sheep, cow, elephant, bear, zebra, giraffe, backpack, umbrella, handbag, tie, suitcase, frisbee, skis, snowboard, sports ball, kite, baseball bat, baseball glove, skateboard, surfboard, tennis racket, bottle, wine glass, cup, fork, knife, spoon, bowl, banana, apple, sandwich, orange, broccoli, carrot, hot dog, pizza, donut, cake, chair, couch, potted plant, bed, dining table, toilet, tv, laptop, mouse, remote, keyboard, cell phone, microwave, oven, toaster, sink, refrigerator, book, clock, vase, scissors, teddy bear, hair drier, toothbrush.
📖 More Information: ObjectDetection.md
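For orientation, the general shape of such a node is sketched below. This is not the shipped app_node_object_detection.py; the output topic name is an assumption, and the detector call is elided:

```python
# Sketch: subscribe to the camera stream, run the TPU detector,
# publish an annotated image. The output topic name is an assumption.
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage

class DetectionNode(Node):
    def __init__(self):
        super().__init__('object_detection')
        self.create_subscription(
            CompressedImage, '/olive/camera/id001/image/compressed',
            self.on_image, 1)
        self.pub = self.create_publisher(
            CompressedImage, '/object_detection/compressed', 1)

    def on_image(self, msg):
        frame = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        # ... run the EdgeTPU detector here and draw one box per detection ...
        out = CompressedImage(header=msg.header, format='jpeg')
        out.data = cv2.imencode('.jpg', frame)[1].tobytes()
        self.pub.publish(out)

rclpy.init()
rclpy.spin(DetectionNode())
```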
This example uses the PoseNet model to detect human poses from a ROS 2 image topic, pinpointing the location of body parts such as elbows, shoulders, and feet.
```bash
cd examples/02-SkeletonDetection/src
python3 app_node_skeleton_posenet.py
```
nose, leftEye, rightEye, leftEar, rightEar, leftShoulder, rightShoulder, leftElbow, rightElbow, leftWrist, rightWrist, leftHip, rightHip, leftKnee, rightKnee, leftAnkle, rightAnkle.
📖 More Information: SkeletonDetection.md
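As a small illustration, overlaying keypoints on a frame can look like the sketch below, assuming keypoints arrive as a name → (x, y, score) mapping (the actual app's internal format may differ):

```python
# Sketch: draw PoseNet keypoints; the input format is an assumption.
import cv2

MIN_SCORE = 0.4  # assumed confidence threshold

def draw_pose(frame, keypoints):
    """keypoints: dict mapping part name -> (x, y, score) in pixels."""
    for name, (x, y, score) in keypoints.items():
        if score >= MIN_SCORE:
            cv2.circle(frame, (int(x), int(y)), 4, (0, 255, 0), -1)
            cv2.putText(frame, name, (int(x) + 5, int(y)),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (0, 255, 0), 1)
    return frame
```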
An example showcasing the use of an MLP neural network model to train gesture classes.
The detection results are published on the topic /gesturerecognition. Use `ros2 topic list` or `ros2 topic echo /gesturerecognition` to check whether messages are being published and verify the device's operational status.
Both hands down, both hands up, left down / right up, right down / left up, left down / right side, right down / left side, hands on hip.
📖 More Information: GestureRecognition.md
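For orientation, training an MLP on pose keypoints can be sketched as below, assuming scikit-learn and placeholder dataset files; the example's actual training pipeline may differ:

```python
# Sketch: train an MLP gesture classifier on flattened keypoints
# (17 PoseNet parts x (x, y) = 34 features). Dataset files are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X = np.load('keypoints.npy')  # placeholder: shape (n_samples, 34)
y = np.load('labels.npy')     # placeholder: one gesture class per sample

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
clf.fit(X_train, y_train)
print('held-out accuracy:', clf.score(X_test, y_test))
```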
Example forked from: https://github.com/ros-misc-utilities/apriltag_detector
A4 Tag Dataset: https://github.com/rgov/apriltag-pdfs
Download OpenCV 4 compiled for the Olive Camera.

Place the folder in your home directory using the following layout:

```
/home/olive/opencv_install/opencv-4.x/
```
Update your .bashrc and add these lines to it:

```bash
source /opt/olive/script/env.sh
export PATH="/home/olive/.local/bin:$PATH"
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/olive/opencv_install/opencv-4.x/build/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/olive/lib
export OpenCV_DIR=/home/olive/opencv_install/opencv-4.x/build
export PYTHONPATH=~/opencv_install/opencv-4.x/build/lib/python3:$PYTHONPATH
```
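To quickly verify that Python picks up the custom build (open a new shell first so the updated .bashrc takes effect), a minimal check:

```python
# Confirm the custom OpenCV build is on the path.
import cv2

print(cv2.__version__)                            # expect a 4.x version string
print(cv2.getBuildInformation().splitlines()[0])  # build banner
```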
Then install the AprilTag 3 library from examples/04-AprilTag/lib/apriltag:

```bash
rm -r build
cmake -B build -DCMAKE_BUILD_TYPE=Release
sudo cmake --build build --target install
```
Then build the ROS 2 project from examples/04-AprilTag/lib/workspace (you can also skip building if you don't want to change the code):

```bash
colcon build
source install/setup.bash
```
Then run it with:

```bash
ros2 launch apriltag_detector node.launch.py
```
In a recent software update, the AprilTag detector comes preinstalled on the system, and you can auto-run it by uncommenting a line in olive-app-loader.sh:
```bash
cd /usr/bin
nano olive-app-loader.sh
```

Uncomment the last line, changing

```
# ros2 launch apriltag_detector node.launch.py
```

to

```
ros2 launch apriltag_detector node.launch.py
```
To apply the change and run the node without rebooting:

```bash
sudo systemctl restart olive-app-loader.service
```
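Once the detector is running, its detections can be consumed from Python. A minimal sketch, assuming the detector publishes apriltag_msgs/AprilTagDetectionArray and that the topic is /apriltag_detector/tags; check `ros2 topic list` for the actual name on your system:

```python
# Sketch: log detected tag IDs. The topic name is an assumption.
import rclpy
from rclpy.node import Node
from apriltag_msgs.msg import AprilTagDetectionArray

class TagListener(Node):
    def __init__(self):
        super().__init__('tag_listener')
        self.create_subscription(
            AprilTagDetectionArray, '/apriltag_detector/tags',
            self.on_tags, 10)

    def on_tags(self, msg):
        for det in msg.detections:
            self.get_logger().info(f'tag family={det.family} id={det.id}')

rclpy.init()
rclpy.spin(TagListener())
```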
Run this example on your host computer. Compatible with CPU and GPU.
```bash
cd examples/05-OpenCV
python3 edge_detector.py
python3 optical_flow.py
```
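As an illustration of the pattern these scripts follow, the sketch below computes dense optical flow on the camera stream; it is not the shipped optical_flow.py, and the Farneback parameters are just reasonable defaults:

```python
# Sketch: dense optical flow (Farneback) on the compressed camera stream.
import cv2
import numpy as np
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import CompressedImage

class FlowViewer(Node):
    def __init__(self):
        super().__init__('flow_viewer')
        self.prev = None
        self.create_subscription(
            CompressedImage, '/olive/camera/id001/image/compressed',
            self.on_image, 1)

    def on_image(self, msg):
        gray = cv2.imdecode(np.frombuffer(msg.data, np.uint8),
                            cv2.IMREAD_GRAYSCALE)
        if self.prev is not None:
            flow = cv2.calcOpticalFlowFarneback(
                self.prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
            mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            cv2.imshow('flow magnitude',
                       np.uint8(np.clip(mag * 16, 0, 255)))
            cv2.waitKey(1)
        self.prev = gray

rclpy.init()
rclpy.spin(FlowViewer())
```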
Run this example on your host computer. Compatible with CPU and GPU.
```bash
cd examples/06-1-DepthEstimation
python3 depth_estimation.py
```
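For orientation, host-side monocular depth typically follows the pattern below. This sketch uses MiDaS small via torch.hub purely as an illustration; the shipped depth_estimation.py may use a different model, and the input file name is a placeholder:

```python
# Illustrative only: monocular depth with MiDaS via torch.hub.
import cv2
import numpy as np
import torch

midas = torch.hub.load('intel-isl/MiDaS', 'MiDaS_small').eval()
transform = torch.hub.load('intel-isl/MiDaS', 'transforms').small_transform

img = cv2.cvtColor(cv2.imread('frame.jpg'), cv2.COLOR_BGR2RGB)  # placeholder
with torch.no_grad():
    pred = midas(transform(img))
    depth = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2], mode='bicubic').squeeze()

norm = cv2.normalize(depth.numpy(), None, 0, 255, cv2.NORM_MINMAX)
cv2.imwrite('depth.png', cv2.applyColorMap(norm.astype(np.uint8),
                                           cv2.COLORMAP_MAGMA))
```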
Runs directly on the Olive camera using the Coral EdgeTPU (no host needed).
Run on the camera:
```bash
ssh olive@<camera-ip>
cd ~/examples/06-2-DepthEstimation
python3 depth_tpu_fastdepth.py
```

What it publishes (by default):

- `/olive/camera/id001/depth_color/compressed` (JPEG stream)
- Enable raw publishing with `--publish both`; raw then goes to `/olive/camera/id001/depth_color`.
All arguments are optional; defaults below are from depth_tpu_fastdepth.py:

- `--model <path>` (default: `fastdepth_256x320_edgetpu.tflite`, a file next to the script)
- `--in <topic>` (default: `/olive/camera/id001/image/compressed`)
- `--out <base_topic>` (default: `/olive/camera/id001/depth_color`; `/compressed` is appended automatically)
- `--status <topic>` (default: `/olive/camera/id001/tpu_status`)
- `--every-n <int>` (default: `1`, i.e. run inference on every frame)
- `--publish {compressed,raw,both}` (default: `compressed`)
- `--jpeg-quality <1..100>` (default: `60`)
- `--rotate {0,90,180,270}` (default: `0`)
- `--resample {nearest,bilinear}` (default: `nearest`)
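For example, to publish both raw and compressed depth and run inference on every second frame: `python3 depth_tpu_fastdepth.py --publish both --every-n 2`.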
Python deps (documented for completeness; typically preinstalled on the camera):
See examples/06-2-DepthEstimation/requirements.txt (minimal: numpy, Pillow, tflite-runtime, pycoral).
To view the output depth stream from the camera:

1. Install the image viewers (host):

    ```bash
    sudo apt update
    sudo apt install ros-humble-rqt-image-view ros-humble-image-view
    ```

2. Start image transport in one terminal (host):

    ```bash
    ros2 run image_transport republish compressed raw
    ```

    (This converts the camera's compressed JPEG stream so viewers can subscribe.)

3. Open an image viewer in another terminal (host). Use one of the following commands:

    - Recommended (rqt_image_view):

        ```bash
        QT_QPA_PLATFORM=xcb ros2 run rqt_image_view rqt_image_view --ros-args -p image_transport:=compressed
        ```

        (Using `QT_QPA_PLATFORM=xcb` ensures compatibility on Wayland, SSH, or remote setups.)

    - Alternative (image_view):

        ```bash
        ros2 run image_view image_view --ros-args -r image:=/olive/camera/id001/depth_color -p image_transport:=compressed
        ```

4. Select the topic /olive/camera/id001/depth_color/compressed in the viewer.

You can now see the real-time monocular depth image stream published by the camera.
Additional checks:

- Check the publish rate:

    ```bash
    ros2 topic hz /olive/camera/id001/depth_color/compressed
    ```

- Check TPU timing/status messages:

    ```bash
    ros2 topic echo /olive/camera/id001/tpu_status
    ```
This example runs a semantic segmentation model to generate image masks of what the camera can see.
For our testing, we ran this on a Jetson Orin NX computer and took advantage of its GPU for fast CUDA-optimized execution.
For more information, please check out the README.
An example showing the MediaPipe facial landmark system running with an Olive camera.

```bash
python3 src/app_facial_recognition.py
```
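At its core, the MediaPipe call looks roughly like the standalone sketch below (single image; file names are placeholders), while the actual node wires this into the ROS 2 image stream:

```python
# Sketch: MediaPipe face mesh on a single frame (file names are placeholders).
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True)
frame = cv2.imread('face.jpg')  # placeholder input
results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

if results.multi_face_landmarks:
    h, w = frame.shape[:2]
    for lm in results.multi_face_landmarks[0].landmark:
        cv2.circle(frame, (int(lm.x * w), int(lm.y * h)), 1, (0, 255, 0), -1)
cv2.imwrite('face_landmarks.jpg', frame)
```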
This project was developed in coordination with EkumenLabs.
Navigate to this README to learn more.













