Detecting moving objects from a mobile platform using the residual between RAFT-predicted optical flow and the geometric optical flow induced by ego-motion.
Demo video: hamilton_short.mp4
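The core idea can be sketched in a few lines of NumPy: back-project each pixel with its depth, transform it by the ego-motion, reproject it to get the geometric flow, and flag pixels where the RAFT flow disagrees. This is a minimal illustration, not the repo's implementation; the function names, inputs (dense depth, intrinsics `K`, relative pose `R, t`), and the threshold value are all assumptions.

```python
import numpy as np

def induced_flow(depth, K, R, t):
    """Geometric optical flow induced by camera ego-motion (sketch).

    depth: (H, W) depth map for frame 1
    K:     3x3 camera intrinsics
    R, t:  rotation/translation taking frame-1 camera coords to frame-2
    All inputs are hypothetical placeholders for illustration.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K) @ pix            # back-project pixels to rays
    pts = rays * depth.reshape(1, -1)        # 3D points in frame-1 coords
    pts2 = R @ pts + t[:, None]              # move points into frame-2 coords
    proj = K @ pts2
    proj = proj[:2] / proj[2:3]              # perspective divide -> pixels
    return (proj - pix[:2]).T.reshape(H, W, 2)

def moving_mask(raft_flow, geo_flow, thresh=3.0):
    """Pixels whose RAFT flow deviates from the ego-motion-induced flow
    by more than `thresh` pixels are flagged as moving."""
    residual = np.linalg.norm(raft_flow - geo_flow, axis=-1)
    return residual > thresh
```

For a static scene and perfect inputs the residual is zero everywhere, so anything above the threshold is (up to depth/pose noise) an independently moving object.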
sudo apt-get install ffmpeg x264 libx264-dev
git clone https://github.com/mbpeterson70/robotdatapy && cd robotdatapy && pip install . && cd ..
pip install -e .
Tested on a system with an i9-14900HX, a GeForce RTX 4090 Laptop GPU (16GB), and 32GB RAM. May not work on systems with less memory, even if batch_size is decreased.
To reproduce the evaluation from our blog post, download the following rosbags: hamilton data (ROS1) and ground truth (ROS2).
export BAG_PATH=/path/to/hamilton_data.bag
export RAFT=/path/to/dynamic-object-detection/RAFT/
python3 dynamic_object_detection/offline.py -p config/hamilton.yaml
Edit config/hamilton.yaml to experiment with different parameters.
Note: All operations assume undistorted images. Our data is already undistorted.
The code for the evaluation metrics is in eval/eval.ipynb. Change the following lines in the second cell:
os.environ['BAG_PATH'] = os.path.expanduser('/path/to/hamilton_data.bag')
gt_bag = '~/path/to/gt_data/'
Then set the runs variable in the last cell to the list of runs you want to evaluate (the names of the pkl/yaml/mp4 outputs, without extensions) and run the entire notebook. Outputs are printed at the bottom.
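The notebook's actual metrics live in eval/eval.ipynb; as a purely hypothetical illustration of the kind of per-frame comparison such an evaluation performs, precision and recall between a predicted moving-object mask and a ground-truth mask can be computed like this (function name and inputs are assumptions, not the repo's API):

```python
import numpy as np

def mask_precision_recall(pred, gt):
    """Precision/recall between two binary masks (hypothetical sketch;
    the repo's real metrics are computed in eval/eval.ipynb)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()   # predicted moving, truly moving
    fp = np.logical_and(pred, ~gt).sum()  # predicted moving, actually static
    fn = np.logical_and(~pred, gt).sum()  # missed moving pixels
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```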