diff --git a/projects/BEVFusion/README.md b/projects/BEVFusion/README.md
index bbe2697a..71af3696 100644
--- a/projects/BEVFusion/README.md
+++ b/projects/BEVFusion/README.md
@@ -34,7 +34,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Run docker

 ```sh
diff --git a/projects/BLIP-2/README.md b/projects/BLIP-2/README.md
index 65221113..333256df 100644
--- a/projects/BLIP-2/README.md
+++ b/projects/BLIP-2/README.md
@@ -15,7 +15,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Run docker

 ```
diff --git a/projects/CalibrationStatusClassification/README.md b/projects/CalibrationStatusClassification/README.md
index 3790ea2b..6d8a8694 100644
--- a/projects/CalibrationStatusClassification/README.md
+++ b/projects/CalibrationStatusClassification/README.md
@@ -21,7 +21,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Docker build for CalibrationStatusClassification

 ```sh
diff --git a/projects/CenterPoint/README.md b/projects/CenterPoint/README.md
index c1409e33..6b265e89 100644
--- a/projects/CenterPoint/README.md
+++ b/projects/CenterPoint/README.md
@@ -2,10 +2,10 @@
 ## Summary

 - [Support priority](https://github.com/tier4/AWML/blob/main/docs/design/autoware_ml_design.md#support-priority): Tier S
-- ROS package: [auotoware_lidar_centerpoint] (https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_lidar_centerpoint)
+- ROS package: [autoware_lidar_centerpoint](https://github.com/autowarefoundation/autoware.universe/tree/main/perception/autoware_lidar_centerpoint)
 - Supported dataset
   - [x] T4dataset
-  - [] NuScenes
+  - [ ] NuScenes
 - Supported model
   - [x] LiDAR-only model
 - Other supported feature
@@ -19,13 +19,23 @@
   - v1 (121m range, grid_size = 760)
     - [CenterPoint base/1.X](./docs/CenterPoint/v1/base.md)
     - [CenterPoint x2/1.X](./docs/CenterPoint/v1/x2.md)
-  - [CenterPoint-ConvNeXtPC base/0.x](./docs/CenterPoint-ConvNeXtPC/v0/base.md)
+  - v2 (121m range, grid_size = 760)
+    - [CenterPoint base/2.X](./docs/CenterPoint/v2/base.md)
+    - [CenterPoint x2/2.X](./docs/CenterPoint/v2/x2.md)
+- CenterPoint-ConvNeXtPC
+  - [CenterPoint-ConvNeXtPC base/0.x](./docs/CenterPoint-ConvNeXtPC/v0/base.md)
+- CenterPoint-ShortRange
+  - v0
+    - [CenterPoint-ShortRange base/0.X](./docs/CenterPoint-ShortRange/v0/base.md)
+  - v2
+    - [CenterPoint-ShortRange base/2.X](./docs/CenterPoint-ShortRange/v2/base.md)
+    - [CenterPoint-ShortRange j6gen2/2.X](./docs/CenterPoint-ShortRange/v2/j6gen2.md)

 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
-- Run docker
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
+- Run docker.
 ```sh
 docker run -it --rm --gpus all --shm-size=64g --name awml -p 6006:6006 -v $PWD/:/workspace -v $PWD/data:/workspace/data autoware-ml
@@ -34,7 +44,7 @@ docker run -it --rm --gpus all --shm-size=64g --name awml -p 6006:6006 -v $PWD/:
 ### 2. Train
 #### 2.1 Environment set up

-Set `CUBLAS_WORKSPACE_CONFIG` for the deterministic behavior, plese check this [nvidia doc](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility) for more info
+- Set `CUBLAS_WORKSPACE_CONFIG` for deterministic behavior; please check this [nvidia doc](https://docs.nvidia.com/cuda/cublas/index.html#results-reproducibility) for more info.

 ```sh
 export CUBLAS_WORKSPACE_CONFIG=:4096:8
@@ -42,16 +52,16 @@ export CUBLAS_WORKSPACE_CONFIG=:4096:8

 #### 2.2. Train CenterPoint model with T4dataset-base

-- [choice] Train with a single GPU
-  - Rename config file to use for single GPU and batch size
-  - Change `train_batch_size` and `train_gpu_size` accordingly
+- [choice] Train with a single GPU.
+  - Rename the config file to match the single-GPU setup and batch size.
+  - Change `train_batch_size` and `train_gpu_size` accordingly.

 ```sh
 # T4dataset (121m)
 python tools/detection3d/train.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py
 ```

-- [choice] Train with multi GPU
+- [choice] Train with multiple GPUs.

 ```sh
 # Command
@@ -63,7 +73,7 @@ bash tools/detection3d/dist_script.sh projects/CenterPoint/configs/t4dataset/Cen

 ### 3. Evaluation

-- Run evaluation on a test set, please select experiment config accordingly
+- Run evaluation on a test set; please select the experiment config accordingly.

 - [choice] Evaluate with a single GPU

@@ -73,8 +83,8 @@ DIR="work_dirs/centerpoint/t4dataset/second_secfpn_2xb8_121m_base/" && \
 python tools/detection3d/test.py projects/CenterPoint/configs/t4dataset/second_secfpn_2xb8_121m_base.py $DIR/epoch_50.pth
 ```

-- [choice] Evaluate with multiple GPUs
-  - Note that if you choose to evaluate with multiple GPUs, you might get slightly different results as compared to single GPU due to differences across GPUs
+- [choice] Evaluate with multiple GPUs.
+  - Note that if you choose to evaluate with multiple GPUs, you might get slightly different results compared to a single GPU due to differences across GPUs.

 ```sh
 # Command
@@ -88,7 +98,7 @@ bash tools/detection3d/dist_script.sh projects/CenterPoint/configs/t4dataset/Cen

 ### 4. Visualization

-- Run inference and visualize bounding boxes from a CenterPoint model
+- Run inference and visualize bounding boxes from a CenterPoint model.

 ```sh
 # Inference for t4dataset
@@ -100,7 +110,7 @@ where `frame-range` represents the range of frames to visualize.

 ### 5. Deploy

-- Make an onnx file for a CenterPoint model
+- Make an ONNX file for a CenterPoint model.

 ```sh
 # Deploy for t4dataset
@@ -111,14 +121,15 @@ python projects/CenterPoint/scripts/deploy.py projects/CenterPoint/configs/t4dat
 where `rot_y_axis_reference` can be removed if we would like to use the original counterclockwise x-axis rotation system.
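+
+To make the two rotation conventions concrete before the troubleshooting notes below, here is a minimal, illustrative sketch of converting one box from the latest mmdetection3d convention ([x, y, z, l, w, h], yaw counterclockwise from the x-axis) to the deployed ONNX convention ([x, y, z, w, l, h], yaw clockwise from the y-axis). The helper name and the exact sign/offset of the yaw mapping are assumptions for illustration only; the authoritative conversion lives in the deployment code.
+
+```python
+import numpy as np
+
+
+def mmdet3d_box_to_onnx_box(box):
+    """Sketch: [x, y, z, l, w, h, yaw] (yaw CCW from +x) ->
+    [x, y, z, w, l, h, yaw] (yaw CW from +y, assumed measured toward +x)."""
+    x, y, z, l, w, h, yaw = box
+    # A heading at angle `yaw` counterclockwise from +x sits at
+    # (pi / 2 - yaw) when measured clockwise from +y.
+    yaw_y_cw = np.pi / 2 - yaw
+    # Normalize to [-pi, pi) for stable downstream comparisons.
+    yaw_y_cw = (yaw_y_cw + np.pi) % (2 * np.pi) - np.pi
+    # Swap length and width to match the [x, y, z, w, l, h] ordering.
+    return [x, y, z, w, l, h, yaw_y_cw]
+
+
+# Example: a 4.5 x 1.8 x 1.6 m box heading along +x (yaw = 0)
+# maps to yaw = pi / 2 in the clockwise y-axis convention.
+print(mmdet3d_box_to_onnx_box([10.0, 2.0, -1.0, 4.5, 1.8, 1.6, 0.0]))
+```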
 ## Troubleshooting

+### Differences from the original CenterPoint in mmdetection3d v1

-- The difference from original CenterPoint from mmdetection3d v1
-  - To maintain the backward compatibility with the previous ML library, we modified the original CenterPoint from mmdetection3d v1 such as:
-    - Exclude voxel center from z-dimension as part of pillar features
-    - Assume that the rotation system in the deployed ONNX file is in clockwise y-axis, and a bounding box is [x, y, z, w, l, h] for the deployed ONNX file
-    - Do not use CBGS dataset to align the experiment configuration with the older library
-- Latest mmdetection3D assumes the lidar coordinate system is in the right-handed x-axis reference, also the dimensionality of a bounding box is [x, y, z, l, w, h], please check [this](https://mmdetection3d.readthedocs.io/en/latest/user_guides/coord_sys_tutorial.html) for more details
+- To maintain backward compatibility with the previous ML library, we modified the original CenterPoint from mmdetection3d v1 as follows:
+  - Exclude the voxel center from the z-dimension as part of the pillar features.
+  - Assume that the deployed ONNX file uses a clockwise y-axis rotation system and that its bounding boxes are [x, y, z, w, l, h].
+  - Do not use the CBGS dataset, to align the experiment configuration with the older library.
+- The latest mmdetection3d assumes the lidar coordinate system uses a right-handed x-axis reference and that the dimensionality of a bounding box is [x, y, z, l, w, h]; please check [this tutorial](https://mmdetection3d.readthedocs.io/en/latest/user_guides/coord_sys_tutorial.html) for more details.

 ## Reference

+- "Center-based 3D Object Detection and Tracking", Tianwei Yin, Xingyi Zhou, Philipp Krähenbühl, CVPR 2021.
 - [CenterPoint of mmdetection3d](https://github.com/open-mmlab/mmdetection3d/tree/main/configs/centerpoint)
diff --git a/projects/FRNet/README.md b/projects/FRNet/README.md
index 96fd3195..819c8b94 100644
--- a/projects/FRNet/README.md
+++ b/projects/FRNet/README.md
@@ -23,7 +23,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Docker build for FRNet

 ```sh
diff --git a/projects/GLIP/README.md b/projects/GLIP/README.md
index be067170..7834afdd 100644
--- a/projects/GLIP/README.md
+++ b/projects/GLIP/README.md
@@ -15,7 +15,7 @@
 ## Get started
 ### 1. Setup
-- 1.1. Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- 1.1. Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - 1.2. Download pretrain weight from [GLIP of mmdetection](https://github.com/open-mmlab/mmdetection/tree/main/configs/glip/README.md).
   - In demonstration, you should put `work_dirs/pretrain/glip/`.
 - 1.3. Run docker
diff --git a/projects/StreamPETR/README.md b/projects/StreamPETR/README.md
index 7262f741..eeffffb5 100644
--- a/projects/StreamPETR/README.md
+++ b/projects/StreamPETR/README.md
@@ -20,7 +20,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Run docker

 ```sh
diff --git a/projects/TransFusion/README.md b/projects/TransFusion/README.md
index adb8fa47..16eade0c 100644
--- a/projects/TransFusion/README.md
+++ b/projects/TransFusion/README.md
@@ -29,7 +29,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Docker build for TransFusion

 ```sh
diff --git a/projects/YOLOX_opt/README.md b/projects/YOLOX_opt/README.md
index e1d806a7..c24c3b06 100644
--- a/projects/YOLOX_opt/README.md
+++ b/projects/YOLOX_opt/README.md
@@ -24,7 +24,7 @@
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Setup libraries necessary for YOLOX deployment with `pip3 install -r ./projects/YOLOX_opt/requirements.txt`

 ### 2. Train
diff --git a/tools/analysis_3d/README.md b/tools/analysis_3d/README.md
index 199c55d6..27388440 100644
--- a/tools/analysis_3d/README.md
+++ b/tools/analysis_3d/README.md
@@ -19,7 +19,7 @@ They only need to follow `AnalysisCallbackInterface` to add the analyses they ar
 ## Get started
 ### 1. Setup
-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - Run docker

 ```sh
diff --git a/tools/auto_labeling_2d/README.md b/tools/auto_labeling_2d/README.md
index 833d9913..d3c199d6 100644
--- a/tools/auto_labeling_2d/README.md
+++ b/tools/auto_labeling_2d/README.md
@@ -4,7 +4,7 @@
 ## 1. Setup environment

-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.

 ## 2. command
 ### (TBD) 2.1 T4dataset to info file
diff --git a/tools/auto_labeling_3d/README.md b/tools/auto_labeling_3d/README.md
index 91fd28bf..80af1a2c 100644
--- a/tools/auto_labeling_3d/README.md
+++ b/tools/auto_labeling_3d/README.md
@@ -6,7 +6,7 @@ The pipeline of auto labeling for 3D detection.
 ## 1. Setup environment

-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.
 - In addition, please follow the below setting up procedure.

 ### Set up environment for auto_labeling_3d
diff --git a/tools/calibration_classification/README.md b/tools/calibration_classification/README.md
index 33fffda8..f18b97d6 100644
--- a/tools/calibration_classification/README.md
+++ b/tools/calibration_classification/README.md
@@ -12,7 +12,7 @@ It contains training, evaluation, and visualization for Calibration classificati
 ## 1. Setup environment

-Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.

 ## 2. Prepare dataset
diff --git a/tools/detection2d/README.md b/tools/detection2d/README.md
index 1cce503a..2afb3a89 100644
--- a/tools/detection2d/README.md
+++ b/tools/detection2d/README.md
@@ -11,7 +11,7 @@ Training and evaluation tools for 2D Detection.
 ## 1. Set environment

-- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md)to set up the environment.
+- Please follow the [installation tutorial](/docs/tutorial/tutorial_detection_3d.md) to set up the environment.

 ## 2. Prepare dataset
 ### 2.1. COCO dataset