update visualization
weiyithu committed Mar 29, 2023
1 parent ca905d0 commit bd7f4ac
Showing 3 changed files with 25 additions and 6 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -46,19 +46,19 @@ Occupancy Ground Truth Generation Pipeline:
## Getting Started
- [Installation](docs/install.md)
- [Prepare Dataset](docs/data.md)
- [Train and Eval](docs/run.md)
- [Train, Eval and Visualize](docs/run.md)

You can download our pretrained models for the [3D semantic occupancy prediction](https://cloud.tsinghua.edu.cn/f/7b2887a8fe3f472c8566/?dl=1) and [3D scene reconstruction](https://cloud.tsinghua.edu.cn/f/ca595f31c8bd4ec49cf7/?dl=1) tasks. The only difference is whether semantic labels are used to train the model. The models are trained on 8 RTX 3090s for about 2.5 days.

## Try your own data
### Occupancy prediction
You can try our nuScenes [pretrained model](https://cloud.tsinghua.edu.cn/f/7b2887a8fe3f472c8566/?dl=1) on your own data! Here we give a template in-the-wild [data](https://cloud.tsinghua.edu.cn/f/48bd4b3e88f64ed7b76b/?dl=1) and [pickle file](https://cloud.tsinghua.edu.cn/f/5c710efd78854c529705/?dl=1). You should place them in ./data and update the corresponding infos. Specifically, you need to set 'lidar2img', 'intrinsic' and 'data_path' to the extrinsic matrices, intrinsic matrices and paths of your multi-camera images. Note that the order of frames should match their timestamps. 'occ_path' in this pickle file indicates the save path, and you will get raw results (.npy) and point clouds (.ply) for further visualization:
You can try our nuScenes [pretrained model](https://cloud.tsinghua.edu.cn/f/7b2887a8fe3f472c8566/?dl=1) on your own data! Here we give a template in-the-wild [data](https://cloud.tsinghua.edu.cn/f/48bd4b3e88f64ed7b76b/?dl=1) and [pickle file](https://cloud.tsinghua.edu.cn/f/5c710efd78854c529705/?dl=1). You should place them in ./data and update the corresponding infos. Specifically, you need to set 'lidar2img', 'intrinsic' and 'data_path' to the extrinsic matrices, intrinsic matrices and paths of your multi-camera images. Note that the order of frames should match their timestamps. 'occ_path' in this pickle file indicates the save path, and you will get raw results (.npy) and point clouds (.ply) in './visual_dir' for further visualization. You can use MeshLab to directly visualize the .ply files, or run tools/visual.py to visualize the .npy files.
```
./tools/dist_inference.sh ./projects/configs/surroundocc/surroundocc_inference.py ./path/to/ckpts.pth 8
```
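As a rough illustration, editing such a pickle could look like the sketch below. Only the field names ('lidar2img', 'intrinsic', 'data_path', 'occ_path') come from the description above; the file paths, per-frame layout and identity matrices are placeholders, so inspect the template pickle first and adapt the assignments to its actual structure.
```
import pickle

import numpy as np

# Hypothetical sketch, not part of the repository: adapt the template pickle
# to your own recording. The layout assumed here (a list of dicts, one per
# frame, ordered by timestamp) must be checked against the downloaded template.
with open('./data/my_infos.pkl', 'rb') as f:
    infos = pickle.load(f)

for frame in infos:
    frame['data_path'] = ['./data/my_scene/cam_front.jpg']  # paths of your multi-camera images
    frame['lidar2img'] = [np.eye(4)]   # placeholder: extrinsic (lidar-to-image) matrix per camera
    frame['intrinsic'] = [np.eye(3)]   # placeholder: camera intrinsic matrix per camera
    frame['occ_path'] = './visual_dir/my_scene_frame0.npy'  # where raw results will be saved

with open('./data/my_infos.pkl', 'wb') as f:
    pickle.dump(infos, f)
```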

### Ground truth generation
You can also generate dense occupancy labels with your own data! We provide a highly extensible code to achieve [this](https://github.com/weiyithu/SurroundOcc/blob/main/tools/generate_occupancy_with_own_data/process_your_own_data.py). We provide an example [sequence](https://cloud.tsinghua.edu.cn/f/94fea6c8be4448168667/?dl=1) and yoou need to prepare your data like this:
You can also generate dense occupancy labels with your own data! We provide highly extensible code to achieve [this](https://github.com/weiyithu/SurroundOcc/blob/main/tools/generate_occupancy_with_own_data/process_your_own_data.py). We provide an example [sequence](https://cloud.tsinghua.edu.cn/f/94fea6c8be4448168667/?dl=1), and you need to prepare your data like this:

```
your_own_data_folder/
13 changes: 12 additions & 1 deletion docs/run.md
@@ -9,5 +9,16 @@ Eval SurroundOcc with 8 RTX3090 GPUs
```
./tools/dist_test.sh ./projects/configs/surroundocc/surroundocc.py ./path/to/ckpts.pth 8
```
You can substitute surroundocc.py with surroundocc_nosemantic.py for the 3D scene reconstruction task.

You can substitute surroundocc.py with surroundocc_nosemantic.py for the 3D scene reconstruction task.
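For example, the reconstruction-only evaluation would reuse the same script with the swapped config (assuming surroundocc_nosemantic.py sits next to surroundocc.py; the checkpoint path is a placeholder as above):
```
./tools/dist_test.sh ./projects/configs/surroundocc/surroundocc_nosemantic.py ./path/to/ckpts.pth 8
```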
Visualize occupancy predictions:
First, you need to generate the prediction results. Here we use the whole validation set as an example.
```
cp ./data/nuscenes_infos_val.pkl ./data/infos_inference.pkl
./tools/dist_inference.sh ./projects/configs/surroundocc/surroundocc_inference.py ./path/to/ckpts.pth 8
```
You will get prediction results in './visual_dir'. You can directly use MeshLab to visualize the .ply files, or run visual.py to visualize the raw .npy files with Mayavi:
```
cd ./tools
python visual.py $npy_path$
```
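If you prefer a self-contained alternative to tools/visual.py, the sketch below shows one way to render such a file with Mayavi. It is not the repository's script, and it assumes each row of the .npy stores [x, y, z, semantic label], matching the vertices written by generate_output() in the change further down.
```
# Hedged sketch, not tools/visual.py: render an occupancy .npy with Mayavi,
# assuming rows of the form [x, y, z, semantic_label].
import sys

import numpy as np
from mayavi import mlab

points = np.load(sys.argv[1])  # e.g. ./visual_dir/<frame>/pred.npy
x, y, z, label = points[:, 0], points[:, 1], points[:, 2], points[:, 3]

# Draw each occupied voxel as a cube, colored by its semantic label.
mlab.points3d(x, y, z, label, mode='cube', scale_factor=0.4, scale_mode='none')
mlab.show()
```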
12 changes: 10 additions & 2 deletions projects/mmdet3d_plugin/surroundocc/detectors/surroundocc.py
@@ -246,8 +246,16 @@ def generate_output(self, pred_occ, img_metas):
pcd.colors = o3d.utility.Vector3dVector(color[..., :3])
vertices = np.concatenate([vertices, semantics[:, None]], axis=-1)

o3d.io.write_point_cloud(img_metas[i]['occ_path'].replace('npy', 'ply'), pcd)
np.save(img_metas[i]['occ_path'], vertices)
save_dir = os.path.join('visual_dir', img_metas[i]['occ_path'].replace('npy', '').split('/')[-1])
os.makedirs(save_dir, exist_ok=True)


o3d.io.write_point_cloud(os.path.join(save_dir, 'pred.ply'), pcd)
np.save(os.path.join(save_dir, 'pred.npy'), vertices)
for cam_name in img_metas[i]['cams']:
    os.system('cp {} {}/{}.jpg'.format(img_metas[i]['cams'][cam_name]['data_path'], save_dir, cam_name))
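For reference, a quick way to check what the updated generate_output writes per frame (the folder name below is illustrative; one sub-directory is created under ./visual_dir per sample):
```
# Illustrative only, not part of the commit: read back the per-frame outputs.
import numpy as np
import open3d as o3d

save_dir = './visual_dir/example_frame'                # hypothetical frame folder
pcd = o3d.io.read_point_cloud(save_dir + '/pred.ply')  # colored point cloud (viewable in MeshLab)
vertices = np.load(save_dir + '/pred.npy')             # expected shape (N, 4): x, y, z, semantic label
print(np.asarray(pcd.points).shape, vertices.shape)
```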





