You need to install Detectron2 manually to run the pipeline with automatic instance segmentation.
Follow the [detectron2 installation guide](https://detectron2.readthedocs.io/en/latest/tutorials/install.html) to install it.
Tested with detectron2 0.6 + torch 1.12.0 (and various older versions).
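If you are unsure which versions you have installed, a small sketch like the following can check before running the pipeline (this is only a convenience snippet, not part of the pipeline itself):

```python
# Print the installed versions of torch and detectron2, if present.
import importlib
import importlib.util

for pkg in ("torch", "detectron2"):
    spec = importlib.util.find_spec(pkg)
    if spec is None:
        print(f"{pkg}: not installed")
    else:
        module = importlib.import_module(pkg)
        print(f"{pkg}: {getattr(module, '__version__', 'unknown')}")
```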
## Dataset Preparation
See below for expected folder structure for each dataset.
First, make sure the datasets are in the right format.
### Full paper (training and experiments)
See the bash script in `reproduction_scripts/reproduce_paper.sh`.
Evaluation code for REAL275 and REDWOOD75 experiments will be integrated in [cpas_toolbox](https://github.com/roym899/pose_and_shape_evaluation) soon.
<sup>Non-cleaned up version of evaluation code can be found in `icaps_eval` branch.</sup>
### Train Models Only
To train a network for a specific category, you first need to train a per-category VAE and *afterwards* an initialization network.
#### VAE
First we need to convert the ShapeNet meshes to SDFs and optionally filter the dataset. To reproduce the preprocessing of the paper run
```bash
source train_vaes.sh
```
to train the models using the same configuration as used for the paper.
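For intuition about the data the VAEs are trained on: a voxelized SDF is a regular grid of signed distances, negative inside the surface and positive outside. A minimal sketch using an analytic sphere in place of a ShapeNet mesh (illustration only, not the actual preprocessing code):

```python
import numpy as np

# Sample the signed distance of a sphere of radius 0.5 on a 32^3 grid
# spanning [-1, 1]^3; real preprocessing computes this from mesh geometry.
resolution = 32
coords = np.linspace(-1.0, 1.0, resolution)
x, y, z = np.meshgrid(coords, coords, coords, indexing="ij")
sdf = np.sqrt(x**2 + y**2 + z**2) - 0.5  # negative inside, positive outside

print(sdf.shape)  # (32, 32, 32)
```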
#### Initialization Network
To train the initialization network we used in our paper, run
```bash
source train_init_networks.sh
```

Code is structured into 4 sub-packages:
Differentiable rendering of depth images for signed distance fields.
The signed distance field is assumed to be voxelized, and its pose is given by an x, y, z position in the camera frame, a quaternion describing its orientation, and a scale parameter describing its size. This module provides the derivative with respect to the signed distance values and the full pose description (position, orientation, scale).
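This pose parameterization can be sketched as follows. The function and variable names below are illustrative only, not the module's actual API; the sketch maps a camera-frame point into the SDF's local unit-scale frame by inverting the pose:

```python
import numpy as np

def quat_rotate(q: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Rotate point p by unit quaternion q = (w, x, y, z)."""
    w, v = q[0], q[1:]
    # p' = p + 2*w*(v x p) + 2*v x (v x p)
    return p + 2.0 * np.cross(v, np.cross(v, p) + w * p)

def camera_to_sdf_frame(point_cam, position, orientation, scale):
    """Map a camera-frame point into the SDF's local frame.

    position: (x, y, z) in the camera frame; orientation: unit quaternion
    (w, x, y, z); scale: isotropic size of the SDF volume.
    """
    # the conjugate of a unit quaternion inverts its rotation
    q_inv = orientation * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_rotate(q_inv, np.asarray(point_cam) - position) / scale

# identity orientation at the origin: only the scale acts on the point
local = camera_to_sdf_frame([0.2, 0.0, 1.0], np.zeros(3),
                            np.array([1.0, 0.0, 0.0, 0.0]), 2.0)
# local == [0.1, 0.0, 0.5]
```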
#### Generating compile_commands.json
<sup>General workflow for PyTorch extensions (only tested for JIT, probably similar otherwise)</sup>