Improve Rendering Script #17

Open · wants to merge 8 commits into base: master
10 changes: 6 additions & 4 deletions docker/Dockerfile
@@ -1,10 +1,10 @@
# Copyright (c) 2020-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual
# property and proprietary rights in and to this material, related
# documentation and any modifications thereto. Any use, reproduction,
# disclosure or distribution of this material and related documentation
# without an express license agreement from NVIDIA CORPORATION or
# its affiliates is strictly prohibited.

ARG BASE_IMAGE=nvcr.io/nvidia/pytorch:21.08-py3
@@ -49,3 +49,5 @@ RUN pip install meshzoo ipdb imageio gputil h5py point-cloud-utils imageio image

# HDR image support
RUN imageio_download_bin freeimage

RUN apt-get install -y libxi6 libgconf-2-4 libfontconfig1 libxrender1
Empty file modified docker/make_image.sh
100644 → 100755
4 changes: 4 additions & 0 deletions render_shapenet_data/.gitignore
@@ -0,0 +1,4 @@
*.blend1
shapenet_rendered
shapenet
tmp.out
100 changes: 85 additions & 15 deletions render_shapenet_data/README.md
@@ -1,28 +1,98 @@
# Render ShapeNet Dataset
This script will help you render ShapeNet and custom datasets that follow ShapeNet conventions.

## Prerequisites
- Python 3.7 or higher
- Blender 2.9 or higher
- OSX, Linux, or Windows running WSL2

## Setup
- Download the ShapeNet V1 or V2 dataset following the [official link](https://shapenet.org/)
- Make a new folder called shapenet in this directory, and unzip the downloaded file: `mkdir shapenet && unzip SHAPENET_SYNSET_ID.zip -d shapenet`
- Download Blender following the [official link](https://www.blender.org/)

## Installing Required Libraries
You will need the following libraries on Linux:
```
apt-get install -y libxi6 libgconf-2-4 libfontconfig1 libxrender1
```

Blender ships with its own distribution of Python, which you will need to add some libraries to:
```bash
cd BLENDER_PATH/2.90/python/bin
./python3.7m -m ensurepip
./python3.7m -m pip install numpy
```

## Data
The rendering script looks for datasets in the `dataset_list.json` file. You can modify this file to add your own paths, or point to your own JSON dataset list with the `--dataset_list <filename>` flag when invoking `render_all.py`.

- For **ShapeNetCore.v1**, you don't need to do any preprocessing. If you are using your own dataset, make sure that your models are sorted into directories that each contain a `model.obj`, following the expected conventions of ShapeNetCore.v1 (a quick layout check is sketched after this list).

- For **ShapeNetCore.v2**, make sure to pass the `--shapenet_version 2` flag to the `render_all.py` script -- this will destructively normalize your dataset folder to match the expected structure of ShapeNetCore.v1, while retaining the original .obj and .mtl file names.
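
If you are unsure whether a custom dataset matches this layout, the following minimal sketch (not part of this PR; the script name and helper are hypothetical) walks a dataset folder and reports any model directory that is missing a `model.obj`:

```python
# check_layout.py -- hypothetical helper, not part of this PR.
# Reports model directories that are missing the model.obj the renderer expects.
import os
import sys

def check_dataset_layout(dataset_dir):
    ok = True
    for name in sorted(os.listdir(dataset_dir)):
        model_dir = os.path.join(dataset_dir, name)
        if not os.path.isdir(model_dir):
            continue
        if not os.path.exists(os.path.join(model_dir, "model.obj")):
            print("missing model.obj in", model_dir)
            ok = False
    return ok

if __name__ == "__main__":
    # e.g. python check_layout.py ./shapenet/02958343
    sys.exit(0 if check_dataset_layout(sys.argv[1]) else 1)
```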

## Rendering

### Quick Start
Once you've modified dataset_list.json (or placed the ShapeNet data at the paths it lists), you can render your data like this:
```bash
python render_all.py
```

**Note:** The code will save the output from Blender to `tmp.out`. This is not necessary for training and can be removed with `rm -rf tmp.out`.

## Additional Flags
You can customize the rendering script by adding flags.

### Switch to Eevee for dramatically faster rendering speed
By default, the rendering script uses Cycles, which produces a photorealistic look but is slow (>10 s/frame). You can switch to Eevee, which has a more game-like look but renders much faster (<0.3 s/frame), and may be suitable for extracting a high-quality dataset on lower-end machines in a reasonable amount of time.
```bash
python render_all.py --engine EEVEE
```

### Render ShapeNet V2
For ShapeNetCore.v2 you will need to pass a flag to the render script to pre-process your data:
```bash
python render_all.py --shapenet_version 2
```

### Log to Console
By default, the script will log to a tmp.out file (quiet mode), but you can override this:
```bash
python render_all.py --quiet_mode 0
```
### Set Number of Views to Capture
The default for the rendering script is to capture 24 views per object. However, many NeRF pipelines recommend closer to 100 images. Especially if you are working with a limited but high-quality dataset, you should consider increasing the total number of views 2-4x.
```bash
python render_all.py --num_views 96
```

### Override Arguments
By default, the rendering script will save outputs to "shapenet_rendered", read all datasets from dataset_list.json, and use the default Blender installation on your system. However, you can override these arguments:
```bash
python render_all.py --save_folder PATH_TO_SAVE_IMAGE --dataset_list PATH_TO_DATASET_JSON --blender_root PATH_TO_BLENDER
```

## Modifying the Render Scene
You can open the base scenes (located in the blender directory) and modify the lighting. There are no objects in the scene, so you will need to import a test object. Just be careful to remove any scene objects before you save.
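
For a scripted workflow, the snippet below is a minimal sketch (an assumption, not something this PR provides) that uses Blender's Python API to open a base scene, import a temporary test object, and strip all mesh objects again before saving; the imported model path is hypothetical:

```python
# Minimal sketch using Blender's bundled Python, e.g.: blender -b -P edit_scene.py
# The imported model path is hypothetical; substitute any .obj you have on disk.
import bpy

# Open one of the base scenes shipped with this PR.
bpy.ops.wm.open_mainfile(filepath="blender/eevee_renderer.blend")

# Import a throwaway object so the lighting is visible while you tweak it.
bpy.ops.import_scene.obj(filepath="./shapenet/02958343/SOME_MODEL_ID/model.obj")

# ... adjust lights here, e.g. via bpy.data.lights ...

# Remove every mesh object again so the saved scene stays empty.
for obj in list(bpy.data.objects):
    if obj.type == 'MESH':
        bpy.data.objects.remove(obj, do_unlink=True)

bpy.ops.wm.save_mainfile()
```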

## Comparison Between Cycles and Eevee
Cycles is on the **Left**, Eevee is on the **Right**
<br />
<img src="docs_img/cycles.png" alt="drawing" width="400"/> <img src="docs_img/eevee.png" alt="drawing" width="400"/>

To render with Eevee headlessly (for example, on a server without a display), install a virtual display first:
```bash
apt-get install -y python-opengl
apt install -y xvfb
pip install pyvirtualdisplay
pip install pyglet
python3 render_parallel.py --num_views 96 --engine EEVEE --headless
```
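
If the flags above are not enough on a truly display-less machine, one option (an assumption, not something this PR provides) is to start a virtual X display from Python with pyvirtualdisplay before launching the renderer:

```python
# Sketch (assumption, not part of this PR): wrap the headless render in a
# virtual X display provided by pyvirtualdisplay / Xvfb.
import subprocess
from pyvirtualdisplay import Display

display = Display(visible=0, size=(1024, 1024))  # backed by Xvfb
display.start()
try:
    subprocess.check_call([
        "python3", "render_parallel.py",
        "--num_views", "96", "--engine", "EEVEE", "--headless",
    ])
finally:
    display.stop()
```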

## Attribution

- This code is adapted from this [GitHub repo](https://github.com/panmari/stanford-shapenet-renderer); we thank the author for sharing the code!
- The tome in the rendering comparison images was borrowed with permission from the [Loot Assets](https://github.com/webaverse/loot-assets) library.
Binary file added render_shapenet_data/blender/eevee_renderer.blend
20 changes: 20 additions & 0 deletions render_shapenet_data/dataset_list.json
@@ -0,0 +1,20 @@
[
    {
        "name": "Car",
        "id": "02958343",
        "scale": 0.9,
        "directory": "./shapenet/02958343"
    },
    {
        "name": "Chair",
        "id": "03001627",
        "scale": 0.7,
        "directory": "./shapenet/03001627"
    },
    {
        "name": "Motorbike",
        "id": "03790512",
        "scale": 0.9,
        "directory": "./shapenet/03790512"
    }
]
Binary file added render_shapenet_data/docs_img/cycles.png
Binary file added render_shapenet_data/docs_img/eevee.png
148 changes: 128 additions & 20 deletions render_shapenet_data/render_all.py
@@ -8,37 +8,145 @@

import os
import argparse
import json
import subprocess
from multiprocessing.pool import ThreadPool

# Connect EFS
# /home/user/mirage-dev/GET3D/render_shapenet_data/mirageml-dev/aman/experiements/GET3D/render_shapenet_data
# sudo sshfs [email protected]:/home/ubuntu/mirage-dev/ /home/user/mirage-dev/GET3D/render_shapenet_data/mirageml-dev -o IdentityFile=/home/user/mirage-dev/GET3D/render_shapenet_data/mirage-omniverse.pem -o allow_other

parser = argparse.ArgumentParser(description='Renders given obj file by rotating a camera around it.')
parser.add_argument(
    '--save_folder', type=str, default='./shapenet_rendered',
    help='path for saving rendered image')
parser.add_argument(
    '--dataset_list', type=str, default='./dataset_list.json',
    help='path to a json linking datasets')
parser.add_argument(
    '--blender_root', type=str, default='blender',
    help='path for blender')
parser.add_argument(
    '--shapenet_version', type=str, default='1',
    help='ShapeNet version 1 or 2')
parser.add_argument(
    '--num_views', type=str, default='24',
    help='number of views to capture per object')
parser.add_argument(
    '--engine', type=str, default='CYCLES',
    help='use CYCLES or EEVEE - CYCLES is a realistic path tracer (slow), EEVEE is a real-time renderer (fast)')
parser.add_argument(
    '--quiet_mode', type=str, default='1',
    help='route console output to a log file (tmp.out); pass 0 to log to the console')
args = parser.parse_args()


engine = args.engine
quiet_mode = args.quiet_mode
save_folder = args.save_folder
dataset_list = args.dataset_list
blender_root = args.blender_root
shapenet_version = args.shapenet_version
num_views = args.num_views

# check if dataset_list exists, throw error if not
if not os.path.exists(dataset_list):
    raise ValueError('dataset_list does not exist!')

# check if save_folder exists
if not os.path.exists(save_folder):
    os.makedirs(save_folder)

scale_list = []
path_list = []

# read and parse the json file at dataset_list
with open(dataset_list, 'r') as f:
    dataset = json.load(f)

# example json entry:
# {
#     "name": "Car",
#     "id": "02958343",
#     "scale": 0.9,
#     "directory": "./shapenet/02958343"
# }
for entry in dataset:
    scale_list.append(entry['scale'])
    path_list.append(entry['directory'])


# for shapenet v2, we normalize the model location
if shapenet_version == '2':
    for obj_scale, dataset_folder in zip(scale_list, path_list):
        file_list = sorted(os.listdir(os.path.join(dataset_folder)))
        for file in file_list:
            # check if the per-model directory has a 'models' subfolder
            if os.path.exists(os.path.join(dataset_folder, file, 'models')):
                # move all files from the 'models' subfolder up one level
                os.system('mv ' + os.path.join(dataset_folder, file, 'models/*') + ' ' + os.path.join(dataset_folder, file))
                # remove the now-empty 'models' subfolder
                os.system('rm -rf ' + os.path.join(dataset_folder, file, 'models'))
                material_file = os.path.join(dataset_folder, file, 'model_normalized.mtl')
                # read material_file as text, replace any instance of '../images' with './images'
                with open(material_file, 'r') as f:
                    material_file_text = f.read()
                material_file_text = material_file_text.replace('../images', './images')
                # write the modified text back to material_file
                with open(material_file, 'w') as f:
                    f.write(material_file_text)

# ShapeNetCore v2 normalizes the scale and orientation of the models, and the file names change as a result
model_name = 'model.obj'
if shapenet_version == '2':
    model_name = 'model_normalized.obj'

suffix = ''
if args.quiet_mode == '1':
    suffix = ' >> tmp.out'

for obj_scale, dataset_folder in zip(scale_list, path_list):
    file_list = sorted(os.listdir(os.path.join(dataset_folder)))
    num = None  # set to the number of workers you want (it defaults to the cpu count of your machine)
    tp = ThreadPool(num)

    def work(file):
        output_dir = "/home/user/mirage-dev/GET3D/render_shapenet_data/mirageml-dev/aman/experiements/GET3D/shapenet_rendered"
        camera_dir = os.path.abspath(os.path.join(save_folder, "camera", dataset_folder.split("/")[-1], file))
        camera_save_dir = os.path.join(output_dir, "camera", dataset_folder.split("/")[-1], file)
        img_dir = os.path.abspath(os.path.join(save_folder, "img", dataset_folder.split("/")[-1], file))
        img_save_dir = os.path.join(output_dir, "img", dataset_folder.split("/")[-1], file)

        if os.path.exists(camera_save_dir) and os.path.exists(img_save_dir):
            print("Files Exist on EFS; ", file)
            if os.path.exists(camera_dir) and os.path.exists(img_dir):
                print("Removing Local: ", file)
                subprocess.call(["rm", "-rf", camera_dir])
                subprocess.call(["rm", "-rf", img_dir])
            return
        elif os.path.exists(camera_dir) and os.path.exists(img_dir):
            print("Files Exist Locally Moving to EFS: ", file)
            subprocess.call(["mv", camera_dir, camera_save_dir])
            subprocess.call(["mv", img_dir, img_save_dir])
            return

        print("Rendering: ", file)
        render_cmd = '%s -b -P render_shapenet.py -- --output %s %s --scale %f --views %s --engine %s%s' % (
            blender_root, save_folder, os.path.join(dataset_folder, file, model_name), obj_scale, num_views, engine, suffix
        )
        os.system(render_cmd)

        print("Moving:", camera_dir, camera_save_dir)
        subprocess.call(["mv", camera_dir, camera_save_dir])
        print("Moving:", img_dir, img_save_dir)
        subprocess.call(["mv", img_dir, img_save_dir])

    for idx, file in enumerate(file_list):
        tp.apply_async(work, (file,))

    tp.close()
    tp.join()


