Commit f9e2637

dmsuehir authored and Karthik Vadla committed

Remove A3C FP32, SSD-VGG16 FP32 and Int8, Inception V4 Int8, Inception ResNet V2 Int8, and RFCN Int8 (#150)
1 parent 65d1dd2 commit f9e2637

File tree

107 files changed (+4, -13897 lines)


benchmarks/README.md

Lines changed: 2 additions & 5 deletions
@@ -18,19 +18,16 @@ dependencies to be installed:
 | Adversarial Networks | TensorFlow | DCGAN | Inference | [FP32](adversarial_networks/tensorflow/dcgan/README.md#fp32-inference-instructions) |
 | Classification | TensorFlow | Wide & Deep | Inference | [FP32](classification/tensorflow/wide_deep/README.md#fp32-inference-instructions) |
 | Content Creation | TensorFlow | DRAW | Inference | [FP32](content_creation/tensorflow/draw/README.md#fp32-inference-instructions) |
-| Image Recognition | TensorFlow | Inception ResNet V2 | Inference | [Int8](image_recognition/tensorflow/inception_resnet_v2/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inception_resnet_v2/README.md#fp32-inference-instructions) |
+| Image Recognition | TensorFlow | Inception ResNet V2 | Inference | [FP32](image_recognition/tensorflow/inception_resnet_v2/README.md#fp32-inference-instructions) |
 | Image Recognition | TensorFlow | Inception V3 | Inference | [Int8](image_recognition/tensorflow/inceptionv3/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/inceptionv3/README.md#fp32-inference-instructions) |
-| Image Recognition | TensorFlow | Inception V4 | Inference | [Int8](image_recognition/tensorflow/inceptionv4/README.md#int8-inference-instructions) |
 | Image Recognition | TensorFlow | MobileNet V1 | Inference | [FP32](image_recognition/tensorflow/mobilenet_v1/README.md#fp32-inference-instructions) |
 | Image Recognition | TensorFlow | ResNet 101 | Inference | [Int8](image_recognition/tensorflow/resnet101/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet101/README.md#fp32-inference-instructions) |
 | Image Recognition | TensorFlow | ResNet 50 | Inference | [Int8](image_recognition/tensorflow/resnet50/README.md#int8-inference-instructions) [FP32](image_recognition/tensorflow/resnet50/README.md#fp32-inference-instructions) |
 | Image Recognition | TensorFlow | SqueezeNet | Inference | [FP32](image_recognition/tensorflow/squeezenet/README.md#fp32-inference-instructions) |
 | Image Segmentation | TensorFlow | 3D UNet | Inference | [FP32](image_segmentation/tensorflow/3d_unet/README.md#fp32-inference-instructions) |
 | Image Segmentation | TensorFlow | Mask R-CNN | Inference | [FP32](image_segmentation/tensorflow/maskrcnn/README.md#fp32-inference-instructions) |
 | Object Detection | TensorFlow | Fast R-CNN | Inference | [FP32](object_detection/tensorflow/fastrcnn/README.md#fp32-inference-instructions) |
-| Object Detection | TensorFlow | R-FCN | Inference | [Int8](object_detection/tensorflow/rfcn/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
+| Object Detection | TensorFlow | R-FCN | Inference | [FP32](object_detection/tensorflow/rfcn/README.md#fp32-inference-instructions) |
 | Object Detection | TensorFlow | SSD-MobileNet | Inference | [FP32](object_detection/tensorflow/ssd-mobilenet/README.md#fp32-inference-instructions) |
-| Object Detection | TensorFlow | SSD-VGG16 | Inference | [Int8](object_detection/tensorflow/ssd-vgg16/README.md#int8-inference-instructions) [FP32](object_detection/tensorflow/ssd-vgg16/README.md#fp32-inference-instructions) |
 | Recommendation | TensorFlow | NCF | Inference | [FP32](recommendation/tensorflow/ncf/README.md#fp32-inference-instructions) |
-| Reinforcement Learning | TensorFlow | A3C | Inference | [FP32](reinforcement_learning/tensorflow/a3c/README.md#fp32-inference-instructions) |
 | Text-to-Speech | TensorFlow | WaveNet | Inference | [FP32](text_to_speech/tensorflow/wavenet/README.md#fp32-inference-instructions) |

benchmarks/common/tensorflow/start.sh

Lines changed: 2 additions & 85 deletions
@@ -155,22 +155,6 @@ function 3d_unet() {
   fi
 }
 
-# A3C model
-function a3c() {
-  if [ ${PRECISION} == "fp32" ]; then
-
-    pip install opencv-python
-    export PYTHONPATH=${PYTHONPATH}:${MOUNT_EXTERNAL_MODELS_SOURCE}
-
-    CMD="${CMD} --checkpoint=${CHECKPOINT_DIRECTORY}"
-
-    PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
-  else
-    echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
-    exit 1
-  fi
-}
-
 # DCGAN model
 function dcgan() {
   if [ ${PRECISION} == "fp32" ]; then
@@ -289,9 +273,7 @@ function inception_resnet_v2() {
     exit 1
   fi
 
-  if [ ${PRECISION} == "int8" ]; then
-    PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
-  elif [ ${PRECISION} == "fp32" ]; then
+  if [ ${PRECISION} == "fp32" ]; then
     # Add on --in-graph and --data-location for int8 inference
     if [ ${MODE} == "inference" ] && [ ${ACCURACY_ONLY} == "True" ]; then
       CMD="${CMD} --in-graph=${IN_GRAPH} --data-location=${DATASET_LOCATION}"
@@ -305,21 +287,6 @@ function inception_resnet_v2() {
   fi
 }
 
-# inceptionv4 model
-function inceptionv4() {
-  if [ ${PRECISION} == "int8" ]; then
-    # For accuracy, dataset location is required
-    if [ "${DATASET_LOCATION_VOL}" == None ] && [ ${ACCURACY_ONLY} == "True" ]; then
-      echo "No dataset directory specified, accuracy cannot be calculated."
-      exit 1
-    fi
-    PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
-  else
-    echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
-    exit 1
-  fi
-}
-
 # Mask R-CNN model
 function maskrcnn() {
   if [ ${PRECISION} == "fp32" ]; then
@@ -451,16 +418,7 @@ function rfcn() {
     split_arg="--split=${split}"
   fi
 
-  if [ ${PRECISION} == "int8" ]; then
-    number_of_steps_arg=""
-
-    if [ -n "${number_of_steps}" ] && [ ${BENCHMARK_ONLY} == "True" ]; then
-      number_of_steps_arg="--number_of_steps=${number_of_steps}"
-    fi
-
-    CMD="${CMD} ${number_of_steps_arg} ${split_arg}"
-
-  elif [ ${PRECISION} == "fp32" ]; then
+  if [ ${PRECISION} == "fp32" ]; then
     if [[ -z "${config_file}" ]] && [ ${BENCHMARK_ONLY} == "True" ]; then
       echo "R-FCN requires -- config_file arg to be defined"
       exit 1
@@ -522,41 +480,6 @@ function ssd_mobilenet() {
   fi
 }
 
-# SSD-VGG16 model
-function ssd_vgg16() {
-  # In-graph is required
-  if [ "${IN_GRAPH}" == None ] ; then
-    echo "In graph must be specified!"
-    exit 1
-  fi
-
-  # For accuracy, dataset location is required, see README for more information.
-  if [ "${DATASET_LOCATION_VOL}" == "None" ] && [ ${ACCURACY_ONLY} == "True" ]; then
-    echo "No Data directory specified, accuracy will not be calculated."
-    exit 1
-  fi
-
-  if [ "${DATASET_LOCATION_VOL}" == "None" ] && [ ${BENCHMARK_ONLY} == "True" ]; then
-    DATASET_LOCATION=""
-  fi
-
-  if [ ${NOINSTALL} != "True" ]; then
-    pip install opencv-python
-  fi
-
-  if [ ${PRECISION} == "int8" ]; then
-    CMD="${CMD} --data-location=${DATASET_LOCATION}"
-  elif [ ${PRECISION} == "fp32" ]; then
-    CMD="${CMD} --in-graph=${IN_GRAPH} \
-      --data-location=${DATASET_LOCATION}"
-  else
-    echo "PRECISION=${PRECISION} is not supported for ${MODEL_NAME}"
-    exit 1
-  fi
-
-  PYTHONPATH=${PYTHONPATH} CMD=${CMD} run_model
-}
-
 # Wavenet model
 function wavenet() {
   if [ ${PRECISION} == "fp32" ]; then
@@ -612,8 +535,6 @@ echo "Log output location: ${LOGFILE}"
 MODEL_NAME=$(echo ${MODEL_NAME} | tr 'A-Z' 'a-z')
 if [ ${MODEL_NAME} == "3d_unet" ]; then
   3d_unet
-elif [ ${MODEL_NAME} == "a3c" ]; then
-  a3c
 elif [ ${MODEL_NAME} == "dcgan" ]; then
   dcgan
 elif [ ${MODEL_NAME} == "draw" ]; then
@@ -624,8 +545,6 @@ elif [ ${MODEL_NAME} == "inceptionv3" ]; then
   inceptionv3
 elif [ ${MODEL_NAME} == "inception_resnet_v2" ]; then
   inception_resnet_v2
-elif [ ${MODEL_NAME} == "inceptionv4" ]; then
-  inceptionv4
 elif [ ${MODEL_NAME} == "maskrcnn" ]; then
   maskrcnn
 elif [ ${MODEL_NAME} == "mobilenet_v1" ]; then
@@ -642,8 +561,6 @@ elif [ ${MODEL_NAME} == "squeezenet" ]; then
   squeezenet
 elif [ ${MODEL_NAME} == "ssd-mobilenet" ]; then
   ssd_mobilenet
-elif [ ${MODEL_NAME} == "ssd-vgg16" ]; then
-  ssd_vgg16
 elif [ ${MODEL_NAME} == "wavenet" ]; then
   wavenet
 elif [ ${MODEL_NAME} == "wide_deep" ]; then
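The start.sh changes above all follow one pattern: each supported model has a shell function, plus an `elif` branch at the bottom of the script that routes the lowercased `MODEL_NAME` to it, so removing a model means deleting both its function and its dispatch branch. A minimal sketch of that pattern, using hypothetical model names rather than the real script:

```shell
# Simplified sketch of the start.sh dispatch pattern (hypothetical names).
# One function per model; removing a model deletes both the function and
# its elif branch, as done here for a3c, inceptionv4, and ssd_vgg16.
dcgan() { echo "running dcgan at ${PRECISION:-fp32}"; }
wavenet() { echo "running wavenet at ${PRECISION:-fp32}"; }

dispatch() {
  # start.sh lowercases MODEL_NAME the same way before comparing
  model=$(echo "$1" | tr 'A-Z' 'a-z')
  if [ "$model" = "dcgan" ]; then
    dcgan
  elif [ "$model" = "wavenet" ]; then
    wavenet
  else
    echo "Unsupported model: $model"
    return 1
  fi
}

dispatch DCGAN   # prints "running dcgan at fp32"
```

A model name that no longer has a branch (e.g. `ssd-vgg16` after this commit) falls through to the error case, which is why the final `elif` chain had to be edited alongside the function removals.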

benchmarks/image_recognition/tensorflow/inception_resnet_v2/README.md

Lines changed: 0 additions & 162 deletions
@@ -2,170 +2,8 @@
 
 This document has instructions for how to run Inception ResNet V2 for the
 following modes/precisions:
-* [Int8 inference](#int8-inference-instructions)
 * [FP32 inference](#fp32-inference-instructions)
 
-## Int8 Inference Instructions
-
-1. Clone this [intelai/models](https://github.com/IntelAI/models)
-repository:
-
-```
-$ git clone https://github.com/IntelAI/models.git
-```
-
-This repository includes launch scripts for running benchmarks and the
-an optimized version of the Inception ResNet V2 model code.
-
-2. A link to download the pre-trained model is coming soon.
-
-3. Build a docker image using master of the official
-[TensorFlow](https://github.com/tensorflow/tensorflow) repository with
-`--config=mkl`. More instructions on
-[how to build from source](https://software.intel.com/en-us/articles/intel-optimization-for-tensorflow-installation-guide#inpage-nav-5).
-
-4. If you would like to run Inception ResNet V2 inference and test for
-accuracy, you will need the full ImageNet dataset. Benchmarking for latency
-and throughput do not require the ImageNet dataset.
-
-Register and download the
-[ImageNet dataset](http://image-net.org/download-images).
-
-Once you have the raw ImageNet dataset downloaded, we need to convert
-it to the TFRecord format. This is done using the
-[build_imagenet_data.py](https://github.com/tensorflow/models/blob/master/research/inception/inception/data/build_imagenet_data.py)
-script. There are instructions in the header of the script explaining
-its usage.
-
-After the script has completed, you should have a directory with the
-sharded dataset something like:
-
-```
-$ ll /home/myuser/datasets/ImageNet_TFRecords
--rw-r--r--. 1 user 143009929 Jun 20 14:53 train-00000-of-01024
--rw-r--r--. 1 user 144699468 Jun 20 14:53 train-00001-of-01024
--rw-r--r--. 1 user 138428833 Jun 20 14:53 train-00002-of-01024
-...
--rw-r--r--. 1 user 143137777 Jun 20 15:08 train-01022-of-01024
--rw-r--r--. 1 user 143315487 Jun 20 15:08 train-01023-of-01024
--rw-r--r--. 1 user 52223858 Jun 20 15:08 validation-00000-of-00128
--rw-r--r--. 1 user 51019711 Jun 20 15:08 validation-00001-of-00128
--rw-r--r--. 1 user 51520046 Jun 20 15:08 validation-00002-of-00128
-...
--rw-r--r--. 1 user 52508270 Jun 20 15:09 validation-00126-of-00128
--rw-r--r--. 1 user 55292089 Jun 20 15:09 validation-00127-of-00128
-```
-
-5. Next, navigate to the `benchmarks` directory in your local clone of
-the [intelai/models](https://github.com/IntelAI/models) repo from step 1.
-The `launch_benchmark.py` script in the `benchmarks` directory is
-used for starting a benchmarking run in a optimized TensorFlow docker
-container. It has arguments to specify which model, framework, mode,
-precision, and docker image to use, along with your path to the ImageNet
-TF Records that you generated in step 4.
-
-Substitute in your own `--data-location` (from step 4, for accuracy
-only), `--in-graph` pre-trained model file path (from step 2),
-and the name/tag for your docker image (from step 3).
-
-Inception ResNet V2 can be run for accuracy, latency benchmarking, or throughput
-benchmarking. Use one of the following examples below, depending on
-your use case.
-
-For accuracy (using your `--data-location`, `--accuracy-only` and
-`--batch-size 100`):
-
-```
-python launch_benchmark.py \
-    --model-name inception_resnet_v2 \
-    --precision int8 \
-    --mode inference \
-    --framework tensorflow \
-    --accuracy-only \
-    --batch-size 100 \
-    --docker-image tf_int8_docker_image \
-    --in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb \
-    --data-location /home/myuser/datasets/ImageNet_TFRecords
-```
-
-For latency (using `--benchmark-only`, `--socket-id 0` and `--batch-size 1`):
-
-```
-python launch_benchmark.py \
-    --model-name inception_resnet_v2 \
-    --precision int8 \
-    --mode inference \
-    --framework tensorflow \
-    --benchmark-only \
-    --batch-size 1 \
-    --socket-id 0 \
-    --docker-image tf_int8_docker_image \
-    --in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb
-```
-
-For throughput (using `--benchmark-only`, `--socket-id 0` and `--batch-size 128`):
-
-```
-python launch_benchmark.py \
-    --model-name inception_resnet_v2 \
-    --precision int8 \
-    --mode inference \
-    --framework tensorflow \
-    --benchmark-only \
-    --batch-size 128 \
-    --socket-id 0 \
-    --docker-image tf_int8_docker_image \
-    --in-graph /home/myuser/inception_resnet_v2_int8_pretrained_model.pb
-```
-
-Note that the `--verbose` flag can be added to any of the above commands
-to get additional debug output.
-
-6. The log file is saved to the
-`models/benchmarks/common/tensorflow/logs` directory. Below are
-examples of what the tail of your log file should look like for the
-different configs.
-
-Example log tail when running for accuracy:
-
-```
-Processed 49800 images. (Top1 accuracy, Top5 accuracy) = (0.8015, 0.9523)
-Processed 49900 images. (Top1 accuracy, Top5 accuracy) = (0.8016, 0.9524)
-Processed 50000 images. (Top1 accuracy, Top5 accuracy) = (0.8015, 0.9524)
-lscpu_path_cmd = command -v lscpu
-lscpu located here: /usr/bin/lscpu
-Ran inference with batch size 100
-Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_193854.log
-```
-
-Example log tail when benchmarking for latency:
-```
-Iteration 39: 0.052 sec
-Iteration 40: 0.052 sec
-Average time: 0.052 sec
-Batch size = 1
-Latency: 52.347 ms
-Throughput: 19.103 images/sec
-lscpu_path_cmd = command -v lscpu
-lscpu located here: /usr/bin/lscpu
-Ran inference with batch size 1
-Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_194938.log
-```
-
-Example log tail when benchmarking for throughput:
-```
-Iteration 39: 0.993 sec
-Iteration 40: 1.023 sec
-Average time: 0.996 sec
-Batch size = 128
-Throughput: 128.458 images/sec
-lscpu_path_cmd = command -v lscpu
-lscpu located here: /usr/bin/lscpu
-Ran inference with batch size 128
-Log location outside container: /home/myuser/intelai/models/benchmarks/common/tensorflow/logs/benchmark_inception_resnet_v2_inference_int8_20190104_195504.log
-```
-
-
 ## FP32 Inference Instructions
 
 1. Clone this [intelai/models](https://github.com/IntelAI/models)
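The FP32 instructions that remain in this README follow the same `launch_benchmark.py` flag pattern as the deleted Int8 commands. A hedged sketch of the FP32 latency analog, built as a string so the flag layout is visible; the `--docker-image` tag and `--in-graph` path are placeholders, not values taken from this commit:

```shell
# Hypothetical FP32 counterpart of the removed Int8 latency command.
# tf_fp32_docker_image and the .pb path below are illustrative placeholders.
launch_cmd="python launch_benchmark.py \
  --model-name inception_resnet_v2 \
  --precision fp32 \
  --mode inference \
  --framework tensorflow \
  --benchmark-only \
  --batch-size 1 \
  --socket-id 0 \
  --docker-image tf_fp32_docker_image \
  --in-graph /home/myuser/inception_resnet_v2_fp32_pretrained_model.pb"
echo "$launch_cmd"
```

Only `--precision` and the int8-specific artifact names change relative to the deleted examples; the rest of the flags are shared across precisions.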

benchmarks/image_recognition/tensorflow/inception_resnet_v2/inference/int8/__init__.py

Lines changed: 0 additions & 19 deletions
This file was deleted.
