This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.
Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® oneContainer Portal](https://software.intel.com/containers).
## Purpose of the Model Zoo
- Demonstrate the AI workloads and deep learning models Intel has optimized and validated to run on Intel hardware

| Use Case | Framework | Model | Mode | Instructions |
| -------- | --------- | ----- | ---- | ------------ |
| Language Modeling | TensorFlow | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Inference | [FP32](language_modeling/tensorflow/bert_large/README.md#fp32-inference-instructions) [BFloat16**](language_modeling/tensorflow/bert_large/README.md#bfloat16-inference-instructions) |
| Language Modeling | TensorFlow | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Training | [FP32](language_modeling/tensorflow/bert_large/README.md#fp32-training-instructions) [BFloat16**](language_modeling/tensorflow/bert_large/README.md#bfloat16-training-instructions) |
| Language Translation | TensorFlow | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | Inference | [FP32](language_translation/tensorflow/bert/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [GNMT*](https://arxiv.org/pdf/1609.08144.pdf) | Inference | [FP32](language_translation/tensorflow/mlperf_gnmt/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer_LT_Official](https://arxiv.org/pdf/1706.03762.pdf) | Inference | [FP32](language_translation/tensorflow/transformer_lt_official/README.md#fp32-inference-instructions) |
| Language Translation | TensorFlow | [Transformer_LT_mlperf](https://arxiv.org/pdf/1706.03762.pdf) | Training | [FP32](language_translation/tensorflow/transformer_mlperf/README.md#fp32-training-instructions) [BFloat16**](language_translation/tensorflow/transformer_mlperf/README.md#bfloat16-training-instructions) |
| Recommendation | TensorFlow | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Inference | [Int8](recommendation/tensorflow/wide_deep_large_ds/README.md#int8-inference-instructions) [FP32](recommendation/tensorflow/wide_deep_large_ds/README.md#fp32-inference-instructions) |
| Recommendation | TensorFlow | [Wide & Deep Large Dataset](https://arxiv.org/pdf/1606.07792.pdf) | Training | [FP32](recommendation/tensorflow/wide_deep_large_ds/README.md#fp32-training-instructions) |
3. Clone the [intelai/models](https://github.com/intelai/models) repo,
   and then run the model scripts for online inference, batch inference, or accuracy. For `--data-location` in an accuracy run, use the ImageNet validation data path from step 1.
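The clone-and-run step above can be sketched as follows. The model name, precision, and data path below are illustrative assumptions, not values from this README; check the model's own README for the exact launch command and flags.

```shell
# Sketch of the clone-and-run flow (hypothetical model/precision/paths):
#   git clone https://github.com/intelai/models
#   cd models/benchmarks
# An accuracy run might then look like this, passing the ImageNet
# validation data path from step 1 via --data-location:
data_dir="/home/user/imagenet_val"   # assumed path, substitute your own
cmd="python launch_benchmark.py --model-name resnet50 --precision fp32 \
--mode inference --framework tensorflow --accuracy-only \
--data-location $data_dir"
echo "$cmd"
```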
   Each model run has user-configurable arguments, separated from the regular arguments by `--` at the end of the command. If not set, these arguments run with their default values. Below are example commands for each use case:
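As a minimal sketch of that `--` convention (the flag names and override values here are hypothetical, and this loop is illustrative, not the actual launch script's parser):

```shell
# Split a command line at '--': everything before it is a regular
# argument, everything after it is a user-configurable override.
# The values shown (steps=100, batch_size=64) are hypothetical examples.
regular_args=""
extra_args=""
seen_sep=0
for arg in --model-name resnet50 --precision fp32 -- steps=100 batch_size=64; do
  if [ "$arg" = "--" ]; then
    seen_sep=1
  elif [ "$seen_sep" -eq 1 ]; then
    extra_args="$extra_args $arg"
  else
    regular_args="$regular_args $arg"
  fi
done
echo "regular:$regular_args"   # regular: --model-name resnet50 --precision fp32
echo "extra:$extra_args"       # extra: steps=100 batch_size=64
```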