
Commit 254b21a

Sync master with 'imz-r2.4'

Signed-off-by: Abolfazl Shahbazi <[email protected]>
2 parents: c0d9a4f + ad4cde7

File tree

1,615 files changed: +113,590 −34,783 lines


.gitignore

Lines changed: 2 additions & 0 deletions

@@ -11,3 +11,5 @@ test_data/
 download_glue_data.py
 data/
 output/
+**/**.whl
+**/**.tar.gz
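The two new ignore patterns exclude wheel and source-distribution archives at any depth in the tree. As a rough illustration only (this uses Python's pathlib recursive globbing, not git's ignore matcher, and the file names are hypothetical), `**`-style patterns match files both at the top level and in nested directories:

```python
import tempfile
from pathlib import Path

def find_build_artifacts(root: Path):
    """Return (wheels, tarballs) found anywhere under root, roughly
    mirroring what '**/**.whl' and '**/**.tar.gz' would ignore."""
    wheels = sorted(p.relative_to(root).as_posix() for p in root.glob("**/*.whl"))
    tarballs = sorted(p.relative_to(root).as_posix() for p in root.glob("**/*.tar.gz"))
    return wheels, tarballs

# Demo on a throwaway tree (hypothetical file names)
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "dist").mkdir()
    (root / "dist" / "pkg-1.0-py3-none-any.whl").touch()  # nested wheel
    (root / "pkg-1.0.tar.gz").touch()                     # top-level tarball
    wheels, tarballs = find_build_artifacts(root)
```

Both the nested wheel and the top-level tarball are picked up, which is the behavior the ignore rules rely on.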

CODEOWNERS

Lines changed: 7 additions & 6 deletions

@@ -1,17 +1,18 @@
 # Lines starting with '#' are comments.
 # Each line is a file pattern followed by one or more owners.

-# These owners will be the default owners for everything in the repo.
-* @mlukaszewski @claynerobison @chuanqi129 @agramesh1 @justkw
+# These owners will be the default owners for everything in the repo,
+# but PR owner should be able to assign other contributors when appropriate
+* @ashahba @claynerobison @dmsuehir
+datasets @ashahba @claynerobison @dzungductran
+docs @claynerobison @mhbuehler
+k8s @ashahba @dzungductran @kkasravi
+models @agramesh1 @ashraf-bhuiyan @riverliuintel @wei-v-wang

 # Order is important. The last matching pattern has the most precedence.
 # So if a pull request only touches javascript files, only these owners
 # will be requested to review.
 #*.js @octocat @github/js

 # You can also use email addresses if you prefer.
-

-# paddlepaddle
-**/paddlepaddle/** @kbinias @sfraczek @Sand3r- @lidanqing-intel @ddokupil @pmajchrzak @wojtuss
-**/PaddlePaddle/** @kbinias @sfraczek @Sand3r- @lidanqing-intel @ddokupil @pmajchrzak @wojtuss
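As the retained comments in the file note, order matters: the last matching pattern takes precedence, so the per-directory rules added here override the `*` default for their paths. A minimal sketch of that resolution rule (a simplified matcher built on fnmatch and prefix checks, not GitHub's actual implementation; the rules echo the updated file):

```python
from fnmatch import fnmatch

def resolve_owners(path, rules):
    """Return the owners from the LAST rule whose pattern matches path.
    Simplified: '*' is a catch-all; other patterns match by path
    prefix or shell-style glob."""
    owners = []
    for pattern, rule_owners in rules:
        if pattern == "*" or path.startswith(pattern) or fnmatch(path, pattern):
            owners = rule_owners  # later matches override earlier ones
    return owners

# Rules in file order, mirroring the new CODEOWNERS entries
rules = [
    ("*", ["@ashahba", "@claynerobison", "@dmsuehir"]),
    ("datasets", ["@ashahba", "@claynerobison", "@dzungductran"]),
    ("docs", ["@claynerobison", "@mhbuehler"]),
]
```

A file under docs/ resolves to the docs owners, while an unmatched path falls back to the `*` default.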

README.md

Lines changed: 12 additions & 13 deletions

@@ -3,8 +3,6 @@
 This repository contains **links to pre-trained models, sample scripts, best practices, and step-by-step tutorials** for many popular open-source machine learning models optimized by Intel to run on Intel® Xeon® Scalable processors.

 Model packages and containers for running the Model Zoo's workloads can be found at the [Intel® oneContainer Portal](https://software.intel.com/containers).
-Intel Model Zoo is also bundled as a part of
-[Intel® oneAPI AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html) (AI Kit).

 ## Purpose of the Model Zoo

@@ -17,20 +15,21 @@ For any performance and/or benchmarking information on specific Intel platforms,

 ## How to Use the Model Zoo

-### Getting Started
-
+### Getting Started using AI Kit
+- The Intel Model Zoo is released as a part of the [Intel® AI Analytics Toolkit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit.html)
+which provides a consolidated package of Intel’s latest deep and machine learning optimizations
+all in one place for ease of development. Along with Model Zoo, the toolkit also includes Intel
+optimized versions of deep learning frameworks (TensorFlow, PyTorch) and high performing Python
+libraries to streamline end-to-end data science and AI workflows on Intel architectures.
+- The [documentation here](/docs/general/tensorflow/AIKit.md) has instructions on how to get to
+the Model Zoo's conda environments and code directory within AI Kit.
+- There is a table of TensorFlow models with links to instructions on how to run the models [here](/benchmarks/README.md).
+- To get started you can refer to the [ResNet50 FP32 Inference code sample.](https://github.com/oneapi-src/oneAPI-samples/tree/master/AI-and-Analytics/Features-and-Functionality/IntelTensorFlow_ModelZoo_Inference_with_FP32_Int8)
+
+### Getting Started without AI Kit
 - If you know what model you are interested in, or if you want to see a full list of models in the Model Zoo, start **[here](/benchmarks)**.
 - For framework best practice guides, and step-by-step tutorials for some models in the Model Zoo, start **[here](/docs)**.

-- AI Kit provides a consolidated package of Intel’s latest deep and machine
-learning optimizations all in one place for ease of development. Along with
-Model Zoo, the toolkit also includes Intel optimized versions of deep
-learning frameworks (TensorFlow, PyTorch) and high performing Python libraries
-to streamline end-to-end data science and AI workflows on Intel architectures.
-
-|[Download AI Kit](https://software.intel.com/content/www/us/en/develop/tools/oneapi/ai-analytics-toolkit/) |[AI Kit Get Started Guide](https://software.intel.com/content/www/us/en/develop/documentation/get-started-with-ai-linux/top.html) |
-|---|---|
-
 ### Directory Structure
 The Model Zoo is divided into four main directories:
 - **[benchmarks](/benchmarks)**: Look here for sample scripts and complete instructions on downloading and running each Intel-optimized pre-trained model.

benchmarks/README.md

Lines changed: 60 additions & 47 deletions
Large diffs are not rendered by default.

benchmarks/common/base_model_init.py

Lines changed: 15 additions & 4 deletions

@@ -21,6 +21,7 @@
 import glob
 import json
 import os
+import sys
 import time


@@ -93,7 +94,7 @@ def __init__(self, args, custom_args=[], platform_util=None):
                 + " --map-by ppr:" + str(pps) + ":socket:pe=" + split_a_socket + " --cpus-per-proc " \
                 + split_a_socket + " " + self.python_exe

-    def run_command(self, cmd):
+    def run_command(self, cmd, replace_unique_output_dir=None):
         """
         Prints debug messages when verbose is enabled, and then runs the
         specified command.
@@ -118,7 +119,8 @@ def run_command(self, cmd):
                     "the list of cpu nodes could not be retrieved. Please ensure "
                     "that your system has numa nodes and numactl is installed.")
             else:
-                self.run_numactl_multi_instance(cmd)
+                self.run_numactl_multi_instance(
+                    cmd, replace_unique_output_dir=replace_unique_output_dir)
         else:
             if self.args.verbose:
                 print("Running: {}".format(str(cmd)))
@@ -136,7 +138,7 @@ def group_cores(self, cpu_cores_list, cores_per_instance):
             end_list.append(cpu_cores_list[-count:]) if count != 0 else end_list
         return end_list

-    def run_numactl_multi_instance(self, cmd):
+    def run_numactl_multi_instance(self, cmd, replace_unique_output_dir=None):
         """
         Generates a series of commands that call the specified cmd with multiple
         instances, where each instance uses the a specified number of cores. The
@@ -195,7 +197,15 @@ def run_numactl_multi_instance(self, cmd):
                 "numactl --localalloc --physcpubind={1}").format(
                 len(core_list), ",".join(core_list))
             instance_logfile = log_filename_format.format("instance" + str(instance_num))
-            instance_command = "{} {}".format(prefix, cmd)
+
+            unique_command = cmd
+            if replace_unique_output_dir:
+                # Swap out the output dir for a unique dir
+                unique_dir = os.path.join(replace_unique_output_dir,
+                                          "instance_{}".format(instance_num))
+                unique_command = unique_command.replace(replace_unique_output_dir, unique_dir)
+
+            instance_command = "{} {}".format(prefix, unique_command)
             multi_instance_command += "{} >> {} 2>&1 & \\\n".format(
                 instance_command, instance_logfile)
             instance_logfiles.append(instance_logfile)
@@ -209,6 +219,7 @@ def run_numactl_multi_instance(self, cmd):

         # Run the multi-instance command
         print("\nMulti-instance run:\n" + multi_instance_command)
+        sys.stdout.flush()
         os.system(multi_instance_command)

         # Wait to ensure that log files have been written
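The core of this change threads a new `replace_unique_output_dir` argument from `run_command` into `run_numactl_multi_instance`, so that each numactl instance writes to its own output directory instead of clobbering a shared one. A standalone sketch of just that substitution step (the command string and helper name here are hypothetical, simplified from the diff):

```python
import os

def make_instance_command(cmd, instance_num, replace_unique_output_dir=None):
    """Rewrite cmd so a given instance gets its own output directory,
    mirroring the substitution added in run_numactl_multi_instance."""
    if replace_unique_output_dir:
        # Swap out the shared output dir for a per-instance dir
        unique_dir = os.path.join(replace_unique_output_dir,
                                  "instance_{}".format(instance_num))
        cmd = cmd.replace(replace_unique_output_dir, unique_dir)
    return cmd

# Hypothetical benchmark command sharing one output dir
cmd = "python run.py --output-dir /tmp/output"
per_instance = [make_instance_command(cmd, i, "/tmp/output") for i in range(2)]
```

Each generated command now points at `instance_0`, `instance_1`, and so on under the original directory; when no replacement dir is given, the command passes through unchanged, preserving the old single-instance behavior.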
