How to use yolo_tiny to make INT8 Quantization? #46

Open · su26225 opened this issue Jun 8, 2021 · 3 comments
su26225 commented Jun 8, 2021

I made some changes in yolov4_416x416_qtz.json and accuracy_checker\adapters\yolo.py, as follows:
"type": "yolo_v3",
"anchors": "10.0, 14.0, 23.0, 27.0, 37.0, 58.0, 81.0, 82.0, 135.0, 169.0, 344.0, 319.0",
"classes": 2,
"coords": 4,
"num": 6,
"threshold": 0.001,
"anchor_masks": [ [3, 4, 5], [1, 2, 3]],
"outputs": ["detector/yolo-v4/Conv_1/BiasAdd/YoloRegion", "detector/yolo-v4/Conv_9/BiasAdd/YoloRegion"]
and, in the adapter:

```python
class YoloV3Adapter(Adapter):
    # parameter defaults changed:
    'anchors': ...,           # default='tiny_yolo_v3'
    'cells': ListField(...),  # default=[13, 26]
```

But I got this error:
```
Traceback (most recent call last):
  File "D:\python3.6.5\Scripts\pot-script.py", line 33, in <module>
    sys.exit(load_entry_point('pot==1.0', 'console_scripts', 'pot')())
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 37, in main
    app(sys.argv[1:])
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 56, in app
    metrics = optimize(config)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\app\run.py", line 123, in optimize
    compressed_model = pipeline.run(model)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\pipeline\pipeline.py", line 57, in run
    result = self.collect_statistics_and_run(model, current_algo_seq)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\pipeline\pipeline.py", line 67, in collect_statistics_and_run
    model = algo.run(model)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\algorithms\quantization\default\algorithm.py", line 93, in run
    self.algorithms[1].algo_collector.compute_statistics(model)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\statistics\collector.py", line 73, in compute_statistics
    _, stats = self._engine.predict(combined_stats, sampler)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\engines\ac_engine.py", line 169, in predict
    stdout_redirect(self._model_evaluator.process_dataset_async, **args)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\compression\utils\logger.py", line 132, in stdout_redirect
    res = fn(*args, **kwargs)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\libs\open_model_zoo\tools\accuracy_checker\accuracy_checker\evaluators\quantization_model_evaluator.py", line 153, in process_dataset_async
    batch_raw_predictions, batch_identifiers, batch_meta
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\libs\open_model_zoo\tools\accuracy_checker\accuracy_checker\evaluators\quantization_model_evaluator.py", line 99, in _process_ready_predictions
    return self.adapter.process(batch_raw_predictions, batch_identifiers, batch_meta)
  File "C:\Intel\openvino_2021.3.394\deployment_tools\tools\post_training_optimization_toolkit\libs\open_model_zoo\tools\accuracy_checker\accuracy_checker\adapters\yolo.py", line 393, in process
    predictions[b].append(raw_outputs[blob][b])
KeyError: 'detector/yolo-v4/Conv_1/BiasAdd/YoloRegion'
```

TNTWEN (Owner) commented Jun 8, 2021

Just change "anchors", "anchor_masks", and "outputs".

For "outputs", you can use Netron to locate the IR model's YoloRegion nodes.
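
If inspecting the graph in Netron is inconvenient, the IR's output names can also be listed with the Inference Engine Python API. A minimal sketch, assuming OpenVINO 2021.x; the model paths are placeholders:

```python
# Minimal sketch (OpenVINO 2021.x Inference Engine API); model paths are placeholders.
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="yolov4-tiny.xml", weights="yolov4-tiny.bin")

# These names are what the "outputs" list in the adapter config must match.
for name in net.outputs:
    print(name)
```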

su26225 (Author) commented Jun 10, 2021

> Just change "anchors", "anchor_masks", and "outputs".
>
> For "outputs", you can use Netron to locate the IR model's YoloRegion nodes.

Thanks. I have made the quantization from FP32 to INT8 successfully.

Now I am trying to make the quantization from FP32 to FP16. I have changed the parameters in yolov4_416x416_qtz.json as follows:

```json
"weights": {                      // weights quantization parameters used by MinMaxAlgorithm
    "bits": 16,                   // bit-width, default is 8
    "mode": "symmetric",          // quantization mode, default is "symmetric"
    "level_low": 0,               // minimum level of the integer range we quantize to; default is 0 for the unsigned range, -2^(bits-1) for the signed one
    "level_high": 65535,          // maximum level of the integer range; default is 2^bits - 1 for the unsigned range, 2^(bits-1) - 1 for the signed one
    "granularity": "perchannel"   // quantization scale granularity: "pertensor" (default) or "perchannel"
},
"activations": {
    "bits": 16,                   // number of quantization bits
    "mode": "symmetric",          // quantization mode
    "granularity": "pertensor"    // granularity: one scale for the whole output tensor
}
```
But the size of the .bin file has not changed, and I get this warning:

```
WARNING:compression.algorithms.quantization.fake_quantize_configuration:Fake quantize node detector/yolo-v4-tiny/Conv_14/Conv2D/fq_weights_1 does not support configuration from tool config file (mismatch with hardware config)
```

At the same time, the documentation notes that "changing the quantization scheme may lead to inability to infer such mode on the existing HW":
https://docs.openvinotoolkit.org/latest/pot_compression_algorithms_quantization_default_README.html

Can this quantization continue? Thanks!

TNTWEN (Owner) commented Jun 10, 2021

For FP32 -> FP16, just follow FAQ point 10. There is no need to use POT.
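
For reference, an FP16 IR is normally produced at conversion time by Model Optimizer (its --data_type FP16 option) rather than by POT; a minimal sketch of such a call, with the input model as a placeholder (the exact command for this repository is the one given in the FAQ):

```
python mo.py --input_model <frozen_graph>.pb --data_type FP16
```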
