I’m trying to benchmark YOLOv8 models on the COCO dataset, similar to what you’ve done in your benchmark documentation. My goal is to compare the performance of engines generated with different precisions (FP32, FP16, and INT8) for different model sizes (n, s, m, l, x).
Approach so far:

1. **Using the Ultralytics backend:** I modified the Ultralytics YOLOv8 validation flow (`yolo detect val`) to run with a model generated via your ONNX export pipeline and converted into a TensorRT engine with DeepStream. Inference runs, but the results are severely degraded (mAP below 1%), which suggests the model's outputs are not being parsed correctly.

2. **Manual inference via the TensorRT Python bindings:** I ran the engine directly through TensorRT's Python API. Inference works, but I haven't managed to parse the outputs into meaningful detections. If I could decode the outputs correctly, this route would let me benchmark with pycocotools.
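For reference, here is roughly what my decoding attempt looks like. This is only a sketch under an assumption that may be wrong for this export pipeline: that the engine keeps the stock YOLOv8 detection head, a single output of shape `(1, 4 + num_classes, num_anchors)` where the first four rows are `(cx, cy, w, h)` in input-image pixels and the remaining rows are per-class scores (YOLOv8 has no separate objectness score). The function name and thresholds are mine, not from this repo.

```python
# Sketch: decode a YOLOv8-style raw detection head (assumed layout, see above).
# `output` is the engine output with the batch dimension stripped: a list of
# (4 + num_classes) rows, each of length num_anchors.
def decode_yolov8_output(output, conf_thres=0.25):
    """Return a list of (x1, y1, x2, y2, score, class_id) tuples."""
    num_anchors = len(output[0])
    detections = []
    for i in range(num_anchors):
        # Best class score for this anchor (rows 4+ are class scores)
        class_scores = [row[i] for row in output[4:]]
        score = max(class_scores)
        if score < conf_thres:
            continue
        cls = class_scores.index(score)
        cx, cy, w, h = (output[k][i] for k in range(4))
        # Convert center-size boxes to corner coordinates
        detections.append((cx - w / 2, cy - h / 2,
                           cx + w / 2, cy + h / 2, score, cls))
    return detections
```

After this, NMS and rescaling from the letterboxed input back to original image coordinates would still be needed before any mAP computation; if the DeepStream conversion instead fuses NMS into the engine (multiple outputs such as boxes/scores/classes), this decoding obviously does not apply.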
Question: What’s your recommended approach for benchmarking TensorRT YOLOv8 engines on the COCO dataset? Could you share scripts or guidelines for running inference with these engines correctly?
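For the pycocotools route, my understanding is that detections have to be converted into COCO's detection-results format before `COCOeval` can score them. A minimal sketch of that conversion follows; `class_to_category` is a hypothetical mapping I introduce here, since models trained on the 80-class COCO subset need the usual coco80-to-coco91 category-id remapping, and `image_id` must match the ids in the annotation file.

```python
# Sketch: convert decoded detections into COCO detection-results records.
# COCO bboxes are [x, y, width, height] in pixels, not corner coordinates.
def to_coco_results(detections, image_id, class_to_category):
    """detections: iterable of (x1, y1, x2, y2, score, class_id) in pixels.
    Returns a list of dicts in the COCO results format."""
    results = []
    for x1, y1, x2, y2, score, cls in detections:
        results.append({
            "image_id": image_id,
            "category_id": class_to_category[cls],
            "bbox": [round(x1, 3), round(y1, 3),
                     round(x2 - x1, 3), round(y2 - y1, 3)],
            "score": round(float(score), 5),
        })
    return results
```

The accumulated records can then be dumped with `json.dump` and evaluated with `pycocotools` via `coco_gt.loadRes(...)` and `COCOeval(coco_gt, coco_dt, "bbox")` — but I'd still like to know whether that matches how your published benchmarks were produced.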