
How to Benchmark YOLOv8 TensorRT Engine on COCO Dataset (Nvidia Jetson) #602

valentin-phoenix opened this issue Dec 11, 2024 · 0 comments
Hello,

I’m trying to benchmark YOLOv8 models on the COCO dataset, similar to what you’ve done in your benchmark documentation. My goal is to compare the performance of engines generated with different precisions (FP32, FP16, and INT8) for different model sizes (n, s, m, l, x).

Approaches so far:

  1. Using the Ultralytics backend:

I modified the Ultralytics YOLOv8 code to run validation (yolo detect val) with a model generated via your ONNX export pipeline and converted into a TensorRT engine using DeepStream. Inference runs, but the results are severely degraded (mAP below 1%), which suggests the model's outputs are not being parsed correctly.

  2. Manual inference via the TensorRT Python bindings:

I attempted manual inference on the engine using TensorRT's Python API (a sketch of my current attempt is below, after this list). I can run inference, but I haven't managed to parse the output into meaningful detections. If I could decode the outputs correctly, this route should allow benchmarking with pycocotools.
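For concreteness, here is roughly what I have for approach 2. It is only a sketch under several assumptions I have not verified against your export: a single input binding at index 0, a single output binding at index 1 with the stock Ultralytics layout (1, 84, 8400), and the TensorRT 8.x execute_v2 API that ships with JetPack. NMS and rescaling boxes back to the original image are still missing.

```python
# Minimal TensorRT inference + YOLOv8 output decoding sketch.
# ASSUMPTIONS: TensorRT 8.x, one input binding (index 0, 1x3x640x640,
# already letterboxed/normalized) and one output binding (index 1,
# 1x84x8400: 4 box coords + 80 class scores per anchor, no objectness).
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context

CONF_THRES = 0.25

def load_engine(path):
    logger = trt.Logger(trt.Logger.WARNING)
    with open(path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, image):
    """Run the engine on one preprocessed float32 NCHW image."""
    context = engine.create_execution_context()
    inp = np.ascontiguousarray(image.astype(np.float32))
    out = np.empty((1, 84, 8400), dtype=np.float32)  # assumed output shape
    d_inp = cuda.mem_alloc(inp.nbytes)
    d_out = cuda.mem_alloc(out.nbytes)
    cuda.memcpy_htod(d_inp, inp)
    # Bindings must follow the engine's binding order (assumed: input, output).
    context.execute_v2([int(d_inp), int(d_out)])
    cuda.memcpy_dtoh(out, d_out)
    return out

def decode(pred):
    """(1, 84, 8400) -> filtered boxes (COCO xywh), scores, class indices."""
    pred = pred[0].T                       # -> (8400, 84)
    boxes, scores = pred[:, :4], pred[:, 4:]  # [cx, cy, w, h], 80 class scores
    cls = scores.argmax(axis=1)
    conf = scores.max(axis=1)
    keep = conf > CONF_THRES
    # cxcywh -> top-left xywh (COCO bbox format); coords are still in
    # letterboxed input pixels -- NMS and rescaling to the original
    # image are still needed after this step.
    xy = boxes[keep, :2] - boxes[keep, 2:4] / 2
    wh = boxes[keep, 2:4]
    return np.hstack([xy, wh]), conf[keep], cls[keep]
```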

Question: What is your recommended approach for benchmarking TensorRT YOLOv8 engines on the COCO dataset? Could you provide scripts or guidelines for running inference with these engines correctly?
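For completeness, once the outputs decode correctly, the evaluation step I have in mind is the standard pycocotools flow below. The paths are placeholders for my local COCO val2017 setup, and detections.json stands for per-detection dicts in the COCO results format (image_id, category_id, bbox in xywh, score). One thing I am double-checking there is the class-id mapping: COCO category IDs are not contiguous (they run 1–90 with gaps), so feeding the 0–79 YOLO class indices straight through would also produce near-zero mAP.

```python
# Standard COCO mAP evaluation with pycocotools; paths are placeholders.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")  # ground truth
coco_dt = coco_gt.loadRes("detections.json")          # my engine's detections

# NOTE: category_id in detections.json must use COCO's non-contiguous
# 1-90 IDs, not the contiguous 0-79 model class indices.
ev = COCOeval(coco_gt, coco_dt, iouType="bbox")
ev.evaluate()
ev.accumulate()
ev.summarize()  # prints mAP@[.5:.95], mAP@.5, per-size AP/AR, etc.
```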
