Yolov8s no bounding box on default settings #597

Open
lida2003 opened this issue Dec 5, 2024 · 30 comments
@lida2003

lida2003 commented Dec 5, 2024

The issue is quite similar to #390, but I need to use the latest versions.

Here is the video when I test yolov8s: https://drive.google.com/file/d/1I5MGC9_91h0drNASEM2z9VQUDptLNW_4/view?usp=drive_link

yolov4 is OK, https://drive.google.com/file/d/1bIdyqcfNa6JbuOyBR6NYOjnPqPTp-o-m/view?usp=sharing

Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.216-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 11.4.315
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.4.8
 - OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3

Python Environment:
Python 3.8.10
    GStreamer:                   YES (1.16.3)
  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
        OpenCV version: 4.9.0  CUDA True
          YOLO version: 8.3.33
         Torch version: 2.1.0a0+41361538.nv23.06
   Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8
@PaoXi

PaoXi commented Dec 9, 2024

Did you solve this error? After running the default yolov8s with deepstream-app -c deep*, no detections are shown, no bounding boxes.

@lida2003
Author

@PaoXi No, I haven't had time to dig into it, and I haven't found any clues yet.

@marcoslucianops
Owner

@PaoXi is your board also Orin Nano?

lida2003 added a commit to SnapDragonfly/jetson-fpv that referenced this issue Dec 10, 2024
ok for rtp/h264)

- [How to configure h265 stream?](marcoslucianops/DeepStream-Yolo#600)
- [Yolov8s no bounding box on default settings #597](marcoslucianops/DeepStream-Yolo#597)
@PaoXi

PaoXi commented Dec 10, 2024

Yes, it's the 16 GB version.

@marcoslucianops
Owner

Can someone send me the exported onnx file (from Orin Nano)?

@PaoXi

PaoXi commented Dec 10, 2024

There are two versions: one exported using the Ultralytics API, the other directly with a PyTorch script.

https://www.mediafire.com/file/edzascweikxrup9/yolov8s.pt.onnx/file
https://www.mediafire.com/file/5qxn3sxbqhym53q/yolov8sss.onnx/file

@lida2003
Author

@marcoslucianops I don't know the exact model version, but I downloaded it here.

Export from .pt to ONNX command:

$ yolo export model=yolov8s.pt format=onnx
$ yolo version
8.3.33

Attached below:

@marcoslucianops
Owner

@lida2003 you need to export with the utils/export_yoloV8.py. See the docs/YOLOv8.md.

@lida2003
Author

@marcoslucianops The result is the same (no bounding boxes), but now I have the right way to generate the ONNX file, thanks.

PS: delete model_b1_gpu0_fp32.engine before running deepstream-app -c deep*.
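The PS above can be sketched as a small cleanup step. The engine file name is the one used in this thread; the config file name is an assumption:

```python
# Remove the cached TensorRT engine so deepstream-app re-parses the new
# ONNX file instead of loading the stale serialized engine.
# A minimal sketch; the config path below is a placeholder.
import os

engine = "model_b1_gpu0_fp32.engine"
if os.path.exists(engine):
    os.remove(engine)  # force a rebuild from the ONNX on the next run
# Then rerun the pipeline, e.g.:
# subprocess.run(["deepstream-app", "-c", "deepstream_app_config.txt"])
```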

@lida2003
Author

lida2003 commented Dec 14, 2024

@marcoslucianops We are trying to reproduce the performance mentioned in the following link, which claims 181 FPS with INT8 precision on a Jetson Orin NX. However, we are currently stuck on this bounding box issue. Any suggestions?

EDIT: BTW, I did try wget https://github.com/ultralytics/assets/releases/download/v8.2.0/yolov8s.pt; the result is the same, no bounding boxes. I don't know what the issue might be or what I can do about it.

@marcoslucianops
Owner

marcoslucianops commented Dec 16, 2024

I didn't get detections with your models, @lida2003 and @PaoXi, but I got detections exporting the model here (from Ultralytics, using export_yoloV8.py and PyTorch 2.4). What PyTorch version are you using?

@lida2003
Author

lida2003 commented Dec 17, 2024

@marcoslucianops

This PyTorch is from the NVIDIA binary release:

Python Environment:
Python 3.8.10
    GStreamer:                   YES (1.16.3)
  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
        OpenCV version: 4.9.0  CUDA True
          YOLO version: 8.3.33
         Torch version: 2.1.0a0+41361538.nv23.06
   Torchvision version: 0.16.1+fdea156
DeepStream SDK version: 1.1.8

EDIT: This is the latest (maybe the last) binary release for Jetpack 5.1.4 (Ubuntu 20.04).

@marcoslucianops
Owner

Can you try v2.0.0 and v1.14.0?

@lida2003
Author

lida2003 commented Dec 17, 2024

Can you try v2.0.0 and v1.14.0?

No, on this board running Jetpack 5.1.3/5.1.4 there is only one release version, torch-2.1.0a0+41361538.nv23.06-cp38-cp38-linux_aarch64.whl, which is in developer.download.nvidia.cn/compute/redist/jp/v512/pytorch/. Old versions (links) might have been removed from their web server.

I will try to export in an x86 environment to see if it helps, and will get back to you soon.

Also, I have made a request here: [REQUEST] build script for pytorch or up-to-date pytorch binary release supporting jetson boards running L4T35.6 (ubuntu20.04)

@marcoslucianops
Owner

I think the versions I mentioned also work on 5.1.3/5.1.4. Can you try?

@lida2003
Author

lida2003 commented Dec 17, 2024

@marcoslucianops Do you mean it might be related to the PyTorch version?

  • [NG] As I'm using 2.1 (from NVIDIA), I'm NOT sure what version @PaoXi was using.
  • [OK] From the previous discussion, you were using 2.4 (not sure if the GPU or CPU build).

We don't have any other PyTorch GPU build for L4T 35.6 except 2.1, so the only possibilities are the CPU builds, which might be the v2.0.0 or v1.14.0 you are referring to.

So I'm going to use the latest version on an x86 (Ubuntu 22.04) laptop to export the .pt file if possible, which will not mess up the current Jetson environment (I messed up the environment just a couple of weeks ago).

EDIT: x86 is downloading torch-2.5.1-cp310-cp310-manylinux1_x86_64.whl; well, it's time-consuming... :(

@lida2003
Author

lida2003 commented Dec 17, 2024

@marcoslucianops Do you mean it might be related to the PyTorch version?

Well, I don't have much time today. But... it's great! It works using the ONNX file exported below on my Jetson Orin board.

          YOLO version: 8.3.50
         Torch version: 2.5.1+cu124
   Torchvision version: 0.20.1+cu124
$ python3 ./utils/export_yoloV8.py -w yolov8s.pt --dynamic
  • yolov8s.pt.x86.onnx works fine, and the speed is OK, stable at 30 FPS (file:///opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h265.mp4).

Note 1: model_b1_gpu0_fp32.engine was removed before using the new ONNX file.
Note 2: If needed, I will try v2.0.0/v1.14.0 once I have time and my current job is done.

PS: The above test takes quite a lot of time downloading Python components.


EDIT: The yolo command should not be used to export (yolov8s.pt.yolo.onnx); it didn't work:

$ yolo export model=yolov8s.pt format=onnx
$ yolo version
8.3.50

@PaoXi

PaoXi commented Dec 19, 2024

Thanks, @lida2003, for your comment and for sharing this approach! This version works fine for me as well. I'm grateful for your insights; it's been super helpful.

By the way, could you share more details about how you exported this ONNX version? I'm curious if there are specific steps or tweaks you used that made it work so well.

Also, just to share my setup, this ONNX file worked for me with the following PyTorch version:

  • torch: 1.14.0a0+44dac51c.nv23.2
  • torchvision: 0.14

Here’s my Jetson setup:

Model: NVIDIA Orin NX Developer Kit - Jetpack 5.1.2 [L4T 35.4.1]  
NV Power Mode[0]: MAXN  
Platform: Ubuntu 20.04 (focal)  
Kernel: 5.10.120-tegra  
Libraries:  
  - CUDA: 11.4.315  
  - cuDNN: 8.6.0.166  
  - TensorRT: 8.5.2.2  
  - VPI: 2.3.9  
  - Vulkan: 1.3.204  
  - OpenCV: 4.5.5 (with CUDA: YES)  

Looking forward to your thoughts!

@lida2003
Author

lida2003 commented Dec 19, 2024

By the way, could you share more details about how you exported this ONNX version? I'm curious if there are specific steps or tweaks you used that made it work so well.

No special steps, just used PyTorch 2.5.1 on x86 to export the ONNX as the guide says.

But I have found an issue related to BYTETrack here: #605. Not sure if it's related to the ONNX file.

@marcoslucianops Do you mean it might be related to the PyTorch version?

  • [NG] As I'm using 2.1 (from NVIDIA), I'm NOT sure what version @PaoXi was using.
  • [OK] From the previous discussion, you were using 2.4 (not sure if the GPU or CPU build).
  • [OK] 2.5.1+cu124, x86-exported ONNX file
  • [OK] 1.14.0a0+44dac51c.nv23.2, Jetson Orin NX-exported ONNX file

@marcoslucianops given the above results, what might the root cause be?

I'm trying to build PyTorch 2.5.1 for Jetson Orin; there are still some issues.

@sdhamodaran

@marcoslucianops Hi, I could not get this to show the detection boxes at all. I don't know where I am going wrong. As you suggested, I have tried torch 2.0 and 1.4 and still could not get it to work. I am on a Jetson Orin Nano 8 GB, deepstream-app version 6.2.0, DeepStreamSDK 6.2.0.

@lida2003
Author

@sdhamodaran It might have something to do with onnxruntime. Take the onnxruntime version into account as well.

@sdhamodaran

Thanks for your reply. I have version '1.19.2' of onnxruntime. Which one do you have?

@lida2003
Author

lida2003 commented Jan 16, 2025

@sdhamodaran I use onnxruntime_gpu-1.17.0-cp38-cp38-linux_aarch64.whl, which comes from NVIDIA, on Jetson Orin.

I have upgraded PyTorch to pytorch-v2.5.1+l4t35.6-cp38-cp38-aarch64 on Jetson Orin L4T 35.6 Jetpack 5.1.4, and it still seems to have trouble.

Right now, I'm compiling onnxruntime 1.19.2; still hitting issues on Jetson Orin L4T 35.6 Jetpack 5.1.4.

BTW, you can export the ONNX file using x86, or try @PaoXi's combination: torch 1.14.0a0+44dac51c.nv23.2 + torchvision 0.14.

And we have uploaded those ONNX files, which work for us; you can try them.

@PaoXi

PaoXi commented Jan 16, 2025

Have you tried the last file exported by @lida2003? Try exporting the ONNX file on an x86 workstation using the latest versions of CUDA, PyTorch, and ONNX Runtime. Then use the generated ONNX file to build the engine file.
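A rough sketch of that flow, assuming the DeepStream-Yolo repo layout and file names used in this thread; the scp target and config file name are placeholders:

```python
# Hypothetical x86-export-then-Jetson-build flow. Printed for illustration;
# on a real setup these commands would be run on the respective machines.
x86_steps = [
    "python3 utils/export_yoloV8.py -w yolov8s.pt --dynamic",  # on x86
    "scp yolov8s.pt.onnx jetson:DeepStream-Yolo/",             # copy over
]
jetson_steps = [
    "rm -f model_b1_gpu0_fp32.engine",              # drop the stale engine
    "deepstream-app -c deepstream_app_config.txt",  # rebuild engine + run
]
for step in x86_steps + jetson_steps:
    print(step)
```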

@sdhamodaran

Hi @PaoXi and @lida2003, thanks for offering your exported file. That worked well. So it is the ONNX file creation process that is causing the issue.

@sdhamodaran

Also, a general question: when you try different combos of torch, torchvision, and onnxruntime, from which step do you repeat the process? I start from creating the ONNX file and then run the deepstream-app command. Am I doing it right?

@lida2003
Author

So it is the ONNX file creation process that is causing the issue.

Yes, but I don't know the root cause. Probably onnxruntime; need to confirm.

When the x86 ONNX file worked, I thought it was a PyTorch version issue. After upgrading to 2.5.1 on Jetson, I found exporting on Jetson still fails, so now I think it might be onnxruntime.

Anyway, the x86 ONNX file works. We need more tests, or a more experienced person to take the time and effort to find the root cause.

I am starting from creating the ONNX file and then running the deepstream-app command. Am I doing it right?

Yes, you are absolutely right.

@lida2003
Author

@marcoslucianops As you can see in my first post of this thread, I hadn't noticed onnxruntime. But I checked my system, and there are two onnxruntime packages, with and without GPU. How can I distinguish whether the application is actually using onnxruntime or onnxruntime-gpu in code?

python3 ./utils/export_yoloV8.py -w yolov8s.pt --dynamic

$ sudo ./wrapper.sh version
Skipping CMD_KEYMONITOR execution for module 'version'.
Executing command on module version:

Software part of jetson-stats 4.2.12 - (c) 2024, Raffaello Bonghi
Model: NVIDIA Orin Nano Developer Kit - Jetpack 5.1.4 [L4T 35.6.0]
NV Power Mode[0]: 15W
Serial Number: [XXX Show with: jetson_release -s XXX]
Hardware:
 - P-Number: p3767-0005
 - Module: NVIDIA Jetson Orin Nano (Developer kit)
Platform:
 - Distribution: Ubuntu 20.04 focal
 - Release: 5.10.216-tegra
jtop:
 - Version: 4.2.12
 - Service: Active
Libraries:
 - CUDA: 11.8.89
 - cuDNN: 8.6.0.166
 - TensorRT: 8.5.2.2
 - VPI: 2.4.8
 - Vulkan: 1.3.204
 - OpenCV: 4.9.0 - with CUDA: YES
DeepStream C/C++ SDK version: 6.3

Python Environment:
Python 3.8.10
    GStreamer:                   YES (1.16.3)
  NVIDIA CUDA:                   YES (ver 11.4, CUFFT CUBLAS FAST_MATH)
         OpenCV version: 4.9.0  CUDA True
           YOLO version: 8.3.33
         PYCUDA version: 2024.1.2
          Torch version: 2.5.1+l4t35.6
    Torchvision version: 0.20.1a0+3ac97aa
 DeepStream SDK version: 1.1.8
onnxruntime     version: 1.16.3
onnxruntime-gpu version: 1.18.0
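One way to answer the question above is to ask the imported onnxruntime module which execution providers it offers; only the GPU build lists CUDAExecutionProvider. A sketch, assuming one of the two packages is installed in the environment:

```python
# Check which onnxruntime build the interpreter actually imports.
# CUDAExecutionProvider is only present in the GPU build.
try:
    import onnxruntime as ort
    gpu_build = "CUDAExecutionProvider" in ort.get_available_providers()
    print("onnxruntime", ort.__version__, "GPU build:", gpu_build)
except ImportError:
    gpu_build = False
    print("onnxruntime is not installed")
```

Note that if both onnxruntime and onnxruntime-gpu are installed under the same name, whichever was installed last typically wins, so this check reflects what the application would actually load.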
