modes/export/ #7933
81 comments · 225 replies
-
Where can we find working examples of a TF.js exported model?
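For reference, a minimal export sketch (assuming a standard Ultralytics install; the checkpoint name is just an example):

```python
from ultralytics import YOLO

# Export to TensorFlow.js format; this writes a yolov8n_web_model/ directory
# containing model.json plus binary weight shards for use in the browser.
model = YOLO("yolov8n.pt")
model.export(format="tfjs")
```

The resulting directory can then be loaded in the browser with tf.loadGraphModel("yolov8n_web_model/model.json").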
-
How do I use an exported .engine file for inference on a directory of images?
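A hedged sketch of one way to do this — Ultralytics loads exported .engine files directly, and predict() accepts a directory as a source (paths below are hypothetical):

```python
from ultralytics import YOLO

# Load the exported TensorRT engine just like a .pt checkpoint.
# The engine must run on the same GPU/TensorRT setup it was exported with.
model = YOLO("best.engine")

# Passing a directory runs inference on every image inside it.
results = model.predict(source="path/to/images", save=True)

for r in results:
    print(r.path, r.boxes.xyxy)  # per-image boxes in xyxy pixel coordinates
```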
-
I trained a custom model starting from yolov8n.pt (backbone) and I want to register the model in MLflow in the .engine format. Is this possible directly, without the export step? Has anyone dealt with something similar? Thanks for your help!
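As far as I know the .engine file only exists once the export step has produced it, but export() returns the output path, so logging it to MLflow afterwards is short. A sketch, assuming hypothetical paths and a running MLflow tracking server:

```python
import mlflow
from ultralytics import YOLO

model = YOLO("best.pt")  # hypothetical custom checkpoint
engine_path = model.export(format="engine")  # export returns the output file path

with mlflow.start_run():
    # Logs the .engine file as a run artifact; promoting it to the
    # Model Registry proper would take additional registry API calls.
    mlflow.log_artifact(engine_path)
```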
-
Hi, I appreciate the really awesome work within Ultralytics. I have a simple question. What is the difference between
-
Hello @pderrenger, can you please help me out with how I can use the PaddlePaddle format to extract text from images? Your response is very important to me; I am waiting for your reply.
-
My code:
from ultralytics import YOLO
model = YOLO('yolov8n_web_model/yolov8n.pt')  # load an official model
model = YOLO('/path_to_model/best.pt')
I got an error. The trace log is below:
ERROR: input_onnx_file_path: /home/ubuntu/Python/runs/detect/train155/weights/best.onnx
TensorFlow SavedModel: export failure ❌ 7.4s: SavedModel file does not exist at: /home/ubuntu/Python/runs/detect/train155/weights/best_saved_model/{saved_model.pbtxt|saved_model.pb}
What is wrong, and what do I need to do to fix it? Thanks a lot.
-
Hello! The error I get is "TypeError: Model.export() takes 1 positional argument but 2 were given".
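Model.export() takes its options as keyword arguments only, so passing the format positionally triggers exactly this TypeError:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")

# model.export("onnx")       # TypeError: export() takes 1 positional argument
model.export(format="onnx")  # correct: options must be passed as keywords
```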
-
Are there any examples of getting the output of a pose-estimation model in C++ using a TorchScript file? I'm getting an output of shape (1, 56, 8400) for an input of size (1, 3, 640, 640) with two people in the sample picture. How should I interpret/post-process this output?
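The (1, 56, 8400) layout is 8400 anchor columns of [cx, cy, w, h, score, then 17 keypoints as (x, y, visibility)]. A NumPy sketch of the decoding (the same arithmetic ports directly to C++/libtorch); NMS on the surviving boxes is still required afterwards:

```python
import numpy as np

def decode_pose(output, conf_thres=0.25):
    """Decode a raw (1, 56, 8400) YOLOv8-pose tensor."""
    preds = output[0].T          # (8400, 56): one row per candidate
    keep = preds[:, 4] > conf_thres
    preds = preds[keep]

    cx, cy, w, h = preds[:, 0], preds[:, 1], preds[:, 2], preds[:, 3]
    # Convert center format to corner format for NMS/drawing.
    boxes = np.stack([cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2], axis=1)
    scores = preds[:, 4]
    keypoints = preds[:, 5:].reshape(-1, 17, 3)  # (n, 17, (x, y, visibility))
    return boxes, scores, keypoints
```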
-
I trained a yolov5 detection model a little while ago and have successfully converted that model to TensorFlow.js. That tfjs model works as expected in code only slightly modified from the example available at https://github.com/zldrobit/tfjs-yolov5-example. My version of the relevant section:
I have now trained a yolov8 detection model on very similar data. The comments in https://github.com/ultralytics/ultralytics/blob/main/ultralytics/engine/exporter.py#L45-L49 led me to expect the exported model to behave the same way. However, that does not seem to be the case. The v5 model output is the 4-length array of tensors (which is why the destructuring assignment works), but the v8 model output is a single tensor of shape [1, X, 8400], so the example code results in an error complaining that the model result is non-iterable when attempting to destructure. From what I understand, [1, X, 8400] is the expected output shape of the v8 model. Is further processing of the v8 model required, or did I do something wrong during the pt -> tfjs export?
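You likely didn't do anything wrong: the v8 TF.js export emits a single [1, 4 + nc, 8400] tensor with no separate objectness branch, so the v5-era destructuring has to be replaced with your own decode + NMS. A NumPy sketch of the decode (the same indexing applies to the tf.Tensor you get in JS):

```python
import numpy as np

def decode_v8(output, conf_thres=0.25):
    """Decode a YOLOv8 detection tensor of shape (1, 4 + nc, 8400)."""
    preds = output[0].T              # (8400, 4 + nc)
    boxes = preds[:, :4]             # cx, cy, w, h at input-tensor scale
    class_scores = preds[:, 4:]      # v8 has no separate objectness score
    scores = class_scores.max(axis=1)
    class_ids = class_scores.argmax(axis=1)
    keep = scores > conf_thres
    return boxes[keep], scores[keep], class_ids[keep]
```

In TF.js the equivalent steps are a transpose, slice, max/argMax, and then tf.image.nonMaxSuppressionAsync on the converted corner boxes.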
-
I was wondering if anyone could help me with this code. I exported my custom-trained yolov8n.pt model to .onnx with model.export(format='onnx', int8=True, dynamic=True), but now my code is not working. I am having trouble using the outputs after running inference. My code:
def load_image(image_path): ...
def draw_bounding_boxes(image, detections, confidence_threshold=0.5): ...
def main(model_path, image_path): ...
if __name__ == "__main__": ...
Error:
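The error text didn't come through, but for comparison here is a hedged, self-contained sketch of ONNX Runtime inference on a v8 detection export (paths are hypothetical; letterbox padding is omitted for brevity, which slightly distorts aspect ratios):

```python
import cv2
import numpy as np
import onnxruntime as ort

def main(model_path, image_path, conf_thres=0.5):
    session = ort.InferenceSession(model_path)
    input_name = session.get_inputs()[0].name

    # Preprocess: BGR -> RGB, resize to 640x640, scale to [0, 1], NCHW.
    img = cv2.imread(image_path)
    blob = cv2.resize(img, (640, 640))[:, :, ::-1].astype(np.float32) / 255.0
    blob = np.ascontiguousarray(blob.transpose(2, 0, 1)[None])

    # Raw output is (1, 4 + nc, 8400); transpose to one row per candidate.
    preds = session.run(None, {input_name: blob})[0][0].T
    scores = preds[:, 4:].max(axis=1)
    detections = preds[scores > conf_thres]
    print(f"{len(detections)} candidates above {conf_thres} (NMS still required)")
    return detections

if __name__ == "__main__":
    main("best.onnx", "test.jpg")
```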
-
"batch_size" is not in arguments as previous versions? |
-
I converted the model I trained with custom data to TFLite format. Before converting, I set the int8 argument to true, but when I examined the TFLite file on the Netron website, I saw that the input is still float32. Is this normal, or is there a bug? Also, thank you very much for answering every question without getting bored.
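Besides Netron, you can confirm this programmatically. A float32 input tensor does not by itself mean the quantization failed: TFLite's full-integer conversion keeps float I/O by default and inserts quantize/dequantize ops at the boundaries while the internal ops run in int8. A quick check (model path is hypothetical):

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="best_int8.tflite")
interpreter.allocate_tensors()

# Inspect dtypes and (scale, zero_point) of the boundary tensors.
for detail in interpreter.get_input_details() + interpreter.get_output_details():
    print(detail["name"], detail["dtype"], detail["quantization"])
```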
-
!yolo export model=/content/drive/MyDrive/best-1-1.pt format=tflite
export failure ❌ 33.0s: generic_type: cannot initialize type "StatusCode": an object with that name is already defined
-
Hi, I have tried all the TFLite export routes to convert best.pt to .tflite, but none is working. I have also checked my runtime and all the latest imports (pip install -U ultralytics), and I have also tried the code you gave to someone in the comments, but the issue is not resolving:
# Step 1: Export to TensorFlow SavedModel
!yolo export model='/content/drive/MyDrive/best-1-1.pt' format=saved_model
# Step 2: Convert the exported SavedModel to TensorFlow Lite
import tensorflow as tf
...
# Save the TFLite model
with open('/content/drive/MyDrive/yolov8_model.tflite', 'wb') as f:
...
but the same error comes back.
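For completeness, the converter lines elided in the snippet above are usually the standard TensorFlow ones; a sketch assuming the SavedModel directory that the export step names after the checkpoint stem:

```python
import tensorflow as tf

# Convert the SavedModel directory produced by the YOLO export step
# (directory name assumed from the checkpoint stem).
converter = tf.lite.TFLiteConverter.from_saved_model(
    "/content/drive/MyDrive/best-1-1_saved_model"
)
tflite_model = converter.convert()

# Save the TFLite model.
with open("/content/drive/MyDrive/yolov8_model.tflite", "wb") as f:
    f.write(tflite_model)
```

The "StatusCode already defined" error itself looks like a native-library registration conflict in the runtime rather than a problem with these lines, so restarting the runtime before re-running the export is worth trying.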
-
Can we export a SAM / MobileSAM model to TensorRT or ONNX?
-
model.export(format="onnx") only supports onnx<=1.16.0; the newest 1.17.0 is not supported.
-
Why is it that when I export an ONNX file with 'int8' and 'dynamic' it works, but when I export the ONNX file with 'float16' and 'dynamic' it doesn't work?
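If I recall the exporter's behavior correctly, half=True (FP16) and dynamic=True are treated as mutually exclusive on the ONNX path, which would explain the failure; a sketch of the calls (checkpoint path hypothetical):

```python
from ultralytics import YOLO

model = YOLO("best.pt")

model.export(format="onnx", dynamic=True)  # dynamic input shapes, FP32 weights
model.export(format="onnx", half=True)     # FP16 weights, fixed input shapes
# Combining half=True with dynamic=True is rejected on the ONNX path
# (assumption from observed exporter behavior; check the printed error).
```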
-
Hi, could I set the ONNX opset version to 13, 11, or 10? Is it possible to downgrade the opset version?
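Yes — the exporter exposes an opset argument, so pinning an older opset is a one-line change:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")
# The opset argument pins the ONNX opset version of the exported graph.
model.export(format="onnx", opset=11)  # other opsets can be requested the same way
```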
-
Hello!
# Load a model
model = YOLO("./runs/detect/train_v8_bird10/weights/best.pt")  # load a custom trained model
# Export the model
model.export(format="torchscript", imgsz=320)
PyTorch: starting from 'runs\detect\train_v8_bird10\weights\best.pt' with input shape (1, 3, 320, 320) BCHW and output shape(s) (1, 5, 2100) (5.9 MB)
TorchScript: starting export with torch 1.12.1+cu116...
-
I am using a Tesla T4 GPU, but I can't achieve a 3 ms inference time; it takes 7-8 ms for the OpenVINO model.
-
Hello Team, could you let me know why this happens? What is the reason my code is not running smoothly with the expected speed-up on the NVIDIA Jetson AGX Orin 64GB Developer Kit?
-
Hello, I'm trying to convert a YOLOv8 custom-trained classification model .pt file to the IMX format to be used with the new Raspberry Pi AI Camera. However, during initialization it tries to find labels, but because it is a classification model and not a detection model, there are no labels.
Will this be a problem during exporting, or should I use other arguments? Thanks!
-
I have trained a detection model and exported it to a Paddle model, but the inference results differ between them: the Paddle model performs worse than the original model. Is there any parameter I am missing when exporting the model? Thanks in advance!
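One way to quantify the gap is to validate both the original checkpoint and the Paddle export on the same split; Ultralytics can run val() on exported models directly (paths hypothetical):

```python
from ultralytics import YOLO

# Compare mAP of the original checkpoint and the Paddle export
# on the same validation split.
pt_metrics = YOLO("best.pt").val(data="data.yaml")
paddle_metrics = YOLO("best_paddle_model/").val(data="data.yaml")
print(pt_metrics.box.map50, paddle_metrics.box.map50)
```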
-
I'm trying to export the pre-trained yolo11n.pt (COCO) model to ONNX for inferencing with ONNX Runtime in a C#/.NET environment.
PyTorch: starting from 'yolo11n.pt' with input shape (1, 3, 640, 640) BCHW and output shape(s) (1, 84, 8400) (5.4 MB)
ONNX: starting export with onnx 1.17.0 opset 19...
Export complete (2.8s)
An ONNX model was created, but the inference results are not what's expected (i.e., bounding-box coordinates and confidences look like the wrong ranges). I realize there are many possible causes, but two simple questions: 1) does the model need to be put in eval mode before exporting (model.eval()), and 2) I've read that exporting with torch requires a dummy input, but these instructions make no mention of it, nor do the export arguments allow for one. Can you clarify? Thank you, I appreciate your advice.
-
Glenn,
Thank you for the reply. The process of elimination continues.
Kevin.
From: Glenn Jocher
Sent: Wednesday, December 4, 2024 4:14 PM
Subject: Re: [ultralytics/ultralytics] modes/export/ (Discussion #7933)
@KPLogan 1) YOLO models are automatically set to evaluation mode during the export process, so calling model.eval() before export is unnecessary.
2) A dummy input is internally managed by the library during ONNX export, so you don't need to provide it explicitly.
If the inference outputs remain unexpected, ensure you've used the correct preprocessing and postprocessing steps in your C#/.NET environment. If the issue persists, you can revisit the ONNX documentation (https://docs.ultralytics.com/integrations/onnx/) or check your ONNX Runtime pipeline. Let us know if you encounter further issues!
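To complement the reply above: with the (1, 84, 8400) head, "wrong ranges" usually means the output wasn't transposed and rescaled. Rows 0-3 are cx, cy, w, h in input-tensor pixels (0..640) and rows 4-83 are the 80 class scores already in [0, 1]. A NumPy sketch of mapping back to original-image coordinates (the same steps apply in C#):

```python
import numpy as np

def postprocess(raw, ratio_w, ratio_h, conf_thres=0.25):
    """Map a raw (1, 84, 8400) ONNX output to original-image xyxy boxes."""
    preds = raw[0].T                      # (8400, 84): one row per candidate
    scores = preds[:, 4:].max(axis=1)
    keep = scores > conf_thres
    cx, cy, w, h = preds[keep, :4].T
    # Corner coordinates, undoing the preprocessing resize.
    boxes = np.stack([(cx - w / 2) * ratio_w, (cy - h / 2) * ratio_h,
                      (cx + w / 2) * ratio_w, (cy + h / 2) * ratio_h], axis=1)
    class_ids = preds[keep, 4:].argmax(axis=1)
    return boxes, scores[keep], class_ids  # NMS still required
```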
-
Hi,
model = YOLO("yolov8m-pose.pt")  # Load a model
I even changed some arguments, but without success. How should I do that?
-
Hey there. I'm exporting a finished YOLOv11 model, trained on a custom dataset, to TensorFlow Lite with this code:
...
I got a warning when exporting the model from .pt to .tflite. Is this okay?
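For reference, a minimal export call of the kind presumably used here (checkpoint path hypothetical); in my experience TensorFlow prints assorted deprecation warnings during conversion that are harmless as long as the export reports success, but verifying the exported model's predictions is the real test:

```python
from ultralytics import YOLO

model = YOLO("best.pt")
model.export(format="tflite", imgsz=640)  # writes a *_saved_model/*.tflite file
```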
-
Issue with incorrect predictions from a quantized YOLOv8m-obb model (TFLite) in the TensorFlow framework: I have exported my YOLOv8m-obb model to TFLite format with INT8 quantization enabled, using an image size of 640x640 and a data.yaml for my dataset. When I use the quantized model for inference with the Ultralytics framework (oriented bounding boxes), the predictions are correct. However, when I use the same model in the TensorFlow framework, I encounter several issues with the output.
I suspect there might be an issue with how the quantization parameters (scale and zero-point) are being applied in TensorFlow, or possibly with the way I'm handling the model's output or the way I exported the model. I would appreciate guidance on how to correctly handle the quantized model in TensorFlow and resolve the incorrect predictions.
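A sketch of applying the quantization parameters manually with the TFLite interpreter, which is the usual fix when Ultralytics handles the model correctly but a hand-rolled TensorFlow pipeline does not (model path and dummy input are placeholders):

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="yolov8m-obb_int8.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

image = np.random.rand(1, 640, 640, 3).astype(np.float32)  # stand-in, NHWC in [0, 1]

# Quantize the input only if the tensor actually expects int8.
if inp["dtype"] == np.int8:
    scale, zero_point = inp["quantization"]
    image = (image / scale + zero_point).astype(np.int8)

interpreter.set_tensor(inp["index"], image)
interpreter.invoke()
raw = interpreter.get_tensor(out["index"])

# Dequantize the output before decoding boxes and angles.
if out["dtype"] == np.int8:
    scale, zero_point = out["quantization"]
    raw = (raw.astype(np.float32) - zero_point) * scale
```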
-
I have a .pth model (not sure exactly what it is). How can I convert it to a .pt model?
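Both .pt and .pth are just torch.save files; the extension alone doesn't determine compatibility. What matters is whether the file is a full Ultralytics checkpoint dict or a bare state_dict, so inspecting it first is the safest move (filename hypothetical):

```python
import torch

# Inspect what the .pth actually contains before deciding how to convert it.
ckpt = torch.load("model.pth", map_location="cpu")
print(type(ckpt))
if isinstance(ckpt, dict):
    # A full Ultralytics checkpoint has keys like 'model' and 'epoch';
    # a bare state_dict is just parameter-name -> tensor.
    print(list(ckpt.keys())[:10])
```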
-
What should the 'opset' value be when exporting YOLOv8 to ONNX 1.20.1 for use with TensorRT 10.7, per its support matrix? Thanks.
-
modes/export/
Step-by-step guide on exporting your YOLOv8 models to various formats like ONNX, TensorRT, CoreML, and more for deployment. Explore now!
https://docs.ultralytics.com/modes/export/