Inconsistent TFLite Model Results Between detect.py and Custom Inference Code #13465
Comments
👋 Hello @yAlqubati, thank you for your interest in YOLOv5 🚀! It looks like you're experiencing differences between detect.py and your custom TFLite inference code.

For a 🐛 Bug Report, could you please confirm whether the issue is reproducible with the latest YOLOv5 code? If not already done, providing a minimum reproducible example (MRE) with a simplified version of your test image, model, and code is immensely helpful for debugging.

Requirements

Ensure that you are using Python>=3.8.0 with all necessary libraries installed and that the environment is correctly set up with matching TensorFlow Lite configurations.

Once these items are cross-verified, aligning the results should be easier. An engineer will follow up shortly with further recommendations 😊.
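For a quick sanity check of the environment and the model's input/output signatures, a minimal sketch along these lines can help (the model path and the use of TensorFlow's bundled TFLite interpreter are assumptions to adapt):

```python
# Minimal environment and model sanity check (illustrative; adjust the
# model path to your exported file).
import sys

import tensorflow as tf

print("Python:", sys.version.split()[0])   # should be >= 3.8
print("TensorFlow:", tf.__version__)

# Load the exported model with the TFLite interpreter bundled in TensorFlow.
interpreter = tf.lite.Interpreter(model_path="best-fp16.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# For a 640x640 YOLOv5s export this typically prints [1 640 640 3] float32
# for the input and [1 25200 num_classes + 5] for the output.
print("Input :", inp["shape"], inp["dtype"])
print("Output:", out["shape"], out["dtype"])
```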
@yAlqubati the difference in results between detect.py and your custom code most likely comes from preprocessing and postprocessing that don't match what detect.py does.

Suggestions:

- Match the image preprocessing: letterbox-resize to 640x640, convert BGR to RGB, normalize pixel values to 0-1, and feed a float32 NHWC tensor (see the sketch after this list).
- Check whether your exported model is quantized or FP32 and handle the input/output types accordingly.
- Match the postprocessing: apply the objectness/class confidence threshold and NMS with the same parameters as detect.py, then scale the boxes back to the original image size.
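A minimal preprocessing sketch that mirrors the detect.py pipeline for an NHWC TFLite input (function and variable names here are illustrative, not the exact detect.py code):

```python
import cv2
import numpy as np


def letterbox(img, new_size=640, color=(114, 114, 114)):
    """Resize with unchanged aspect ratio, then pad to new_size x new_size."""
    h, w = img.shape[:2]
    r = min(new_size / h, new_size / w)                      # scale ratio
    new_w, new_h = int(round(w * r)), int(round(h * r))
    resized = cv2.resize(img, (new_w, new_h), interpolation=cv2.INTER_LINEAR)
    dw, dh = (new_size - new_w) / 2, (new_size - new_h) / 2  # padding per side
    top, bottom = int(round(dh - 0.1)), int(round(dh + 0.1))
    left, right = int(round(dw - 0.1)), int(round(dw + 0.1))
    padded = cv2.copyMakeBorder(resized, top, bottom, left, right,
                                cv2.BORDER_CONSTANT, value=color)
    return padded, r, (dw, dh)


def preprocess(bgr_image, img_size=640):
    """BGR image -> float32 NHWC tensor in [0, 1], shape [1, 640, 640, 3]."""
    padded, ratio, pad = letterbox(bgr_image, img_size)
    rgb = cv2.cvtColor(padded, cv2.COLOR_BGR2RGB)    # OpenCV loads images as BGR
    tensor = rgb.astype(np.float32) / 255.0          # normalize to 0-1
    # Keep ratio and pad: they are needed later to map boxes back to the original image.
    return np.expand_dims(tensor, 0), ratio, pad
```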
For reference, you can review the YOLOv5 TFLite inference example provided in the YOLOv5 TFLite Export Guide. It includes preprocessing and postprocessing steps that align with detect.py. Let us know if you encounter further issues!
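As one further reference for the postprocessing side, here is a minimal sketch (illustrative names; it assumes the ratio and padding values produced by a letterbox step like the one above) of mapping boxes from the 640x640 letterboxed image back to the original image, which corresponds to the box rescaling detect.py performs:

```python
import numpy as np


def scale_boxes_back(boxes_xyxy, ratio, pad, original_shape):
    """Map (N, 4) xyxy boxes from letterboxed-image pixels to original-image pixels."""
    boxes = boxes_xyxy.astype(np.float32).copy()
    boxes[:, [0, 2]] -= pad[0]          # remove horizontal padding
    boxes[:, [1, 3]] -= pad[1]          # remove vertical padding
    boxes /= ratio                      # undo the resize ratio
    h, w = original_shape[:2]
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, w)   # keep boxes inside the image
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, h)
    return boxes
```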
Thanks for the suggestions! I've verified that the image preprocessing follows your steps, and the model is FP32, not quantized; would this be an issue? For postprocessing, I've ensured correct scaling and NMS parameters. Let me know if there's anything else I can try!
@yAlqubati you're on the right track, and the FP32 model should not cause issues. To inspect the output shape in your custom code, print interpreter.get_output_details()[0]["shape"] after allocate_tensors() and confirm it matches the expected [1, 25200, 7] layout before decoding.
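Building on that, below is a minimal decoding sketch for a [1, 25200, 7] output. It assumes the standard YOLOv5 head layout of [x, y, w, h, objectness, class scores] and uses OpenCV's NMS as a stand-in for the torchvision NMS used by detect.py; the thresholds mirror the detect.py defaults:

```python
import cv2
import numpy as np

CONF_THRES = 0.25   # detect.py default --conf-thres
IOU_THRES = 0.45    # detect.py default --iou-thres
IMG_SIZE = 640


def decode_tflite_output(pred, img_size=IMG_SIZE,
                         conf_thres=CONF_THRES, iou_thres=IOU_THRES):
    """pred: np.ndarray of shape (1, 25200, 5 + num_classes) from the interpreter."""
    pred = pred[0].copy()                      # drop the batch dimension -> (25200, 7)

    # Many YOLOv5 TFLite exports emit xywh normalized to 0-1; detect.py scales
    # them back to pixels, so do the same if the values look normalized.
    if pred[:, :4].max() <= 1.0:
        pred[:, :4] *= img_size

    obj_conf = pred[:, 4:5]
    cls_scores = pred[:, 5:] * obj_conf        # final score = objectness * class score
    class_ids = cls_scores.argmax(axis=1)
    confidences = cls_scores.max(axis=1)

    keep = confidences > conf_thres            # confidence threshold, as in detect.py
    boxes_xywh = pred[keep, :4]
    confidences = confidences[keep]
    class_ids = class_ids[keep]

    # Convert center-based xywh to the formats needed for NMS and drawing.
    xy, wh = boxes_xywh[:, :2], boxes_xywh[:, 2:4]
    boxes_xyxy = np.concatenate([xy - wh / 2, xy + wh / 2], axis=1)
    nms_boxes = np.concatenate([xy - wh / 2, wh], axis=1).tolist()  # [x, y, w, h] for OpenCV

    idx = cv2.dnn.NMSBoxes(nms_boxes, confidences.tolist(), conf_thres, iou_thres)
    idx = np.array(idx, dtype=int).flatten()

    return boxes_xyxy[idx], confidences[idx], class_ids[idx]
```

In a full pipeline, the boxes returned here would still need the letterbox-to-original rescaling shown earlier before drawing them on the source image.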
Question
Description:
Hi,
I've converted a YOLOv5s model to a TFLite model using the export.py script provided in YOLOv5. The output file was named best-fp16.tflite.
It works well when used with the official detect.py script. The predictions are accurate, with good bounding box alignment and confidence scores. However, when I perform inference using my custom Python code, the results are noticeably worse:
Here is the detect.py output:
Here is the custom code output:
I know the labels in the output are incorrect (e.g., "person") because I forgot to update coco.yaml, but the main issue lies in the quality of the detections.
The input shape is [1, 640, 640, 3] and the output shape is [1, 25200, 7].
Here is my custom code for detection:
How can I get the same results as detect.py? Do I need to add any preprocessing for the image or postprocessing for the output?
I want to match the detect.py results so that I can port the pipeline to Flutter and run detections on phones.
Additional
No response