
How to Integrate Yolo-OBB model into this application? #44

muhammad-qasim-cowlar opened this issue Jul 24, 2024 · 1 comment

@muhammad-qasim-cowlar
Currently the application uses the x, y, width, and height detected by the YOLO model to draw bounding boxes. What would I need to do to integrate YOLOv8-OBB into the existing application? The OBB model outputs rotated polygons rather than just axis-aligned rectangles, which would be helpful for my use case.

@pderrenger
Member

@muhammad-qasim-cowlar hello!

Integrating the YOLOv8-OBB model into your application to leverage oriented bounding boxes (OBBs) is a great idea, especially if your use case benefits from more precise object localization. Here are the steps you can follow to make this integration:

  1. Update to the Latest Version: Ensure you are using the latest version of the Ultralytics YOLO package to access the most recent features and bug fixes.

  2. Train or Load a YOLOv8-OBB Model: If you haven't already, you can train a YOLOv8-OBB model or load a pre-trained one. Here's a quick example of how to train a model:

    from ultralytics import YOLO
    
    # Create a new YOLOv8n-OBB model from scratch
    model = YOLO("yolov8n-obb.yaml")
    
    # Train the model on your dataset
    results = model.train(data="your_dataset.yaml", epochs=100, imgsz=640)
  3. Modify Your Application to Handle OBBs: Since OBBs are represented by four corner points, you will need to adjust your application to handle these points instead of the traditional x, y, width, height format. The YOLO OBB format provides bounding boxes as x1, y1, x2, y2, x3, y3, x4, y4, with coordinates normalized to the range 0–1.

    Here’s an example of how you might modify your drawing function to handle OBBs:

    import cv2
    import numpy as np

    def draw_obb(image, obb):
        # obb is a list of 8 normalized values: [x1, y1, x2, y2, x3, y3, x4, y4]
        # Scale the normalized coordinates to pixel space before drawing
        h, w = image.shape[:2]
        points = [(obb[i] * w, obb[i + 1] * h) for i in range(0, len(obb), 2)]
        points = np.array(points, dtype=np.int32)
        cv2.polylines(image, [points], isClosed=True, color=(0, 255, 0), thickness=2)

    # Example usage
    image = cv2.imread("path_to_image.jpg")
    obb = [0.780811, 0.743961, 0.782371, 0.74686, 0.777691, 0.752174, 0.776131, 0.749758]
    draw_obb(image, obb)
    cv2.imshow("OBB", image)
    cv2.waitKey(0)
  4. Adjust Post-Processing: Ensure your post-processing pipeline can handle the OBB format. This might involve updating any code that processes detection outputs to work with the four corner points.

  5. Testing and Validation: Thoroughly test the integration to ensure that the OBBs are being drawn correctly and that the application behaves as expected with the new bounding box format.
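As a complement to step 3: OBB results can also be expressed in an xywhr (center x, center y, width, height, rotation) form, and converting that to the four corner points is just a small rotation around the box center. Here is a minimal sketch — the function name and the radians-angle convention are illustrative assumptions, not a fixed API:

```python
import math

def xywhr_to_corners(cx, cy, w, h, angle):
    """Convert a center/size/rotation box to its four corner points.

    angle is assumed to be in radians, measured counter-clockwise.
    """
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    corners = []
    for dx, dy in [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]:
        # Rotate each half-extent offset around the center, then translate
        corners.append((cx + dx * cos_a - dy * sin_a, cy + dx * sin_a + dy * cos_a))
    return corners

# With zero rotation this degenerates to the usual axis-aligned corners
print(xywhr_to_corners(10, 10, 4, 2, 0.0))
# → [(8.0, 9.0), (12.0, 9.0), (12.0, 11.0), (8.0, 11.0)]
```

The corner list this produces can be fed straight into the `draw_obb`-style polygon drawing from step 3.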
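For step 4, one practical bridge during migration: if parts of the pipeline still expect the old x, y, width, height rectangles, you can collapse each OBB into the axis-aligned box that encloses it. A small sketch, assuming normalized corner coordinates as in step 3 (the helper name is hypothetical):

```python
def obb_to_aabb(obb, img_w, img_h):
    """Collapse a normalized 8-value OBB into a pixel-space axis-aligned box.

    obb is [x1, y1, x2, y2, x3, y3, x4, y4] with values in [0, 1].
    Returns (x, y, width, height) so downstream code that expects the
    old rectangle format keeps working.
    """
    xs = [obb[i] * img_w for i in range(0, 8, 2)]
    ys = [obb[i] * img_h for i in range(1, 8, 2)]
    x_min, y_min = min(xs), min(ys)
    return x_min, y_min, max(xs) - x_min, max(ys) - y_min

# A square OBB spanning 10–30% of a 100x100 image in x and 20–40% in y
print(obb_to_aabb([0.1, 0.2, 0.3, 0.2, 0.3, 0.4, 0.1, 0.4], 100, 100))
# → (10.0, 20.0, 20.0, 20.0)
```

Note that the enclosing box is looser than the OBB for rotated objects, so use it only where the legacy rectangle format is strictly required.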

By following these steps, you should be able to integrate the YOLOv8-OBB model into your application successfully. If you encounter any issues or have further questions, feel free to ask!

Best of luck with your integration! 😊


For more detailed information on OBB datasets and training, you can refer to the Ultralytics documentation.
