Two computer vision tasks combined #6788
Hi, there!
Can I combine the two tasks, keypoint labeling and object-detection bounding box labeling, into one template?
Best wishes!
Yes. You can definitely do that.
Hello Jason, Do you mean something like this:
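A minimal sketch of a combined config (the control names and label values below are placeholders, not requirements):

```xml
<View>
  <!-- $image is a placeholder for your task data key -->
  <Image name="image" value="$image"/>
  <!-- bounding boxes for object detection -->
  <RectangleLabels name="label" toName="image">
    <Label value="Person"/>
    <Label value="Car"/>
  </RectangleLabels>
  <!-- keypoints drawn on the same image -->
  <KeyPointLabels name="kp" toName="image">
    <Label value="Nose"/>
    <Label value="Eye"/>
  </KeyPointLabels>
</View>
```

The key point is that both control tags target the same image via `toName`, so boxes and keypoints are drawn on the same picture.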
Thank you,
Yes, exactly! Thanks a lot!
Hello Jason! Thanks for the follow-up. Here is a more detailed explanation of how to include both bounding boxes and keypoints within a single labeling configuration.
• `<RectangleLabels>` is for bounding boxes, so you can annotate objects.
• `<KeyPointLabels>` allows you to mark specific keypoints (like body joints or reference points within your objects).

Make sure the name and toName attributes match up correctly in your labeling config, as in the example above. Let us know if you have any other questions!
Hi there! Looking forward to your help! Thanks!
Hello Jason, Based on the feedback you provided, here are a few points and additional insights that may help resolve the issue when combining ML-predicted bounding boxes with keypoint annotations:
Make sure that both your `<RectangleLabels>` and `<KeyPointLabels>` tags share the same toName attribute (e.g., "image"). This is essential so that Label Studio links annotations to the same image and can group them correctly.

When using an ML backend that provides both bounding box and keypoint predictions, keypoints are typically grouped with their corresponding object if the predictions include a shared region ID.

If you integrate model predictions, ensure that your `<KeyPointLabels>` element includes the parameter model_add_bboxes="true". This tells Label Studio to include (or group) the bounding boxes from keypoint detections together with the keypoint results. For example:
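Something like this minimal sketch, where the control name and label values are placeholders:

```xml
<!-- model_add_bboxes asks the integration to group predicted boxes with keypoints;
     name="kp" and the label values here are placeholders -->
<KeyPointLabels name="kp" toName="image" model_add_bboxes="true">
  <Label value="Nose"/>
  <Label value="Eye"/>
</KeyPointLabels>
```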
It’s important to note that in Label Studio, keypoint annotations are not “nested” inside the drawn bounding box in the UI; they overlay the entire image. Their placement depends on the coordinates returned by the model or the ones you manually add.

If you’re seeing keypoints only outside the bounding box, it might be due to how the model predictions are being merged or how the region positions are calculated. This behavior can sometimes be influenced by extensive ML pre-annotations. In such cases, manually reviewing and adjusting the positions using the region editor can help.

Since you mentioned that model predictions are integrated into your task, it’s possible that the pre-annotation results are affecting the interactive behavior. When predictions are present, Label Studio groups results based on their region IDs. If the keypoints are coming in as separate predictions, they might not be automatically "attached" or overlaid onto the corresponding bounding box region.

Double-check that your ML backend’s output JSON correctly associates keypoints with the same region (i.e., the same annotation ID) as the bounding box. If not, you may consider re-running the ML prediction with settings adjusted for grouping. I hope these insights help clarify the behavior and assist you in refining your project setup. Please let me know if you have further questions or need additional assistance!
Hello! Thanks for your help! I'd like to clarify that in this case, I only integrated bounding box predictions into the task, while keypoints were manually added. I have studied and tried your suggestions; here are my corresponding experiment results:
1. I double-checked the toName attribute; it is the same for both the object detection and keypoint detection tasks. Shown below:
2. I tried the code you provided above, but it didn't work; I still couldn't add keypoints inside the predicted bounding boxes.
3. Since I just want to add keypoints manually, this would likely not be a problem.
4. I thought this could be the reason, but I haven't figured out the concept of "group". I also tried putting object detection and keypoints together, all labeled by hand, and the two worked fine. So there must be something I forgot to configure in the ML backend, probably the "group" like you said.
Hello Jason, Thank you for the extra details and input; that is greatly appreciated! I understand that you’re manually adding keypoints while using ML-predicted bounding boxes, but the keypoints aren’t “grouped” with their corresponding boxes as you’d expect.
In Label Studio, “grouping” of annotations (linking keypoints to a bounding box) is typically achieved when both the keypoints and bounding boxes are generated together by the ML backend. In that case, the backend output includes a shared identifier (often part of the prediction’s metadata) that tells Label Studio which keypoints belong to which bounding box.

When you add keypoints manually, they are created as separate annotations and won’t automatically attach to the pre-annotated bounding boxes unless they share a common “group” (or region ID).

Your provided ML prediction JSON only includes bounding boxes, without any grouping field (no key like “group” or an annotation ID linking to keypoint predictions). This means that even if you enable model_add_bboxes="true" on your `<KeyPointLabels>` tag, it won’t “attach” manually added keypoints to the predicted boxes, because the ML backend isn’t returning both types of predictions together.
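For reference, a backend that does return both types together produces a prediction payload roughly like this sketch. The control names "label" and "kp" follow the example config earlier in the thread, and the coordinates (percentages of the image size) are made up; any extra grouping metadata varies by backend:

```json
{
  "model_version": "pose-demo",
  "result": [
    {
      "id": "region-1",
      "from_name": "label",
      "to_name": "image",
      "type": "rectanglelabels",
      "value": { "x": 10, "y": 20, "width": 30, "height": 40, "rectanglelabels": ["Person"] }
    },
    {
      "id": "region-2",
      "from_name": "kp",
      "to_name": "image",
      "type": "keypointlabels",
      "value": { "x": 25, "y": 35, "width": 0.5, "keypointlabels": ["Nose"] }
    }
  ]
}
```

Because your predictions contain only the first kind of result, there is nothing for manually added keypoints to group with.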
Thanks for your suggestions. I think I should give up attempting to integrate ML-backend detection with manually labeled keypoints; it will be easier to separate them. Thanks a lot for everything! Closing this issue now.