Describe the bug
Using the video object tracking feature with sam-2 as described here: https://docs.humansignal.com/tutorials/segment_anything_2_video
To Reproduce
Steps to reproduce the behavior:
Pre-labelling requests to the ML backend error out with:
path/to/model.py in get_prompts():
for ctx in context['result']
TypeError: 'NoneType' object is not subscriptable
Expected behavior
Predictions get computed and appear on the interface as shown in the tutorial
Environment (please complete the following information):
OS: Label Studio on Windows 11, Label Studio ML backend on Ubuntu 22.04
Label Studio Version: cloned at commit 6da49c1
Additional context
It looks like interactive annotation is not sending any requests; the errors seem to have been caused by pre-labelling requests.
After disabling pre-labelling and following the interactive annotation process shown in the tutorial GIF:
Toggle auto-annotation in labelling interface
Select label
Draw bounding box
No requests seem to be sent to the ML backend.
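If no requests arrive at the backend at all, one quick sanity check is whether the backend is reachable from the machine running Label Studio. A minimal probe, assuming the stock label-studio-ml-backend Flask app on its default port 9090 and its /health route; the host placeholder and port are assumptions to adjust to your setup:

import requests

# <ml-backend-host> and port 9090 are assumptions: point this at wherever the
# Ubuntu machine running the ML backend is actually reachable.
resp = requests.get("http://<ml-backend-host>:9090/health", timeout=5)
print(resp.status_code, resp.text)  # a healthy backend should answer 200 with a small JSON status body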
Yes, I have a GPU set up.
I'd like to make some corrections to my first comment.
1. VOT is working for interactive annotation using the ML backend.
2. VOT is not working for pre-annotation by selecting 'retrieve predictions for all tasks' in the data manager.
An API request is sent to the ML backend in (2), but it errors out with TypeError: 'NoneType' object is not subscriptable in segment_anything_2_video/model.py, get_prompts().
I guess the API call is not receiving the context object properly.
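A quick way to confirm that suspicion is to log what the backend actually receives. A minimal sketch, assuming the usual LabelStudioMLBase predict(tasks, context, ...) entry point used by the example backends; the class name here is illustrative, not the real one in segment_anything_2_video/model.py:

import logging
from label_studio_ml.model import LabelStudioMLBase

logger = logging.getLogger(__name__)

class SAM2VideoDebug(LabelStudioMLBase):  # illustrative name for this sketch only
    def predict(self, tasks, context=None, **kwargs):
        # Interactive annotation should deliver the drawn region in context['result'];
        # "retrieve predictions for all tasks" typically sends no context at all.
        logger.info("predict called with %d task(s), context=%r", len(tasks), context)
        return []  # skip inference while inspecting the payload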
Neither LS nor the ML backend is running in Docker.
Thank you for your response!
swap-10 changed the title from "Can't run Video Object Tracking SAM-2" to "Can't run Video Object Tracking SAM-2 pre-annotation" on Jan 7, 2025.
Handle “None” Context for Pre-Labeling
Modify the parts of your SAM2 code (in the model’s “get_prompts” or “predict” method) so that it checks if “context” or “context['result']” is present before accessing them. For example:
def get_prompts(self, context):
    if not context or 'result' not in context:
        return []
    # existing code that loops over context['result']
When you run a full “retrieve predictions” job, often there’s no bounding box or keypoint input yet, so the “context” can come through as None.
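A slightly fuller sketch of how that guard might sit in the prediction path. The class name and the fallback behaviour (returning no predictions when there is no interactive input) are assumptions for illustration, not the exact upstream code:

from label_studio_ml.model import LabelStudioMLBase

class SAM2VideoModel(LabelStudioMLBase):  # illustrative stand-in for segment_anything_2_video/model.py
    def get_prompts(self, context):
        # Batch pre-annotation sends no drawn box or keypoint, so context
        # (or its 'result' key) may be missing entirely.
        if not context or 'result' not in context:
            return []
        prompts = []
        for ctx in context['result']:
            prompts.append(ctx)  # existing prompt-building logic goes here
        return prompts

    def predict(self, tasks, context=None, **kwargs):
        prompts = self.get_prompts(context)
        if not prompts:
            # Nothing to track from; return no predictions instead of
            # crashing on context['result'].
            return []
        ...  # run SAM-2 video tracking with the prompts, as before

With a guard like this, 'retrieve predictions for all tasks' should simply yield empty predictions rather than the TypeError, while the interactive path is unchanged.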