
Commit 0ce009a

Merge branch 'blakeblackshear:dev' into dev

2 parents: 94d919e + 9fda259

File tree

6 files changed: +5, -306 lines

.github/DISCUSSION_TEMPLATE/detector-support.yml

-13
@@ -74,19 +74,6 @@ body:
         - CPU (no coral)
     validations:
       required: true
-  - type: dropdown
-    id: object-detector
-    attributes:
-      label: Object Detector
-      options:
-        - Coral
-        - OpenVino
-        - TensorRT
-        - RKNN
-        - Other
-        - CPU (no coral)
-    validations:
-      required: true
   - type: textarea
     id: screenshots
     attributes:

.github/DISCUSSION_TEMPLATE/general-support.yml

-13
@@ -102,19 +102,6 @@ body:
         - CPU (no coral)
     validations:
       required: true
-  - type: dropdown
-    id: object-detector
-    attributes:
-      label: Object Detector
-      options:
-        - Coral
-        - OpenVino
-        - TensorRT
-        - RKNN
-        - Other
-        - CPU (no coral)
-    validations:
-      required: true
   - type: dropdown
     id: network
     attributes:

docs/docs/configuration/semantic_search.md

+4, -6

@@ -5,7 +5,7 @@ title: Using Semantic Search
 
 Semantic Search in Frigate allows you to find tracked objects within your review items using either the image itself, a user-defined text description, or an automatically generated one. This feature works by creating _embeddings_ — numerical vector representations — for both the images and text descriptions of your tracked objects. By comparing these embeddings, Frigate assesses their similarities to deliver relevant search results.
 
-Frigate has support for two models to create embeddings, both of which run locally: [OpenAI CLIP](https://openai.com/research/clip) and [all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). Embeddings are then saved to Frigate's database.
+Frigate has support for [Jina AI's CLIP model](https://huggingface.co/jinaai/jina-clip-v1) to create embeddings, which runs locally. Embeddings are then saved to Frigate's database.
 
 Semantic Search is accessed via the _Explore_ view in the Frigate UI.
 
@@ -27,13 +27,11 @@ If you are enabling the Search feature for the first time, be advised that Friga
 
 :::
 
-### OpenAI CLIP
+### Jina AI CLIP
 
-This model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
+The vision model is able to embed both images and text into the same vector space, which allows `image -> image` and `text -> image` similarity searches. Frigate uses this model on tracked objects to encode the thumbnail image and store it in the database. When searching for tracked objects via text in the search box, Frigate will perform a `text -> image` similarity search against this embedding. When clicking "Find Similar" in the tracked object detail pane, Frigate will perform an `image -> image` similarity search to retrieve the closest matching thumbnails.
 
-### all-MiniLM-L6-v2
-
-This is a sentence embedding model that has been fine tuned on over 1 billion sentence pairs. This model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
+The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
 
 ## Usage
 
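The docs above describe similarity search in terms of comparing embeddings. At its core, a `text -> image` search is a nearest-neighbor ranking of stored thumbnail embeddings against a query embedding. A minimal sketch, assuming embeddings are plain NumPy vectors (function and variable names here are illustrative, not Frigate's actual code):

```python
# Sketch of embedding similarity search; names are illustrative only.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def rank_thumbnails(
    query_embedding: np.ndarray,
    stored_embeddings: dict[str, np.ndarray],
    top_k: int = 10,
) -> list[tuple[str, float]]:
    """Return the top_k tracked-object ids closest to the query embedding."""
    scored = [
        (object_id, cosine_similarity(query_embedding, embedding))
        for object_id, embedding in stored_embeddings.items()
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]
```

The same ranking covers `image -> image` ("Find Similar") by using a thumbnail's own embedding as the query.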

frigate/embeddings/functions/clip.py

-166
This file was deleted.

frigate/embeddings/functions/minilm_l6_v2.py

-107
This file was deleted.
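Both deleted modules are superseded by the single Jina CLIP model referenced in the docs change above, which embeds images and text into one shared vector space. A minimal sketch of producing embeddings with it via Hugging Face `transformers`, following the model card's published usage; treat the exact method names as an assumption, since they come from the model's custom code loaded with `trust_remote_code=True`:

```python
# Sketch of creating text and image embeddings with jina-clip-v1.
# encode_text/encode_image are provided by the model's remote code per its
# model card; verify against the card before relying on them.
from transformers import AutoModel

model = AutoModel.from_pretrained("jinaai/jina-clip-v1", trust_remote_code=True)

text_embeddings = model.encode_text(["a person walking a dog at night"])
image_embeddings = model.encode_image(["thumbnail.jpg"])  # path, URL, or PIL image
```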

frigate/genai/__init__.py

+1, -1

@@ -36,7 +36,7 @@ def generate_description(
         """Generate a description for the frame."""
         prompt = camera_config.genai.object_prompts.get(
             label, camera_config.genai.prompt
-        )
+        ).format(label=label)
         return self._send(prompt, thumbnails)
 
     def _init_provider(self):
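This one-line change lets GenAI prompts include a `{label}` placeholder that is substituted with the tracked object's label via `str.format`. A small, self-contained illustration (the prompt text is invented for the example):

```python
# str.format fills the {label} placeholder in a prompt template.
prompt_template = "Describe the {label} in the attached thumbnails."  # example only
label = "person"

prompt = prompt_template.format(label=label)
print(prompt)  # Describe the person in the attached thumbnails.
```

A template with no `{label}` placeholder passes through `.format(label=...)` unchanged, so existing prompts keep working.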
