
Commit 30bac7c: Merge branch 'blakeblackshear:dev' into dev
2 parents: 1db3aa1 + fc0fb15
23 files changed: +353 -399 lines

docs/docs/configuration/object_detectors.md (+1)

@@ -457,6 +457,7 @@ model:
   width: 320 # <--- should match whatever was set in notebook
   height: 320 # <--- should match whatever was set in notebook
   input_pixel_format: bgr
+  input_tensor: nchw
   path: /config/yolo_nas_s.onnx
   labelmap_path: /labelmap/coco-80.txt
 ```

docs/docs/configuration/semantic_search.md (+21 -17)

@@ -19,7 +19,7 @@ For best performance, 16GB or more of RAM and a dedicated GPU are recommended.
 
 ## Configuration
 
-Semantic search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
+Semantic Search is disabled by default, and must be enabled in your config file before it can be used. Semantic Search is a global configuration setting.
 
 ```yaml
 semantic_search:
@@ -41,7 +41,7 @@ The vision model is able to embed both images and text into the same vector space
 
 The text model is used to embed tracked object descriptions and perform searches against them. Descriptions can be created, viewed, and modified on the Search page when clicking on the gray tracked object chip at the top left of each review item. See [the Generative AI docs](/configuration/genai.md) for more information on how to automatically generate tracked object descriptions.
 
-Differently weighted CLIP models are available and can be selected by setting the `model_size` config option:
+Differently weighted CLIP models are available and can be selected by setting the `model_size` config option as `small` or `large`:
 
 ```yaml
 semantic_search:
@@ -50,37 +50,41 @@ semantic_search:
 ```
 
 - Configuring the `large` model employs the full Jina model and will automatically run on the GPU if applicable.
-- Configuring the `small` model employs a quantized version of the model that uses much less RAM and runs faster on CPU with a very negligible difference in embedding quality.
+- Configuring the `small` model employs a quantized version of the model that uses less RAM and runs on CPU with a very negligible difference in embedding quality.
 
 ### GPU Acceleration
 
 The CLIP models are downloaded in ONNX format, and the `large` model can be accelerated using GPU hardware, when available. This depends on the Docker build that is used.
 
+```yaml
+semantic_search:
+  enabled: True
+  model_size: large
+```
+
 :::info
 
 If the correct build is used for your GPU and the `large` model is configured, then the GPU will be detected and used automatically.
 
-**AMD**
-
-- ROCm will automatically be detected and used for semantic search in the `-rocm` Frigate image.
-
-**Intel**
-
-- OpenVINO will automatically be detected and used as a detector in the default Frigate image.
-
-**Nvidia**
-
-- Nvidia GPUs will automatically be detected and used as a detector in the `-tensorrt` Frigate image.
-- Jetson devices will automatically be detected and used as a detector in the `-tensorrt-jp(4/5)` Frigate image.
+**NOTE:** Object detection and Semantic Search are independent features. If you want to use your GPU with Semantic Search, you must choose the appropriate Frigate Docker image for your GPU.
+
+- **AMD**
+  - ROCm will automatically be detected and used for Semantic Search in the `-rocm` Frigate image.
+
+- **Intel**
+  - OpenVINO will automatically be detected and used for Semantic Search in the default Frigate image.
+
+- **Nvidia**
+  - Nvidia GPUs will automatically be detected and used for Semantic Search in the `-tensorrt` Frigate image.
+  - Jetson devices will automatically be detected and used for Semantic Search in the `-tensorrt-jp(4/5)` Frigate image.
 
 :::
 
-```yaml
-semantic_search:
-  enabled: True
-  model_size: small
-```
-
 ## Usage and Best Practices
 
-1. Semantic search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and semantic search for the best results.
+1. Semantic Search is used in conjunction with the other filters available on the Search page. Use a combination of traditional filtering and Semantic Search for the best results.
 2. Use the thumbnail search type when searching for particular objects in the scene. Use the description search type when attempting to discern the intent of your object.
 3. Because of how the AI models Frigate uses have been trained, the comparison between text and image embedding distances generally means that with multi-modal (`thumbnail` and `description`) searches, results matching `description` will appear first, even if a `thumbnail` embedding may be a better match. Play with the "Search Type" setting to help find what you are looking for. Note that if you are generating descriptions for specific objects or zones only, this may cause search results to prioritize the objects with descriptions even if the ones without them are more relevant.
 4. Make your search language and tone closely match exactly what you're looking for. If you are using thumbnail search, **phrase your query as an image caption**. Searching for "red car" may not work as well as "red sedan driving down a residential street on a sunny day".
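The search behavior described above reduces to nearest-neighbor lookup in a shared embedding space: the query text is embedded and compared against stored image or description embeddings. A minimal sketch of that comparison with toy 3-dimensional vectors; real CLIP embeddings have hundreds of dimensions, and the names and values here are illustrative, not Frigate's API:

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy embeddings standing in for CLIP vectors.
query = [0.1, 0.9, 0.2]  # e.g. the embedded text "red sedan driving down a street"
thumbnails = {
    "event-red-sedan": [0.1, 0.8, 0.3],
    "event-white-truck": [0.9, 0.1, 0.1],
}

# Rank stored events by similarity to the query embedding, best match first.
ranked = sorted(thumbnails, key=lambda k: cosine(query, thumbnails[k]), reverse=True)
print(ranked[0])  # event-red-sedan
```

This is also why point 4 above matters: the closer the query's embedding sits to a stored embedding, the higher that event ranks.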

docs/docs/frigate/installation.md (+7 -7)

@@ -81,15 +81,15 @@ You can calculate the **minimum** shm size for each camera with the following formula:
 
 ```console
 # Replace <width> and <height>
-$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 10 + 270480) / 1048576))'
+$ python -c 'print("{:.2f}MB".format((<width> * <height> * 1.5 * 20 + 270480) / 1048576))'
 
-# Example for 1280x720
-$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 10 + 270480) / 1048576))'
-13.44MB
+# Example for 1280x720, including logs
+$ python -c 'print("{:.2f}MB".format((1280 * 720 * 1.5 * 20 + 270480) / 1048576 + 40))'
+66.63MB
 
 # Example for eight cameras detecting at 1280x720, including logs
-$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 10 + 270480) / 1048576) * 8 + 40))'
-136.99MB
+$ python -c 'print("{:.2f}MB".format(((1280 * 720 * 1.5 * 20 + 270480) / 1048576) * 8 + 40))'
+253.00MB
 ```
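The one-liner above can be wrapped in a small helper for experimentation. This is a sketch mirroring the docs' arithmetic (20 YUV420 frames at 1.5 bytes per pixel plus a fixed ~270 KB overhead, in MiB); `min_shm_mb` is a hypothetical name, not a Frigate function:

```python
def min_shm_mb(width: int, height: int, frames: int = 20) -> float:
    """Approximate minimum shm for one camera, in MB: <frames> YUV420
    frames (1.5 bytes per pixel) plus a fixed ~270KB overhead."""
    return (width * height * 1.5 * frames + 270480) / 1048576

# One 1280x720 camera:
print(f"{min_shm_mb(1280, 720):.2f}MB")           # 26.63MB
# Eight such cameras plus ~40MB for logs:
print(f"{min_shm_mb(1280, 720) * 8 + 40:.2f}MB")  # 253.00MB
```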
9494

9595
The shm size cannot be set per container for Home Assistant add-ons. However, this is probably not required since by default Home Assistant Supervisor allocates `/dev/shm` with half the size of your total memory. If your machine has 8GB of memory, chances are that Frigate will have access to up to 4GB without any additional configuration.
@@ -194,7 +194,7 @@ services:
194194
privileged: true # this may not be necessary for all setups
195195
restart: unless-stopped
196196
image: ghcr.io/blakeblackshear/frigate:stable
197-
shm_size: "64mb" # update for your cameras based on calculation above
197+
shm_size: "512mb" # update for your cameras based on calculation above
198198
devices:
199199
- /dev/bus/usb:/dev/bus/usb # Passes the USB Coral, needs to be modified for other versions
200200
- /dev/apex_0:/dev/apex_0 # Passes a PCIe Coral, follow driver instructions here https://coral.ai/docs/m2/get-started/#2a-on-linux

frigate/api/event.py (-3)

@@ -1063,9 +1063,6 @@ def delete_event(request: Request, event_id: str):
         media.unlink(missing_ok=True)
         media = Path(f"{os.path.join(CLIPS_DIR, media_name)}-clean.png")
         media.unlink(missing_ok=True)
-        if event.has_clip:
-            media = Path(f"{os.path.join(CLIPS_DIR, media_name)}.mp4")
-            media.unlink(missing_ok=True)
 
     event.delete_instance()
     Timeline.delete().where(Timeline.source_id == event_id).execute()
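The cleanup in this handler relies on `Path.unlink(missing_ok=True)`, which deletes a file if it exists and silently does nothing otherwise, so no existence checks are needed. A standalone sketch of the pattern; the directory and media name here are temporary stand-ins, not Frigate's `CLIPS_DIR` layout:

```python
import os
import tempfile
from pathlib import Path

clips_dir = Path(tempfile.mkdtemp())   # stand-in for CLIPS_DIR
media_name = "example-event"           # stand-in media name

# Only the snapshot actually exists in this example.
(clips_dir / f"{media_name}.png").write_bytes(b"fake png")

# Delete every media variant; absent files are ignored, present ones removed.
for suffix in (".png", "-clean.png", ".mp4"):
    Path(os.path.join(clips_dir, media_name + suffix)).unlink(missing_ok=True)

print(list(clips_dir.iterdir()))  # []
```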

frigate/app.py (+2 -2)

@@ -521,9 +521,9 @@ def shm_frame_count(self) -> int:
             f"Calculated total camera size {available_shm} / {cam_total_frame_size} :: {shm_frame_count} frames for each camera in SHM"
         )
 
-        if shm_frame_count < 10:
+        if shm_frame_count < 20:
            logger.warning(
-                f"The current SHM size of {total_shm}MB is too small, recommend increasing it to at least {round(min_req_shm + cam_total_frame_size * 10)}MB."
+                f"The current SHM size of {total_shm}MB is too small, recommend increasing it to at least {round(min_req_shm + cam_total_frame_size * 20)}MB."
             )
 
         return shm_frame_count
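The change above raises the per-camera frame budget from 10 to 20, matching the doubled docs formula. A simplified reconstruction of the calculation being patched; the real method derives `min_req_shm` and `cam_total_frame_size` from the camera config, so treat this as a sketch, not Frigate's implementation:

```python
def shm_frame_count(total_shm: float, min_req_shm: float, cam_total_frame_size: float) -> int:
    """How many frames per camera fit in shm after the base reservation (all MB)."""
    available_shm = total_shm - min_req_shm
    count = int(available_shm / cam_total_frame_size)
    if count < 20:
        print(
            f"The current SHM size of {total_shm}MB is too small, recommend "
            f"increasing it to at least {round(min_req_shm + cam_total_frame_size * 20)}MB."
        )
    return count

# e.g. 64MB shm, 30MB base reservation, 2.64MB of frame data per frame slot:
print(shm_frame_count(64, 30, 2.64))  # warns, then 12
```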

frigate/util/model.py (+22 -11)

@@ -1,5 +1,6 @@
 """Model Utils"""
 
+import logging
 import os
 from typing import Any
 
@@ -11,6 +12,8 @@
     # openvino is not included
     pass
 
+logger = logging.getLogger(__name__)
+
 
 def get_ort_providers(
     force_cpu: bool = False, device: str = "AUTO", requires_fp16: bool = False
@@ -89,19 +92,27 @@ def __init__(self, model_path: str, device: str, requires_fp16: bool = False):
         self.ort: ort.InferenceSession = None
         self.ov: ov.Core = None
         providers, options = get_ort_providers(device == "CPU", device, requires_fp16)
+        self.interpreter = None
 
         if "OpenVINOExecutionProvider" in providers:
-            # use OpenVINO directly
-            self.type = "ov"
-            self.ov = ov.Core()
-            self.ov.set_property(
-                {ov.properties.cache_dir: "/config/model_cache/openvino"}
-            )
-            self.interpreter = self.ov.compile_model(
-                model=model_path, device_name=device
-            )
-        else:
-            # Use ONNXRuntime
+            try:
+                # use OpenVINO directly
+                self.type = "ov"
+                self.ov = ov.Core()
+                self.ov.set_property(
+                    {ov.properties.cache_dir: "/config/model_cache/openvino"}
+                )
+                self.interpreter = self.ov.compile_model(
+                    model=model_path, device_name=device
+                )
+            except Exception as e:
+                logger.warning(
+                    f"OpenVINO failed to build model, using CPU instead: {e}"
+                )
+                self.interpreter = None
+
+        # Use ONNXRuntime
+        if self.interpreter is None:
             self.type = "ort"
             self.ort = ort.InferenceSession(
                 model_path,
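The change above replaces a hard `if/else` with a try/except: the OpenVINO compile is attempted first, and any failure logs a warning and leaves `self.interpreter` as `None`, which routes the code into the ONNX Runtime branch. The control-flow pattern in isolation; the backend callables here are placeholders, not the real `ov`/`ort` objects:

```python
import logging

logger = logging.getLogger(__name__)

def build_with_fallback(build_accelerated, build_cpu):
    """Try the accelerated backend; on any failure, fall back to CPU."""
    interpreter = None
    try:
        interpreter = build_accelerated()
    except Exception as e:
        logger.warning(f"Accelerated backend failed to build model, using CPU instead: {e}")

    # Fallback branch: taken whenever the accelerated build did not succeed.
    if interpreter is None:
        interpreter = build_cpu()
    return interpreter

def failing_backend():
    raise RuntimeError("no GPU available")

print(build_with_fallback(failing_backend, lambda: "cpu-session"))  # cpu-session
```

The same shape generalizes to any "prefer hardware acceleration, degrade gracefully" initialization.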
027_create_explore_index.py (new file, +36)

@@ -0,0 +1,36 @@
+"""Peewee migrations -- 027_create_explore_index.py.
+
+Some examples (model - class or model name)::
+
+    > Model = migrator.orm['model_name']            # Return model in current state by name
+
+    > migrator.sql(sql)                             # Run custom SQL
+    > migrator.python(func, *args, **kwargs)        # Run python code
+    > migrator.create_model(Model)                  # Create a model (could be used as decorator)
+    > migrator.remove_model(model, cascade=True)    # Remove a model
+    > migrator.add_fields(model, **fields)          # Add fields to a model
+    > migrator.change_fields(model, **fields)       # Change fields
+    > migrator.remove_fields(model, *field_names, cascade=True)
+    > migrator.rename_field(model, old_field_name, new_field_name)
+    > migrator.rename_table(model, new_table_name)
+    > migrator.add_index(model, *col_names, unique=False)
+    > migrator.drop_index(model, *col_names)
+    > migrator.add_not_null(model, *field_names)
+    > migrator.drop_not_null(model, *field_names)
+    > migrator.add_default(model, field_name, default)
+
+"""
+
+import peewee as pw
+
+SQL = pw.SQL
+
+
+def migrate(migrator, database, fake=False, **kwargs):
+    migrator.sql(
+        'CREATE INDEX IF NOT EXISTS "event_label_start_time" ON "event" ("label", "start_time" DESC)'
+    )
+
+
+def rollback(migrator, database, fake=False, **kwargs):
+    migrator.sql('DROP INDEX IF EXISTS "event_label_start_time"')
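The index this migration creates can be exercised directly with sqlite3 to confirm the planner uses it for label-filtered, newest-first queries. A throwaway in-memory schema, not Frigate's full `event` table:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute('CREATE TABLE "event" ("id" TEXT, "label" TEXT, "start_time" REAL)')
# Same statement the migration runs via migrator.sql(...).
db.execute(
    'CREATE INDEX IF NOT EXISTS "event_label_start_time" '
    'ON "event" ("label", "start_time" DESC)'
)

# The Explore-style query this index serves: filter by label, order by time descending.
plan = db.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM event WHERE label = ? ORDER BY start_time DESC",
    ("person",),
).fetchall()
print(plan)
```

Because the index leads with `label` and stores `start_time` descending, the query above needs neither a full scan nor a separate sort step.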
