Commit a2659f2

Merge branch 'blakeblackshear:dev' into dev

2 parents: 30bac7c + 3249ffb

17 files changed, +220 -140 lines

docker-compose.yml (+1 -1)

@@ -23,7 +23,7 @@ services:
     #           count: 1
     #           capabilities: [gpu]
     environment:
-      YOLO_MODELS: yolov7-320
+      YOLO_MODELS: ""
     devices:
      - /dev/bus/usb:/dev/bus/usb
      # - /dev/dri:/dev/dri # for intel hwaccel, needs to be updated for your hardware

docker/tensorrt/Dockerfile.base (+1 -1)

@@ -25,7 +25,7 @@ ENV S6_CMD_WAIT_FOR_SERVICES_MAXTIME=0
 COPY --from=trt-deps /usr/local/lib/libyolo_layer.so /usr/local/lib/libyolo_layer.so
 COPY --from=trt-deps /usr/local/src/tensorrt_demos /usr/local/src/tensorrt_demos
 COPY docker/tensorrt/detector/rootfs/ /
-ENV YOLO_MODELS="yolov7-320"
+ENV YOLO_MODELS=""

 HEALTHCHECK --start-period=600s --start-interval=5s --interval=15s --timeout=5s --retries=3 \
     CMD curl --fail --silent --show-error http://127.0.0.1:5000/api/version || exit 1

docker/tensorrt/detector/rootfs/etc/s6-overlay/s6-rc.d/trt-model-prepare/run (+5)

@@ -19,6 +19,11 @@ FIRST_MODEL=true
 MODEL_DOWNLOAD=""
 MODEL_CONVERT=""

+if [ -z "$YOLO_MODELS" ]; then
+    echo "tensorrt model preparation disabled"
+    exit 0
+fi
+
 for model in ${YOLO_MODELS//,/ }
 do
     # Remove old link in case path/version changed
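The guard added to the run script means an empty `YOLO_MODELS` skips TensorRT model preparation entirely; otherwise the comma-separated list is split and each model is prepared. A rough Python sketch of the same guard-and-split behaviour (the `models_to_prepare` helper is illustrative, not part of Frigate):

```python
import os


def models_to_prepare() -> list[str]:
    # Mirrors the run script: an empty YOLO_MODELS disables model preparation.
    yolo_models = os.environ.get("YOLO_MODELS", "")
    if not yolo_models:
        print("tensorrt model preparation disabled")
        return []
    # The script iterates ${YOLO_MODELS//,/ }, i.e. a comma-separated list.
    return [m.strip() for m in yolo_models.split(",") if m.strip()]


# e.g. YOLO_MODELS="yolov7-320,yolov7x-640" -> ["yolov7-320", "yolov7x-640"]
```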

docs/docs/configuration/genai.md (+6 -2)

@@ -3,9 +3,13 @@ id: genai
 title: Generative AI
 ---

-Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects.
+Generative AI can be used to automatically generate descriptive text based on the thumbnails of your tracked objects. This helps with [Semantic Search](/configuration/semantic_search) in Frigate to provide more context about your tracked objects. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.

-Semantic Search must be enabled to use Generative AI. Descriptions are accessed via the _Explore_ view in the Frigate UI by clicking on a tracked object's thumbnail.
+:::info
+
+Semantic Search must be enabled to use Generative AI.
+
+:::

 ## Configuration

docs/docs/configuration/object_detectors.md (+2 -2)

@@ -223,7 +223,7 @@ The model used for TensorRT must be preprocessed on the same hardware platform t

 The Frigate image will generate model files during startup if the specified model is not found. Processed models are stored in the `/config/model_cache` folder. Typically the `/config` path is mapped to a directory on the host already and the `model_cache` does not need to be mapped separately unless the user wants to store it in a different location on the host.

-By default, the `yolov7-320` model will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. To select no model generation, set the variable to an empty string, `YOLO_MODELS=""`. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.
+By default, no models will be generated, but this can be overridden by specifying the `YOLO_MODELS` environment variable in Docker. One or more models may be listed in a comma-separated format, and each one will be generated. Models will only be generated if the corresponding `{model}.trt` file is not present in the `model_cache` folder, so you can force a model to be regenerated by deleting it from your Frigate data folder.

 If you have a Jetson device with DLAs (Xavier or Orin), you can generate a model that will run on the DLA by appending `-dla` to your model name, e.g. specify `YOLO_MODELS=yolov7-320-dla`. The model will run on DLA0 (Frigate does not currently support DLA1). DLA-incompatible layers will fall back to running on the GPU.

@@ -264,7 +264,7 @@ An example `docker-compose.yml` fragment that converts the `yolov4-608` and `yol
 ```yml
 frigate:
   environment:
-    - YOLO_MODELS=yolov4-608,yolov7x-640
+    - YOLO_MODELS=yolov7-320,yolov7x-640
     - USE_FP16=false
 ```

frigate/api/event.py (+3 -1)

@@ -1017,9 +1017,11 @@ def regenerate_description(
             status_code=404,
         )

+    camera_config = request.app.frigate_config.cameras[event.camera]
+
     if (
         request.app.frigate_config.semantic_search.enabled
-        and request.app.frigate_config.genai.enabled
+        and camera_config.genai.enabled
     ):
         request.app.event_metadata_updater.publish((event.id, params.source))

frigate/events/cleanup.py (+36 -6)

@@ -21,6 +21,9 @@ class EventCleanupType(str, Enum):
     snapshots = "snapshots"


+CHUNK_SIZE = 50
+
+
 class EventCleanup(threading.Thread):
     def __init__(
         self, config: FrigateConfig, stop_event: MpEvent, db: SqliteVecQueueDatabase
@@ -107,6 +110,7 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
                 .namedtuples()
                 .iterator()
             )
+            logger.debug(f"{len(expired_events)} events can be expired")
             # delete the media from disk
             for expired in expired_events:
                 media_name = f"{expired.camera}-{expired.id}"
@@ -125,13 +129,34 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
                     logger.warning(f"Unable to delete event images: {e}")

             # update the clips attribute for the db entry
-            update_query = Event.update(update_params).where(
+            query = Event.select(Event.id).where(
                 Event.camera.not_in(self.camera_keys),
                 Event.start_time < expire_after,
                 Event.label == event.label,
                 Event.retain_indefinitely == False,
             )
-            update_query.execute()
+
+            events_to_update = []
+
+            for batch in query.iterator():
+                events_to_update.extend([event.id for event in batch])
+                if len(events_to_update) >= CHUNK_SIZE:
+                    logger.debug(
+                        f"Updating {update_params} for {len(events_to_update)} events"
+                    )
+                    Event.update(update_params).where(
+                        Event.id << events_to_update
+                    ).execute()
+                    events_to_update = []
+
+            # Update any remaining events
+            if events_to_update:
+                logger.debug(
+                    f"Updating clips/snapshots attribute for {len(events_to_update)} events"
+                )
+                Event.update(update_params).where(
+                    Event.id << events_to_update
+                ).execute()

         events_to_update = []

@@ -196,7 +221,11 @@ def expire(self, media_type: EventCleanupType) -> list[str]:
                 logger.warning(f"Unable to delete event images: {e}")

         # update the clips attribute for the db entry
-        Event.update(update_params).where(Event.id << events_to_update).execute()
+        for i in range(0, len(events_to_update), CHUNK_SIZE):
+            batch = events_to_update[i : i + CHUNK_SIZE]
+            logger.debug(f"Updating {update_params} for {len(batch)} events")
+            Event.update(update_params).where(Event.id << batch).execute()
+
         return events_to_update

     def run(self) -> None:
@@ -222,10 +251,11 @@ def run(self) -> None:
             .iterator()
         )
         events_to_delete = [e.id for e in events]
+        logger.debug(f"Found {len(events_to_delete)} events that can be expired")
         if len(events_to_delete) > 0:
-            chunk_size = 50
-            for i in range(0, len(events_to_delete), chunk_size):
-                chunk = events_to_delete[i : i + chunk_size]
+            for i in range(0, len(events_to_delete), CHUNK_SIZE):
+                chunk = events_to_delete[i : i + CHUNK_SIZE]
+                logger.debug(f"Deleting {len(chunk)} events from the database")
                 Event.delete().where(Event.id << chunk).execute()

         if self.config.semantic_search.enabled:
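These cleanup changes batch the clips/snapshots attribute updates into groups of at most 50 ids, and the existing chunked delete path now shares the same `CHUNK_SIZE` constant. A simplified sketch of the chunking pattern, assuming a peewee `Model` class with an `id` field (the helper itself is illustrative, not Frigate code):

```python
CHUNK_SIZE = 50  # same batch size EventCleanup uses


def update_in_chunks(model, update_params: dict, ids: list[str]) -> None:
    # Apply the update to at most CHUNK_SIZE rows per statement; `<<` is
    # peewee's IN operator, so each query touches only one batch of ids.
    for i in range(0, len(ids), CHUNK_SIZE):
        batch = ids[i : i + CHUNK_SIZE]
        model.update(update_params).where(model.id << batch).execute()
```

Keeping every statement to a bounded id list avoids one long-running SQLite query while large numbers of events are being expired.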

frigate/genai/__init__.py (+4 -5)

@@ -54,11 +54,10 @@ def _send(self, prompt: str, images: list[bytes]) -> Optional[str]:

 def get_genai_client(genai_config: GenAIConfig) -> Optional[GenAIClient]:
     """Get the GenAI client."""
-    if genai_config.enabled:
-        load_providers()
-        provider = PROVIDERS.get(genai_config.provider)
-        if provider:
-            return provider(genai_config)
+    load_providers()
+    provider = PROVIDERS.get(genai_config.provider)
+    if provider:
+        return provider(genai_config)
     return None
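`get_genai_client` no longer checks a global `enabled` flag; it simply loads the providers and looks the configured one up in the registry, leaving enablement decisions to the call sites (as in the `event.py` change above). A toy sketch of that registry-lookup pattern, with hypothetical names standing in for Frigate's `PROVIDERS` mapping and provider classes:

```python
from typing import Callable, Optional

# A provider registry: names map to client classes registered via a decorator.
PROVIDERS: dict[str, Callable] = {}


def register_provider(name: str):
    def wrap(cls):
        PROVIDERS[name] = cls
        return cls

    return wrap


@register_provider("demo")
class DemoClient:
    def __init__(self, config) -> None:
        self.config = config


def get_client(config) -> Optional[object]:
    # Look the provider up by name and instantiate it, or return None.
    provider = PROVIDERS.get(getattr(config, "provider", None))
    return provider(config) if provider else None
```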

frigate/output/output.py (+10)

@@ -63,6 +63,7 @@ def receiveSignal(signalNumber, frame):
     birdseye: Optional[Birdseye] = None
     preview_recorders: dict[str, PreviewRecorder] = {}
     preview_write_times: dict[str, float] = {}
+    failed_frame_requests: dict[str, int] = {}

     move_preview_frames("cache")
@@ -99,7 +100,16 @@ def receiveSignal(signalNumber, frame):

             if frame is None:
                 logger.debug(f"Failed to get frame {frame_id} from SHM")
+                failed_frame_requests[camera] = failed_frame_requests.get(camera, 0) + 1
+
+                if failed_frame_requests[camera] > config.cameras[camera].detect.fps:
+                    logger.warning(
+                        f"Failed to retrieve many frames for {camera} from SHM, consider increasing SHM size if this continues."
+                    )
+
                 continue
+            else:
+                failed_frame_requests[camera] = 0

             # send camera frame to ffmpeg process if websockets are connected
             if any(
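The output process now counts consecutive shared-memory read failures per camera and logs a warning once the count passes that camera's detect fps (roughly one second of missed frames), resetting the counter on any successful read. A standalone sketch of the same bookkeeping (the function name is illustrative):

```python
import logging

logger = logging.getLogger(__name__)

failed_frame_requests: dict[str, int] = {}


def track_frame_result(camera: str, frame, detect_fps: float) -> None:
    if frame is None:
        # Count the miss and warn once failures exceed one second of frames.
        failed_frame_requests[camera] = failed_frame_requests.get(camera, 0) + 1
        if failed_frame_requests[camera] > detect_fps:
            logger.warning(
                f"Failed to retrieve many frames for {camera} from SHM, "
                "consider increasing SHM size if this continues."
            )
    else:
        # Any successful read resets the per-camera counter.
        failed_frame_requests[camera] = 0
```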

frigate/output/preview.py (+8 -3)

@@ -78,7 +78,7 @@ def __init__(
         # write a PREVIEW at fps and 1 key frame per clip
         self.ffmpeg_cmd = parse_preset_hardware_acceleration_encode(
             config.ffmpeg.ffmpeg_path,
-            config.ffmpeg.hwaccel_args,
+            "default",
             input="-f concat -y -protocol_whitelist pipe,file -safe 0 -threads 1 -i /dev/stdin",
             output=f"-threads 1 -g {PREVIEW_KEYFRAME_INTERVAL} -bf 0 -b:v {PREVIEW_QUALITY_BIT_RATES[self.config.record.preview.quality]} {FPS_VFR_PARAM} -movflags +faststart -pix_fmt yuv420p {self.path}",
             type=EncodeTypeEnum.preview,
@@ -154,6 +154,7 @@ def __init__(self, config: CameraConfig) -> None:
         self.start_time = 0
         self.last_output_time = 0
         self.output_frames = []
+
         if config.detect.width > config.detect.height:
             self.out_height = PREVIEW_HEIGHT
             self.out_width = (
@@ -274,7 +275,7 @@ def should_write_frame(

         return False

-    def write_frame_to_cache(self, frame_time: float, frame) -> None:
+    def write_frame_to_cache(self, frame_time: float, frame: np.ndarray) -> None:
         # resize yuv frame
         small_frame = np.zeros((self.out_height * 3 // 2, self.out_width), np.uint8)
         copy_yuv_to_position(
@@ -303,7 +304,7 @@ def write_data(
         current_tracked_objects: list[dict[str, any]],
         motion_boxes: list[list[int]],
         frame_time: float,
-        frame,
+        frame: np.ndarray,
     ) -> bool:
         # check for updated record config
         _, updated_record_config = self.config_subscriber.check_for_update()
@@ -332,6 +333,10 @@ def write_data(
                     self.output_frames,
                     self.requestor,
                 ).start()
+            else:
+                logger.debug(
+                    f"Not saving preview for {self.config.name} because there are no saved frames."
+                )

             # reset frame cache
             self.segment_end = (
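The new `np.ndarray` annotations reflect that preview frames are raw YUV420 buffers: a full-resolution Y plane stacked above quarter-resolution U and V planes, so the array is `height * 3 // 2` rows tall, which is exactly the shape `write_frame_to_cache` allocates. A small sketch of that sizing (the helper name is illustrative):

```python
import numpy as np


def allocate_yuv420(width: int, height: int) -> np.ndarray:
    # Y plane (height rows) plus U and V planes packed into height // 2 more rows.
    return np.zeros((height * 3 // 2, width), np.uint8)


frame = allocate_yuv420(320, 180)
assert frame.shape == (270, 320)  # 180 * 3 // 2 == 270
```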

web/src/api/ws.tsx (+7 -1)

@@ -69,7 +69,10 @@ function useValue(): useValueReturn {
       ...prevState,
       ...cameraStates,
     }));
-    setHasCameraState(true);
+
+    if (Object.keys(cameraStates).length > 0) {
+      setHasCameraState(true);
+    }
     // we only want this to run initially when the config is loaded
     // eslint-disable-next-line react-hooks/exhaustive-deps
   }, [wsState]);
@@ -93,6 +96,9 @@ function useValue(): useValueReturn {
       retain: false,
     });
   },
+  onClose: () => {
+    setHasCameraState(false);
+  },
   shouldReconnect: () => true,
   retryOnError: true,
 });