An optimized object detection client for Frigate that leverages Apple Silicon's Neural Engine for high-performance inference using ONNX Runtime. Provides seamless integration with Frigate's ZMQ detector plugin.
## Features

- **ZMQ IPC Communication**: Implements the REQ/REP protocol over IPC endpoints
- **ONNX Runtime Integration**: Runs inference using ONNX models with optimized execution providers
- **Apple Silicon Optimized**: Defaults to the CoreML execution provider for optimal performance on Apple Silicon
- **Error Handling**: Robust error handling with fallback to zero results
- **Flexible Configuration**: Configurable endpoints, model paths, and execution providers
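The REQ/REP exchange can be sketched as below. This is a minimal illustration using pyzmq; the endpoint path, payload layout, and `run_inference` helper are placeholders for illustration, not Frigate's actual wire format.

```python
import zmq


def run_inference(frame: bytes) -> bytes:
    # Placeholder: a real detector runs the ONNX session here and, on
    # error, falls back to an all-zero result buffer.
    return b"\x00" * 16


def serve_one_request(endpoint: str = "ipc:///tmp/detector.sock") -> None:
    """Minimal REP loop body: receive one frame, reply with detection bytes.

    The endpoint and payload format are illustrative placeholders; Frigate's
    detector plugin defines the real tensor layout.
    """
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REP)
    sock.bind(endpoint)
    try:
        frame = sock.recv()                 # raw input tensor bytes from Frigate
        detections = run_inference(frame)   # inference (or zero-result fallback)
        sock.send(detections)               # raw detection bytes back to Frigate
    finally:
        sock.close()
```

A real detector wraps this body in a loop and keeps the socket open; one request is shown here only to keep the sketch small.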
## Quick Start

### Option A: macOS App (no terminal required)
1. Download the latest `FrigateDetector.app.zip` from the Releases page.
2. Unzip it and open `FrigateDetector.app` (first run: right-click → Open to bypass Gatekeeper).
3. A Terminal window will appear and automatically:
   - create a local `venv/`
   - install dependencies
   - start the detector with `--model AUTO`
### Option B: Makefile

```sh
make install
make run
```

The detector will automatically use the configured model and start communicating with Frigate.
## What's Included

- **Model Loading**: Uses whatever model Frigate configures via its automatic model loading
- **Apple Silicon Optimization**: Uses the CoreML execution provider for maximum performance
- **Frigate Integration**: Drop-in replacement for Frigate's built-in detectors
- **Multiple Model Support**: YOLOv9, RF-DETR, D-FINE, and custom ONNX models
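Execution-provider selection with a CPU fallback can be sketched as below. The `build_providers` helper is illustrative (not part of this project's API), but `CoreMLExecutionProvider` and `CPUExecutionProvider` are the actual ONNX Runtime provider identifiers:

```python
def build_providers(prefer_coreml: bool = True) -> list[str]:
    """Build an ONNX Runtime execution-provider list.

    CoreML is tried first on Apple Silicon; CPU is always kept as a
    fallback so inference still works if CoreML is unavailable.
    """
    providers: list[str] = []
    if prefer_coreml:
        providers.append("CoreMLExecutionProvider")
    providers.append("CPUExecutionProvider")
    return providers


# Creating the session (requires onnxruntime and a model file):
# import onnxruntime as ort
# session = ort.InferenceSession("model.onnx", providers=build_providers())
```

ONNX Runtime walks the provider list in order, so listing CPU last gives a graceful degradation path rather than a hard failure when CoreML cannot handle an operator.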
## Supported Models

The following models are supported by this detector:
| Apple Silicon Chip | YOLOv9 | RF-DETR | D-FINE |
| --- | --- | --- | --- |
| M1 | | | |
| M2 | | | |
| M3 | 320-t: 8 ms | 320-Nano: 80 ms | 640-s: 120 ms |
| M4 | | | |
## Model Configuration

The detector uses the model that Frigate configures:

- Frigate automatically loads and configures the model via ZMQ
- The detector receives model information from Frigate's automatic model loading
- No manual model selection is required; the detector works with Frigate's existing model management
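Receiving model information from Frigate might look like the following sketch. The JSON field names (`path`, `width`, `height`, `model_type`) are assumptions chosen for illustration, not Frigate's documented schema:

```python
import json
from dataclasses import dataclass


@dataclass
class ModelInfo:
    """Model parameters the detector needs before it can run inference."""
    path: str        # filesystem path to the ONNX model
    width: int       # expected input width in pixels
    height: int      # expected input height in pixels
    model_type: str  # e.g. "yolov9", "rf-detr", "d-fine"


def parse_model_message(payload: bytes) -> ModelInfo:
    """Decode a hypothetical JSON model-configuration message from Frigate."""
    cfg = json.loads(payload)
    return ModelInfo(
        path=cfg["path"],
        width=int(cfg["width"]),
        height=int(cfg["height"]),
        model_type=cfg["model_type"],
    )
```

Once parsed, the detector can open the ONNX session for `path` and validate incoming frames against `width` × `height` without any manual configuration.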