Central network-based database container for multiple Frigate processors #3305
Comments
Peewee (the current Python DB library) seems to support a handful of different database types, so this should not be extremely difficult, but it is likely still relatively complicated to get going and tested. I am not familiar with CockroachDB, but I think PostgreSQL is suitable for this type of use case and would be a good starting point.
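For illustration, a minimal sketch of what that could look like with peewee's db_url helper; the connection URL, credentials, and file paths are placeholders rather than real Frigate settings, and PostgreSQL would also require the psycopg2 driver in the container:

```python
from playhouse.db_url import connect

# Current behaviour: an embedded SQLite file inside the container
# (illustrative path, commented out).
# db = connect("sqlite:///config/frigate.db")

# The same peewee-based code can be pointed at an external PostgreSQL
# container just by changing the connection URL. Host, credentials and
# database name below are placeholders.
db = connect("postgresql://frigate:secret@frigate-db:5432/frigate")
db.connect()
print(db.execute_sql("SELECT version()").fetchone())
```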
I see usage even in smaller deployments (e.g. an edge TPU-enabled device for event detection, with ffmpeg and file storage on a NAS server). I guess the "easy" workaround would be to mount NFS shares and store data there (clips, recordings, SQLite files). No idea on the stability of this "solution".
Mounting shares works great since it goes through the standard Docker flow; many users already use Frigate this way.
+1 on this. At 10 cameras and with limited ability to get Corals that work with my systems, I was having trouble figuring out how to scale out my system appropriately. Any device I could get the M.2 TPU to work on didn't have a good enough GPU for decoding, plus I was losing the benefits of my server (network link aggregation, RAID, etc.). I finally found some mini PCIe Corals and ordered a motherboard I hope will support them, but it would be nice to be able to scale multiple workers into a system. The main feature missing with the current approach is a unified event viewer. If that were added to the Lovelace card or in Home Assistant, this potentially isn't needed, but from an architectural perspective I think it would still be better to have it at the Frigate level. One idea: with zones it would be possible to link events together across multiple cameras. Also, from a configuration perspective, maybe it's possible to split it into microservice-style workers, not just detection vs. recording but also a worker for database cleanup (removing expired clips, etc.).
@kbrouder to be clear, the system that does the decoding needs to do the detection as well; I don't believe it's feasible to separate those out.
Using a combination of a network share and a central database, I think the unified view should be feasible. Curious what @blakeblackshear thinks, as I may be missing something.
Yeah, that makes sense... that would have to be a whole rearchitecture where the decoder sends the image via an API to a detection system. That's probably not worth it, but a unified interface where several clusters can contribute to a master, or even a master that can read from child databases, would be helpful.
The problem in general is latency too; the latency might be too high for the real-time approach that Frigate has.
Very useful for defining the scope of the enhancement. My assumption is that pointing peewee to MariaDB or another supported database is a relatively easy task, and worthwhile on its own, because having the option to externally connect to the Frigate database provides a foundation for new and interesting capabilities, including better horizontal scaling.

Taking on the goal of supporting horizontal scaling with a common user interface and shared database, I suggest limiting this FR's scope to shared network storage. Here's why, illustrated with and without shared storage access:

Horizontal scaling: multiple Frigate instances with a common database and shared storage

Advantages:

Required work:

Horizontal scaling: multiple Frigate instances with a common database, without shared storage access

Advantages:

Disadvantages compared to shared storage:

Required work:

The more I consider these two scenarios, the more I believe independent non-shared storage should be out of scope for the current feature request due to the level of effort. A single user interface over multiple databases should also be out of scope, imo.
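To make the shared-storage assumption concrete, here is a hedged sketch (the mount point, field names, and file names are made up for illustration, not Frigate's real schema) of why the single-UI case stays simple when every node mounts the same share at the same path:

```python
from pathlib import Path

# Assumption: every processor and the UI node mount the same network share
# at the same location, so a clip path stored in the central database
# resolves identically everywhere.
SHARED_MEDIA_ROOT = Path("/media/frigate")

def resolve_clip(event_row: dict) -> Path:
    """Resolve the clip for an event produced by any processor."""
    # event_row is an illustrative dict, not Frigate's actual event record.
    return SHARED_MEDIA_ROOT / event_row["clip"]

print(resolve_clip({"camera": "driveway", "clip": "clips/driveway-1651234567.mp4"}))
```

Without the shared mount, the UI node would instead have to proxy or fetch each clip from the processor that recorded it, which is where most of the extra work comes from.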
Maybe something like S3 (or MinIO) could be used for storage? This would also allow tags to be attached to the objects.
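As a rough sketch of that idea (not something Frigate does today; the endpoint, bucket, credentials, and file paths below are placeholders), the S3 API that MinIO exposes lets metadata ride along as object tags:

```python
import boto3

# Placeholder endpoint and credentials for a self-hosted MinIO instance.
s3 = boto3.client(
    "s3",
    endpoint_url="http://minio:9000",
    aws_access_key_id="frigate",
    aws_secret_access_key="changeme",
)

# Upload a clip and attach searchable tags (camera and label) to the object.
with open("/media/frigate/clips/driveway-1651234567.mp4", "rb") as clip:
    s3.put_object(
        Bucket="frigate-clips",
        Key="driveway/1651234567.mp4",
        Body=clip,
        Tagging="camera=driveway&label=person",
    )
```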
I agree here; scope creep could make this very difficult to tackle at one time, and I think this is a great starting point. The storage part using network shares should (I think) come mostly for free, since multiple servers could point to the same network share and not overwrite each other's files.
In consequence, shared storage only means shared access. XProtect defines the storage location per camera. This model could also work for Frigate, but the Frigate UI node would require knowledge of, and permission to access, every camera's storage location. It can be useful to spread streams across cheap, large but slow drives.
I haven't created a dev environment or examined the code base yet, but isn't the UI already separate? Is the UI separate from the API? Would it be difficult to have a UI that can pull from multiple instances? I'm probably missing something in the architecture, but that would be the cleanest approach to meeting several needs, imo. I'm a .NET guy and haven't done any web dev with video, so I'm no expert by any means. Got too many projects at the moment, but at some point I'd love to contribute to this project. I bet this will be solved by someone who knows what they are doing before I find the time.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Describe what you are trying to accomplish and why in non-technical terms
The capability to host a database container separate from Frigate that supports Frigate object recognition and storage scale-out across multiple low-power video processing containers. SQLite does not support this deployment scenario well, and SQLite is also limited in performance at scale.
Describe the solution you'd like
Usage context and design considerations
Frigate currently supports "pseudo" scale-out deployment using Home Assistant to create a single surface to consume data from multiple Frigate video processing servers, yet Frigate is so much larger than its Home Assistant integration! I'm a huge fan of Home Assistant, but the HA media browser integration doesn't meet my needs.
Multiple Frigate container database silos can have a single surface in Home Assistant, but Frigate itself isn't capable of providing a singular video viewing surface. The current pseudo scale-out is multiple Frigate databases that are unaware of each other. A central database over multiple video processing containers provides a true scale-out model.
Example usage:
A deployment with 40 cameras needs to show events from all 40 cameras in one place. With a central database, it is possible to run 10 Frigate video processing servers, each with a separate storage location, that feed a single metadata repository of clips and recordings.
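As a hedged sketch of that layout (this is not Frigate's actual schema; the model, field names, and connection details are assumptions for illustration), each processor could tag its rows with its own identity and storage location so one viewer can list events from all 40 cameras:

```python
from peewee import CharField, FloatField, Model, PostgresqlDatabase

# Placeholder connection to the shared metadata database.
central_db = PostgresqlDatabase(
    "frigate", user="frigate", password="changeme", host="frigate-db", port=5432
)

class Event(Model):
    processor = CharField()     # e.g. "frigate-03", one of the 10 servers
    storage_root = CharField()  # where that processor keeps its media
    camera = CharField()
    label = CharField()
    start_time = FloatField()
    clip = CharField()          # path relative to storage_root

    class Meta:
        database = central_db

# A unified viewer is then a single query across every processor and camera.
latest = Event.select().order_by(Event.start_time.desc()).limit(50)
for e in latest:
    print(f"{e.processor}/{e.camera}: {e.label} -> {e.storage_root}/{e.clip}")
```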
Food for thought: a Frigate server could be a camera itself..... 😃 BTW, this scaling design is similar to the Milestone XProtect scale-out model that has proven successful for large organizations.