Frigate+ Base Model 2024.0 Update #9466
23 comments · 43 replies
-
IIRC the first base model had ~15k images on it. How many does 2024.0 have? And do you think that is even a relevant metric?
-
Thank you so much for all the work you do! Roughly how many model releases do you think you will do this year? Just a ballpark figure would be great so I can “budget” my tokens accordingly, as Australia has a bad exchange rate right now and I’m wanting to make my token allocation last the whole year 😀
-
Can we please have a tag for USPS just like we have for FedEx, UPS, and Amazon, @blakeblackshear?
-
Thanks Nicolas... is there any change needed in Frigate beyond the API key in the config of the addon?
(In reply to Nicolas Mowen, Jan 27, 2024: "This has been asked many times and is coming along with other labels in the future, as was referenced at the end of this post.")
-
Good to see Frigate constantly improving. What is the timeline for OpenVINO?
-
Great work as always! I’ll be training a new model later today. A question: if I am not mistaken, the balance of positives and false positives we submit is important when it comes to training; if you submit, say, 100 false positives and 100 positives, then only 20 of your false positives will be used during training. On that basis it would be really helpful if the portal showed us the number of positives and the number of false positives we have submitted. Will that be coming, and if so, any idea when?
-
Maybe a stupid question. Do I need to train any pictures myself and, if so, how many? I can't see any models in the models section.
-
I have another suggestion: could you use Frigate+ to trigger a detection on a snapshot using the locally installed model and hardware? Usually snapshots sent to Frigate+ have a single detected object (or false positive), but I'd like to use the model to detect everything and then just tweak the bounding boxes/labels/etc. I guess a local API (accessible from the workstation you use to access Frigate+) could be called over a web protocol to send the snapshot from Frigate+ back to the local Frigate instance.
-
The new keyboard shortcuts are honestly insanely good and welcome. I think they just cut the time I spent labelling images to about a quarter ❤️ Thank you so much for them!
-
Thanks for all your work! I updated yesterday morning, as I also happened to have 700 annotated images ready. I can't be 100% sure the improvements are all down to the model, but so far there have been fewer false positives. And a HUGE improvement in face recognition for my use case.
-
Two relatively random questions, but it seems like the correct thread to ask them on:
And as always, great work. Frigate is by far the most useful and stable piece of home automation (paid or otherwise) that I use.
-
A question about false positive submission.
-
I'm still missing the feature to report false negatives (that is, to easily submit images for which no objects were detected). I think most of the current images in the database only include images with correct or wrong object detections (either detected by the COCO model included in Frigate by default, or by the various Frigate+ models), as uploading images from recordings is a pain in the neck at the moment.
-
Do you plan to offer a way to test the 2024 base model for a few hours/days before taking out a subscription?
-
This change has led to me making a few mistakes. I'm sure I'll get used to it, but I'm sure others are as careless as me lol. Can I request that the label list choice is put right there next to the add button? It could still retain the default/last choice without a prompt, but it would be easier to change before drawing the label so we don't forget. Also, I just trained my first model on 2024.0 and the perpetual false positives I've been struggling with seem way down, at least in the first couple of hours! I could never get the old model to realize an umbrella without a person is not a person, and there were quite a few chairs that the old model would insist someone was sitting in no matter what, but 3 hours in, it finally figured that out! Great work, great improvements!
-
Hmm, I was excited when I switched yesterday to the new model and stopped seeing the continuous false positives with the umbrellas and the chairs that are sometimes occupied but always indicate a person, but apparently I've just trained a lot of them out completely. Now I'm having false negatives everywhere and seem to have missed tons of events over the last day. I upped my threshold to 90% under the previous model because I had so many false positives, and 99% of the true positives seemed to hit 90% anyway, so maybe I need to try dropping it, but I do seem to be struggling with this.
For the moment I've switched back to my previous model as my next week is quite busy, but I figured I'd mention it here before trying to create yet another model and getting more training in. I guess it might also be that I don't have enough training data. I got my last standalone DVR integrated with my Frigate installation and now have 38 cameras (and resource usage seems to be good; motion detection issues are better than they have ever been). I only have 823 verified images, which I guess means I'm very, very short of the recommended minimum 3,800 trained images for my installation. Some of the cameras are very tough: a number of them are in areas that are only supposed to get 2-3 people a week walking by, unless something is wrong (hence the camera). If you feel like taking a look to see if I'm doing something poorly or wrong, I'd appreciate it, but I won't complain any more at least until I hit the recommended minimums!
-
I read about people here reporting way fewer false positives, but I wonder how this works for "stubborn" spots. I have a hanging chair outdoors that moves a bit in the wind, and Frigate keeps thinking it's a person. I submitted something like 150 images teaching the model it is not a person, retrained, and switched to the new model, and it still often reports the chair as a person with higher than 90% confidence.
-
I'm trying to get started with Frigate+. Found the following in the docs:
But how do I submit images of packages, for example, if the default model doesn't track them? Or do I just have to submit images of objects that are tracked (e.g. car, person), and only after getting my first model can I start submitting images of packages for a future model?
-
I have a set of 7 cameras and I was planning on changing the angle of two of them. Will that negatively affect the models I've trained?
-
It might be nice if we could get a count of the different labels we have used and from which cameras. That way we can send an appropriate balance of object types for each camera. I've got 9 cameras and I'm finally up to about 50 on each. Most are dogs and the same cars parked outside my house and in the driveway. Do you really want that many repetitive pictures? I'm getting them at different times of the day and in different weather conditions (it rained last night). Also, in the future, do we need to submit 100 from each camera or just from the one we are trying to improve performance on?
-
I'm finding that the model does not do nearly as well at night for cars. I'm using Dahua TM5442AS cameras with IR, pointed at a driveway. I don't seem to be able to land on a sweet-spot threshold that avoids false positives during the day while still identifying cars at night. Wondering if others see the same deficiency.
-
Hey @NickM-27, how are you getting multiple objects detected in a single snapshot? I've only ever seen one object labeled in any of my captures at any given time.
-
TLDR: The annotation tool was upgraded to speed up labeling and the base model was updated by using some algorithms to select a diverse set of user images to include in the training set. You must request a new model to get a model trained on the new base.
Annotator upgrades
Before I get into the base model update, there were also some upgrades to the annotation tool in Frigate+ for a faster keyboard workflow:
Two common requests are zooming further and AI assisted labeling. These are coming in a future update.
Base Model 2024.0
Stats on new user submissions since launch
Image submissions have skyrocketed with no signs of slowing down since the initial invitation-only launch. There have now been more than 10 times as many images submitted as when the first base model was trained. In addition, the first base model was trained before false positive submissions existed in Frigate+, so there is an incredible number of images available for training.
Selecting new images
The first thing I had to figure out was which images should be selected to add to the training set. The typical guidance is to focus on selecting a diverse sampling of training examples.
To do this, I used an embeddings model to generate embeddings on all the images. This allows me to plot those images on a scatter plot. In the following image, every image is a unique dot. The pink dots represent the images used for the first base model.
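For those curious what this step looks like in code, here is a minimal sketch. The specific embeddings model and the 2D projection used for the scatter plot aren't specified above; a CLIP-style image encoder via sentence-transformers and UMAP are assumptions purely for illustration.

```python
# Illustrative sketch only: embed every submitted image and project to 2D
# so each image can be drawn as a single dot on a scatter plot.
# The encoder ("clip-ViT-B-32") and UMAP are assumptions, not necessarily
# what was actually used.
from pathlib import Path

import numpy as np
import umap
from PIL import Image
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("clip-ViT-B-32")

image_paths = sorted(Path("submissions").glob("*.jpg"))  # hypothetical layout
images = [Image.open(p).convert("RGB") for p in image_paths]

# One embedding vector per image.
embeddings = encoder.encode(images, batch_size=64, convert_to_numpy=True)

# Reduce to 2D for plotting; nearby dots are visually similar images.
coords_2d = umap.UMAP(n_components=2, random_state=42).fit_transform(embeddings)

np.save("embeddings.npy", embeddings)
np.save("coords_2d.npy", coords_2d)
```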
From there, I used a k-means clustering algorithm to group the images into clusters of visually similar images. This image shows the results of the clustering. Many of the clusters appear similar in color, but there are almost 10k unique clusters.
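A sketch of the clustering step, assuming scikit-learn's MiniBatchKMeans over the embeddings from the previous snippet (the actual clustering implementation isn't specified here):

```python
# Illustrative sketch: group visually similar images by clustering their
# embeddings. MiniBatchKMeans keeps ~10k clusters tractable on a large set.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

embeddings = np.load("embeddings.npy")  # from the embedding sketch above

n_clusters = 10_000  # roughly the cluster count mentioned above
kmeans = MiniBatchKMeans(n_clusters=n_clusters, random_state=42, batch_size=4096)
cluster_ids = kmeans.fit_predict(embeddings)  # one cluster id per image

np.save("cluster_ids.npy", cluster_ids)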
Then I selected one image from each cluster that did not already contain an image in the base training set. This image has a dark blue dot for each image used for the first base model.
This image adds a light blue dot for all of the images I selected to add to the base training set for the second base model. You can see it avoided selecting more images in areas that are already heavily represented.
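The selection itself can be sketched as below, assuming a boolean mask marking which images were already in the first base training set; how the single image per cluster is chosen isn't specified, so the first member is used as a placeholder.

```python
# Illustrative sketch: pick one new image from each cluster that has no image
# from the first base training set, skipping regions already well represented.
import numpy as np

cluster_ids = np.load("cluster_ids.npy")   # from the clustering sketch above
in_base_set = np.load("in_base_set.npy")   # hypothetical boolean mask: True for
                                           # images already in the first base set

selected = []
for cluster in np.unique(cluster_ids):
    members = np.where(cluster_ids == cluster)[0]
    if in_base_set[members].any():
        continue  # this part of the embedding space is already represented
    selected.append(members[0])  # placeholder choice of exemplar for the cluster

print(f"{len(selected)} images selected to add to the base training set")
```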
I went through a similar process for sourcing false positive submissions.
Once I had the images I wanted to include, I had to review every single one to make sure the bounding boxes were accurate and nothing was missing, so that the base training set isn't polluted.
Performance metrics
Quantifying the improvement isn't exactly a straightforward process. The training set is changing, which means the goal posts are moving as well. A very popular metric used to evaluate performance is Mean Average Precision (mAP), the standard for evaluating performance in the COCO competition. I calculated the mAP scores for the initial base model and the 2024.0 model on the most recent validation set, which is a diverse sampling of current user submissions.
The mAP for 2024.0 is ~15% higher. I don't think it's correct to say that means the model is 15% better than it was before, but this is a standard way to evaluate model performance. I may develop better evaluation methods in the future.
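For reference, COCO-style mAP can be computed roughly as follows; the tooling used for the numbers above isn't stated, so pycocotools (the reference implementation of the metric) and the file names are assumptions.

```python
# Illustrative sketch: evaluate a model's detections against the validation set
# using the reference COCO mAP implementation. File names are hypothetical.
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ground_truth = COCO("validation_annotations.json")
detections = ground_truth.loadRes("model_detections.json")

evaluator = COCOeval(ground_truth, detections, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP at IoU=0.50:0.95 and related breakdowns

map_score = evaluator.stats[0]  # mAP @ IoU=0.50:0.95, the headline number
print(f"mAP: {map_score:.3f}")
```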
Caveats
While all the metrics from this base model indicate that it is an across-the-board improvement, I do expect that the differences will result in some new false positives. I personally experienced this when running my first fine-tuned model from the new base. If that happens, you may need a second training to address them. I believe this will occur less and less frequently over time as false positives are submitted against multiple base model versions.
More to come