Track "K" frames at once instead of frame by frame #7515
Comments
@rohit901, could you please share your ideas on how to annotate faster using a tracker? I'm not sure I understood it from your issue. You could describe the pipeline step by step or share an existing good implementation in a third-party tool/demo.
Hi @nmanovic, it would be nice if the tracker predicted the next "K" frames instead of only the next single frame. For example, the third-party tool Supervisely (https://supervisely.com/) lets you track "K" future frames at once. A feature like that in CVAT would be beneficial, since we could then run the self-hosted CVAT tool on our local machines with GPUs.
Hello @nmanovic, I am interested in contributing to this project. Could you please guide me on the next steps to get involved?
Hi, please find our application on GSoC and contact us using one of the mentioned ways. We will guide you. https://summerofcode.withgoogle.com/programs/2024/organizations/cvat
Why not just add a function to track every frame automatically in one server request? It is not slow because it tracks every frame; it is slow because CVAT sends a server request for every frame.
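For illustration, here is a minimal sketch of what batching K frames into a single tracker call could look like, so only one request round-trip is needed instead of K. This is not CVAT's actual serverless API; OpenCV's CSRT tracker stands in for SiamMask/TransT, and `track_batch`, the box format, and the video-reading code are hypothetical.

```python
from typing import List, Tuple

import cv2  # requires opencv-contrib-python for the CSRT tracker
import numpy as np

Box = Tuple[float, float, float, float]  # (x, y, width, height)


def track_batch(tracker, frames: List[np.ndarray], init_box: Box) -> List[Box]:
    """Run a single-object tracker over K frames in one call.

    The tracker is assumed to expose OpenCV-style init()/update() methods.
    The point is that all K frames are processed server-side in one request
    instead of one request per frame.
    """
    if not frames:
        return []
    tracker.init(frames[0], tuple(int(v) for v in init_box))
    boxes: List[Box] = [init_box]
    for frame in frames[1:]:
        ok, box = tracker.update(frame)
        # If the tracker loses the object, repeat the last good box so the
        # annotator can correct it manually afterwards.
        boxes.append(tuple(box) if ok else boxes[-1])
    return boxes


if __name__ == "__main__":
    # Hypothetical usage: read the next K frames from a video and track one
    # object through all of them in a single batched call.
    K = 100
    cap = cv2.VideoCapture("video.mp4")
    frames = []
    for _ in range(K):
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    tracker = cv2.TrackerCSRT_create()
    predicted = track_batch(tracker, frames, (50, 60, 120, 80))
    print(f"predicted {len(predicted)} boxes in one batched call")
```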
It would be nice if you could have SiamMask auto-annotate the next 100 frames or so.
I am also interested in contributing to this, not because of GSoC but because I really need a tracker and the current functionality is very slow. Guidance on how this could be done would be appreciated.
@siddtmb , feel free to contribute. It is an open-source project. |
Actions before raising this issue
Is your feature request related to a problem? Please describe.
It seems to take a long time to annotate videos even with AI trackers like TransT.
I'm running the model on an M1 MacBook Air on CPU, but even on a GPU, I think running it frame by frame makes it very slow.
What is the best way to annotate videos? My videos contain many frames, roughly 200-700.
I have to run TransT frame by frame, and each frame takes 2-3 seconds. Is the current way to annotate videos to do them manually with interpolation?
Describe the solution you'd like
A faster semi-automatic or automatic way to annotate video datasets.
Describe alternatives you've considered
No response
Additional context
You may refer to related issues: #5686, #2949