You should be able to download the subset required for EM VQ by providing the --benchmarks em flag to the CLI (see here). This will still download more videos than necessary, since it covers the entire EM benchmark, sitting at around 2.75TB.
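As a rough sketch (flag names other than --benchmarks are from memory and may differ from the actual downloader CLI, so please check `ego4d --help`):

```bash
# Hypothetical invocation of the Ego4D downloader CLI for the EM benchmark videos.
# Only --benchmarks is confirmed above; other flag/dataset names are assumptions.
ego4d \
  --output_directory /path/to/ego4d \
  --datasets full_scale annotations \
  --benchmarks em
```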
If you want to download less, I would recommend passing the video uids to download via --video_uid_file, deriving the uids from the annotation JSON files.
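A minimal sketch of that, assuming the VQ annotation files (e.g. vq_train.json / vq_val.json) expose a top-level "videos" list with "video_uid" entries; the exact file names and schema are assumptions here:

```bash
# Collect the video uids referenced by the VQ annotations (schema assumed),
# then restrict the download to just those videos.
jq -r '.videos[].video_uid' vq_train.json vq_val.json | sort -u > vq_video_uids.txt

ego4d \
  --output_directory /path/to/ego4d \
  --datasets full_scale annotations \
  --video_uid_file vq_video_uids.txt
```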
There are also canonical clips. These are clips specific to the benchmark task and are subsets of the videos. For VQ they are ~5 FPS clips with frames for the annotated portions of the video. They are much smaller, sitting at around 700GB (for all of EM).
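If the canonical clips are enough for your use case, something along these lines should fetch them instead of the full-scale videos (the "clips" dataset name is an assumption; verify against the CLI docs):

```bash
# Download the benchmark canonical clips rather than the full-scale videos
# ("clips" as the dataset name is an assumption; check `ego4d --help`).
ego4d \
  --output_directory /path/to/ego4d \
  --datasets clips annotations \
  --benchmarks em
```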
@miguelmartin75 thanks for the response! What is the purpose of canonical clips? Should I use them for training my proposed model? Or would that result in suboptimal training?
Thanks for this wonderful work!
How can I reduce the download size if I want to work only on the VQ2D task?
The command given here downloads more than 5TB of data: https://github.com/EGO4D/episodic-memory/blob/main/VQ2D/README.md#running-experiments