Overall training process #9
Hello, now that you know how to train it, could you share your training experience (including the environment configuration you used)? Thank you very much!
Hi! Has anyone reproduced the results?
I tried to reproduce it, but I was limited by GPU memory (CORA may need 8 x 80G). My result on OV-COCO novel classes with RN50 is slightly lower than 35.1 (about 34.3). You are welcome to discuss further via my WeChat: Maytherebe
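For anyone reproducing on fewer GPUs: when the effective batch size shrinks, a common DETR-family heuristic is to scale the learning rate linearly with it. A minimal sketch, where every number is an assumption for illustration rather than a value from CORA's config:

```python
# Linear LR scaling when the effective batch size changes.
# All values below are assumed for illustration, not CORA's defaults.
base_lr = 1e-4      # assumed LR tuned for the reference setup
ref_batch = 8 * 2   # assumed reference: 8 GPUs x 2 images per GPU
new_batch = 4 * 2   # e.g. 4 GPUs x 2 images per GPU

scaled_lr = base_lr * new_batch / ref_batch
print(f"scaled lr: {scaled_lr:.1e}")  # 5.0e-05
```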
@eternaldolphin Hi, which GPU devices did you use to reproduce the results?
4 x 40G
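A quick way to confirm what PyTorch actually sees before committing to a long run (a generic utility, nothing CORA-specific):

```python
import torch

# Print each visible GPU and its total memory before launching training.
for i in range(torch.cuda.device_count()):
    p = torch.cuda.get_device_properties(i)
    print(f"GPU {i}: {p.name}, {p.total_memory / 1024**3:.0f} GiB")
```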
That's wonderful! May I ask whether you could kindly open-source your implementation on GitHub, so that others can learn from it and reproduce the results of CORA? I hope I'm not being rude. Thanks.
Excuse me, what is your batch size setting? I used 4 x 48G GPUs with the batch size set to 4, but I got CUDA out of memory. @eternaldolphin
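One generic way around OOM at a fixed effective batch size is gradient accumulation: run smaller micro-batches and step the optimizer every few iterations. A minimal, self-contained PyTorch sketch with a stand-in model and random data (not CORA's actual training loop):

```python
import torch
from torch import nn

# Gradient accumulation: effective batch = micro_batch x accum_steps,
# but only one micro-batch resides in GPU memory at a time.
model = nn.Linear(10, 1)                      # stand-in for the detector
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()
accum_steps = 4                               # accumulate 4 micro-batches

optimizer.zero_grad()
for step in range(16):                        # stand-in for the data loader
    x, y = torch.randn(2, 10), torch.randn(2, 1)   # micro-batch of 2
    loss = loss_fn(model(x), y) / accum_steps      # scale so grads average
    loss.backward()                                # gradients accumulate
    if (step + 1) % accum_steps == 0:
        optimizer.step()                           # one update per window
        optimizer.zero_grad()
```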
Sorry for the missing information; you can refer to https://github.com/eternaldolphin/cora-dev. You are also welcome to use LaMI-DETR as a baseline, which can train OV-LVIS in one day with 8 x 40G A100s or in two days with 8 x 32G V100s.
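For reference, multi-GPU training in codebases like these is typically launched with `torchrun`, one process per GPU. A bare DistributedDataParallel skeleton under that assumption (generic PyTorch; the real entry points in cora-dev or LaMI-DETR will differ):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Skeleton of a DDP setup as launched by
# `torchrun --nproc_per_node=<num_gpus> this_script.py`.
# Generic PyTorch only; not the actual CORA/LaMI-DETR entry point.
def main():
    dist.init_process_group(backend="nccl")     # torchrun sets the env vars
    local_rank = int(os.environ["LOCAL_RANK"])  # one process per GPU
    torch.cuda.set_device(local_rank)
    model = torch.nn.Linear(10, 1).to(local_rank)  # stand-in for the model
    model = DDP(model, device_ids=[local_rank])
    # ... wrap the dataset in a DistributedSampler and train as usual ...
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```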
Thank you for your excellent work.
I am a little confused about the overall training process of CORA.
Could you please describe the overall training process?
Thank you very much!