
Overall training process #9

Open
yuki1ssad opened this issue May 24, 2023 · 8 comments

@yuki1ssad

Thank you for your excellent work.
I am a little confused about the overall training process of CORA.
Could you please describe the overall training process?
Thank you very much!

@QHCV

QHCV commented Jun 2, 2023

Thank you for your excellent work. I am a little confused about the overall training process of CORA. Could you please describe the overall training process? Thank you very much!

Hello, do you know how to train it now? If so, could you share your training experience (including the environment setup)? Thank you very much!

@wusize

wusize commented Sep 2, 2023

Hi! Has anyone reproduced the results?

@eternaldolphin

eternaldolphin commented Sep 2, 2023

Hi! Has anyone reproduced the results?

I tried to reproduce it, but limited by GPU memory (CORA may need 8 × 80G), my novel-class result on OV-COCO is slightly lower than the reported 35.1 with RN50 (about 34.3).

You are welcome to discuss further via my WeChat: Maytherebe

@kinredon

@eternaldolphin Hi, which GPU devices do you use to reproduce the results?

@eternaldolphin

@eternaldolphin Hi, which GPU devices do you use to reproduce the results?

4 × 40G

@shaniaos

shaniaos commented Sep 24, 2023

Hi! Has anyone reproduced the results?

I tried to reproduce it, but limited by GPU memory (CORA may need 8 × 80G), my novel-class result on OV-COCO is slightly lower than the reported 35.1 with RN50 (about 34.3).

You are welcome to discuss further via my WeChat: Maytherebe

That's wonderful! May I ask whether you could kindly open-source your implementation on GitHub, so that others can learn from it and reproduce CORA's results? I hope I'm not being rude. Thanks.

@ysysys666

Excuse me, what batch size did you use? I used 4 × 48G GPUs with the batch size set to 4, but got CUDA out of memory. @eternaldolphin
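(A generic workaround when the desired batch size does not fit in GPU memory is gradient accumulation: split each batch into micro-batches, sum their scaled gradients, and apply a single optimizer step. The sketch below is a toy pure-Python illustration with a hypothetical 1-D linear model and made-up data, not CORA's actual training loop, but the principle carries over to any framework.)

```python
# Gradient accumulation: B/b backward passes over micro-batches of size b
# produce the same update as one backward pass over a batch of size B,
# because the scaled micro-batch gradients sum to the full-batch gradient.
# Toy model: y = w * x with mean-squared-error loss (illustrative only).

def grad(w, batch):
    # d/dw of the mean squared error over the batch
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def full_batch_step(w, data, lr):
    # one ordinary SGD step over the whole batch
    return w - lr * grad(w, data)

def accumulated_step(w, data, lr, micro_size):
    # accumulate micro-batch gradients, scaled so their sum equals
    # the full-batch gradient, then apply a single update
    g = 0.0
    for i in range(0, len(data), micro_size):
        micro = data[i:i + micro_size]
        g += grad(w, micro) * len(micro) / len(data)
    return w - lr * g

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9), (4.0, 8.2)]  # made-up samples
w_full = full_batch_step(0.5, data, lr=0.01)
w_accum = accumulated_step(0.5, data, lr=0.01, micro_size=2)
# the two updates coincide up to floating-point rounding
print(abs(w_full - w_accum) < 1e-12)
```

In a real framework this corresponds to calling `backward()` on each micro-batch's loss (scaled by `micro/total`) and stepping the optimizer only once per accumulation window, trading wall-clock time for peak memory.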

@eternaldolphin

eternaldolphin commented Sep 4, 2024

Hi! Has anyone reproduced the results?

I tried to reproduce it, but limited by GPU memory (CORA may need 8 × 80G), my novel-class result on OV-COCO is slightly lower than the reported 35.1 with RN50 (about 34.3).
You are welcome to discuss further via my WeChat: Maytherebe

That's wonderful! May I ask whether you could kindly open-source your implementation on GitHub, so that others can learn from it and reproduce CORA's results? I hope I'm not being rude. Thanks.

Sorry for missing the information; you can refer to https://github.com/eternaldolphin/cora-dev. You are also welcome to use LaMI-DETR as a baseline, which can train OV-LVIS in one day on 8 × 40G A100s, or in two days on 8 × 32G V100s.
