Hi, thank you for your code. I trained the GRL base model on the denoising (dn) task, but the training speed is a bit slow. Is this normal?

Training config:
Batch = 1
gpus = 2
workers = 2
patch_size = 128
stripe_size1 = 32
stripe_size2 = 64
I have the same issue. It takes about 90 seconds per 10 iterations with a 288 patch size.
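For anyone comparing numbers across setups, a small timing helper like the one below gives a consistent seconds-per-iteration figure. This is a hypothetical sketch, not code from the GRL repo; `step_fn` stands in for whatever one training step is in your setup.

```python
import time

def seconds_per_iteration(step_fn, n_iters=10):
    """Run step_fn n_iters times and return the average wall-clock
    seconds per iteration (e.g. ~9.0 for 90 s / 10 iterations)."""
    start = time.perf_counter()
    for _ in range(n_iters):
        step_fn()
    elapsed = time.perf_counter() - start
    return elapsed / n_iters

# Usage with a dummy step standing in for a real training step:
avg = seconds_per_iteration(lambda: sum(range(10_000)), n_iters=10)
```

Reporting this number together with patch size, batch size, and GPU model would make the slowdown reports here directly comparable.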
Same here. I trained GRL-S and it used much more memory than I expected: about 34 GB on an A100, while GRL-Base ran out of memory.
Would it be possible for the author to publish the official training code and link it in the README.md? Thanks!