
Atari training and LBFGS gpu memory overhead #24

Open
ViktorM opened this issue Jun 2, 2017 · 0 comments
ViktorM commented Jun 2, 2017

Hi John,

I'm trying to apply TRPO to a robotics control task using vision, but I constantly hit GPU memory overhead in class NnRegression's fit method during the baseline calculation. The input is one 128x128 greyscale image plus 14 joint observations. The overhead shows up even when I try a smaller number of iterations and switch to a 96x96 image size. Replacing the LBFGS optimizer helped to some extent: there were no more crashes, but convergence and computation time got worse.
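For context, a back-of-envelope estimate of why a full-batch fit over image observations exhausts GPU memory. The rollout size of 50,000 timesteps here is a hypothetical assumption, not a figure from the issue:

```python
# Rough memory footprint of the raw observations for one full-batch fit.
timesteps = 50_000          # assumed rollout size, not from the issue
pixels = 128 * 128          # one greyscale frame
bytes_per_float = 4         # float32 storage
obs_bytes = timesteps * pixels * bytes_per_float
print(obs_bytes / 2**30)    # ~3.05 GiB for raw observations alone,
                            # before network activations or LBFGS history
```

Since a full-batch LBFGS step must also hold the forward activations and gradients for all timesteps at once, the actual peak usage is several times this figure.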

Did you encounter similar memory overhead issues during Atari training, and if so, how did you solve them? The input in Atari games is at least 4 times larger than in my case, so the stored volume of observation data in the paths should be even larger, or at least comparable to mine.
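The workaround mentioned above (swapping out LBFGS) can be sketched as minibatched first-order fitting of the baseline, so no single step needs gradients over the whole rollout in memory at once. This is a hypothetical NumPy sketch with a linear baseline and plain SGD, not the repo's code; `fit_baseline` and its parameters are illustrative names:

```python
# Hypothetical sketch: fit a linear value-function baseline with
# minibatch SGD instead of a full-batch LBFGS step, trading some
# convergence speed for a bounded per-step memory footprint.
import numpy as np

def fit_baseline(w, obs, returns, epochs=10, batch_size=64, lr=1e-2):
    """Minimize mean-squared error between obs @ w and returns."""
    n = obs.shape[0]
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        perm = rng.permutation(n)
        for i in range(0, n, batch_size):
            idx = perm[i:i + batch_size]     # only this slice is "live"
            x, y = obs[idx], returns[idx]
            pred = x @ w
            grad = 2.0 * x.T @ (pred - y) / len(idx)  # MSE gradient
            w -= lr * grad
    return w

# Toy usage: flattened observation features for a few thousand timesteps.
rng = np.random.default_rng(1)
obs = rng.standard_normal((2048, 32))
true_w = rng.standard_normal(32)
returns = obs @ true_w                        # noiseless synthetic targets
w = fit_baseline(np.zeros(32), obs, returns)
```

The same idea carries over to a neural-network regressor: iterate over minibatches and let a first-order optimizer accumulate progress, rather than handing the entire batch to LBFGS.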
