EfficientZero high memory consumption / keeps increasing after replay buffer is full #26
Comments
in lines 240-241 of
and changing lines 252-253 to
also, try explicitly doing
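(The referenced file and the exact code are not shown in this thread, so the following is only a guess at the kind of explicit cleanup being suggested. A minimal, hypothetical sketch, with purely illustrative names rather than the actual code at lines 240-241 / 252-253:)

```python
import gc

import torch
import torch.nn as nn


def set_target_weights(target_model: nn.Module, state_dict: dict) -> None:
    """Copy new weights into the target network, then free temporaries.

    Hypothetical helper: names here are illustrative only.
    """
    target_model.load_state_dict(state_dict)
    # Drop the local reference to the raw state dict and collect right away,
    # instead of waiting for the next automatic GC cycle.
    del state_dict
    gc.collect()
    if torch.cuda.is_available():
        # Return cached, unused GPU memory blocks to the driver.
        torch.cuda.empty_cache()
```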
btw, in train/mean_score of your posted plot, the 100K on the x-axis is not for Atari 100K, but for Atari 10M (i.e., 10M interactions with the env)?
The x-axis corresponds to training steps (not environment steps). My experiments were scheduled to run 900k training steps while performing 30M environment steps (I stopped them at around 600k). This means that for each 100k training steps on the x-axis, around 30/9 ≈ 3.33M environment steps are processed. Is that clearer?
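Spelled out as a quick sanity check with the numbers above:

```python
# 900k training steps scheduled against 30M environment steps.
training_steps_total = 900_000
env_steps_total = 30_000_000

env_steps_per_training_step = env_steps_total / training_steps_total   # ~33.3
env_steps_per_100k_training = 100_000 * env_steps_per_training_step    # ~3.33M
print(f"{env_steps_per_100k_training:,.0f}")  # 3,333,333
```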
Thanks for your suggestions :). I already tried to add periodic
I did not try the experiment at the large scale you discussed, but the change to the code relevant to
Lastly, in line 17 of
Hmm, interesting. Could it just be because you never get to load the target weights in your experiments, since they are shorter than the target model checkpoint interval (meaning you never enter the if statement in line 252)?
No, it is just because this would save RAM, so
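(For readers following along, here is a minimal, self-contained sketch of the interval-gated target-weight loading being discussed, i.e. only refreshing the target model every N steps; all names and values are placeholders, not EfficientZero identifiers:)

```python
import torch.nn as nn

TARGET_MODEL_INTERVAL = 200   # placeholder interval
TOTAL_TRAINING_STEPS = 1_000  # placeholder run length

model = nn.Linear(4, 4)         # stands in for the training network
target_model = nn.Linear(4, 4)  # stands in for the target network

for step in range(TOTAL_TRAINING_STEPS):
    if step % TARGET_MODEL_INTERVAL == 0:
        # Refresh the target weights only every TARGET_MODEL_INTERVAL steps,
        # so a fresh copy of the state dict is not materialised each iteration.
        target_model.load_state_dict(model.state_dict())
    # ... one training update on `model` would go here ...
```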
I am currently experimenting on scaling EfficientZero to learning setups with high-data regimes.
As a first step, I am running experiments on Atari, with a replay buffer of 1M environment steps.
While doing this, I observed that RAM consumption keeps increasing long after the replay buffer has reached its maximum size.
Here are TensorBoard plots for Breakout, from a 600k-training-step run (20M environment steps / 80M environment frames):
I run the experiments on cluster nodes with 4 Tesla V100 GPUs, 40 CPUs, and 187 GB of RAM.
As you can see, although the maximum replay buffer size ("total_node_num") is reached after 30k training steps, RAM usage (in %) keeps increasing until around 250k steps, going from 80% to 85%.
Ideally, I would also like to increase the batch size. But it seems like the problem gets worse in that setting:
The orange curves are from the same Breakout experiments, but with a batch size of 512 (instead of 256) and a smaller replay buffer size (0.1M). Here the maximum replay buffer size is reached at 4k training steps, but memory keeps increasing beyond 100k steps.
I understand that a bigger batch means more RAM because more data is being processed when updating and doing MCTS, but that does not explain why memory keeps increasing after the replay buffer fills up.
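For a rough sense of scale, here is a back-of-envelope estimate of the raw observation storage in the 1M-step buffer (assuming 96x96 RGB frames stored as uint8; the exact shapes depend on the config, so treat this as an order-of-magnitude figure):

```python
buffer_transitions = 1_000_000
bytes_per_frame = 96 * 96 * 3                        # uint8 pixels per frame
raw_gb = buffer_transitions * bytes_per_frame / 1e9
print(f"{raw_gb:.1f} GB")                            # ~27.6 GB
```

Even allowing for frame stacking and the per-transition targets, this footprint should plateau once the buffer stops growing, which is why the continued increase is surprising.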
Any ideas on what causes this high RAM consumption, and how we could mitigate it?
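In case it helps, here is a generic diagnostic sketch (not part of the EfficientZero code base) for logging the resident set size of each process in the job, to attribute the growth to a specific worker:

```python
import time

import psutil  # pip install psutil


def log_rss(interval_s: float = 60.0) -> None:
    """Periodically print the RSS of this process and all of its children."""
    parent = psutil.Process()
    while True:
        for proc in [parent] + parent.children(recursive=True):
            try:
                rss_gb = proc.memory_info().rss / 1e9
                print(f"pid={proc.pid:<7} name={proc.name():<24} rss={rss_gb:.2f} GB")
            except psutil.NoSuchProcess:
                continue
        time.sleep(interval_s)
```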
Run details
Here are the parameters used for the first experiment I described (pink curves):