average gradients to update global theta? #7
Comments
@weicheng113 I think the same as you. In my code, I am adding the following division:
p._grad = shared_grad_buffers.grads[n+'_grad'] / params.num_processes
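A minimal sketch of where that division could sit in the chief's update step, assuming the names implied by the snippet above (`shared_grad_buffers`, `params.num_processes`) plus a hypothetical `reset()` helper on the buffers; the actual chief.py may be organised differently:

```python
def update_global_params(model, shared_grad_buffers, params, optimizer):
    """Apply the accumulated worker gradients to the global model."""
    for n, p in model.named_parameters():
        # The shared buffer holds the sum of the workers' gradients;
        # dividing by the number of workers turns it into an average.
        p._grad = shared_grad_buffers.grads[n + '_grad'] / params.num_processes
    optimizer.step()
    shared_grad_buffers.reset()  # hypothetical helper: clear the buffers for the next round
```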
@yusukeurakami Thanks for the reply. Do you mean you are going to add the averaging at this line - Line 16 in ec93034 - or have you already added it somewhere that I did not find? Thanks.
@yusukeurakami Sorry, I thought you were the author of the code. :) By the way, is the training working fine after you applied the division?
@weicheng113 No problem. I replied to you because I was stuck at the same place. I don't have enough data points to compare the results yet. I will update with my results when I get them.
@yusukeurakami Thanks a lot.
@weicheng113 I've run my training with 7 workers in total, so with averaging the gradients are divided by 7 at every update. However, from the results, both the averaged and non-averaged versions converged to the same values in almost the same number of update steps. I don't really understand why it behaves the same even though the parameter updates were 7 times smaller...
@yusukeurakami Thanks for sharing these findings. I don't understand it either. My gut feeling is that averaging should make the updates steadier, with smaller steps. Could it be that the environment you are trying to solve is simple enough that the difference does not show?
@weicheng113 I am running a robot arm with 7 joints in continuous action and state space (an original Mujoco environment). It should be complex enough.
@yusukeurakami Ok, thanks.
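A possible (unconfirmed) explanation for the identical curves reported above: if the global parameters are updated with an adaptive optimizer such as Adam, the step is normalized by running gradient statistics, so scaling every gradient by a constant like 1/7 barely changes the update actually applied. A small self-contained check of that property, not taken from this repository:

```python
import torch

# Two identical parameter vectors; one sees raw ("summed") gradients,
# the other sees the same gradients divided by 7 ("averaged").
torch.manual_seed(0)
w_sum = torch.nn.Parameter(torch.randn(5))
w_avg = torch.nn.Parameter(w_sum.detach().clone())
opt_sum = torch.optim.Adam([w_sum], lr=1e-3)
opt_avg = torch.optim.Adam([w_avg], lr=1e-3)

for _ in range(1000):
    g = torch.randn(5)          # stand-in for a summed worker gradient
    w_sum.grad = g.clone()
    w_avg.grad = g.clone() / 7  # "averaged" gradient
    opt_sum.step()
    opt_avg.step()

# Adam divides by sqrt(v), so the constant factor largely cancels;
# the remaining gap comes only from the eps term and should be tiny.
print(torch.max(torch.abs(w_sum - w_avg)).item())
```

With plain SGD the two runs would instead differ by exactly the factor of 7 in step size, so this explanation only applies if the chief's optimizer is adaptive.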
Thanks for the nice implementation in PyTorch, which made it easier for me to learn.
Regarding the chief.py implementation, I have a question about the updates to the global weights. From the algorithm pseudocode in the paper, it seems that averaged gradients from the workers are used to update the global weights, but chief.py looks like it uses the sum of the workers' gradients? Thanks.
Cheng
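A toy illustration of the two behaviours being compared (not the repository's code). For a plain SGD step, applying the summed gradient at learning rate lr is equivalent to applying the averaged gradient at learning rate lr * num_workers:

```python
import torch

W = 7  # number of workers
worker_grads = [torch.randn(3) for _ in range(W)]  # toy per-worker gradients

summed = torch.stack(worker_grads).sum(dim=0)  # what accumulating into a shared buffer gives
averaged = summed / W                          # what the paper's pseudocode suggests

# The two only differ by a constant rescaling of the learning rate.
lr = 0.01
assert torch.allclose(summed * lr, averaged * (lr * W))
```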