average gradients to update global theta? #7

weicheng113 opened this issue Apr 28, 2019 · 8 comments

@weicheng113

Thanks for the nice implementation in PyTorch, which made it easier for me to learn.

Regarding the chief.py implementation, I have a question about the updates to the global weights. From the algorithm pseudocode in the paper, it seems the averaged gradients from the workers are used to update the global weights, but chief.py looks like it uses the sum of the workers' gradients? Thanks.

Cheng
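
For reference, a minimal sketch (not the repo's actual code) of the two update rules being discussed. All names here are illustrative; the real chief.py accumulates worker gradients into shared_grad_buffers rather than a Python list:

```python
import torch
import torch.nn as nn

# Illustrative stand-ins for the shared model and optimizer held by the chief.
global_model = nn.Linear(4, 2)
optimizer = torch.optim.Adam(global_model.parameters(), lr=1e-3)
num_workers = 7

# Pretend each worker has sent one gradient tensor per parameter.
worker_grads = [[torch.randn_like(p) for p in global_model.parameters()]
                for _ in range(num_workers)]

optimizer.zero_grad()
for p, *grads in zip(global_model.parameters(), *worker_grads):
    summed = torch.stack(grads).sum(dim=0)
    # Summing (what chief.py appears to do):
    #   p.grad = summed
    # Averaging (what the paper's pseudocode suggests):
    p.grad = summed / num_workers
optimizer.step()
```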

@yusukeurakami

@weicheng113 I think the same as you do. In my code, I am adding the following division.

p._grad = shared_grad_buffers.grads[n+'_grad']/params.num_processes
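
If I read the repo right, that line would sit in the chief's update loop roughly as below; shared_model, optimizer, and shared_grad_buffers.reset() are assumed to follow the repo's naming, so treat this as a sketch rather than the exact code:

```python
# Sketch of the chief's update step with the proposed averaging applied.
# Only the division by params.num_processes is the change.
for n, p in shared_model.named_parameters():
    p._grad = shared_grad_buffers.grads[n + '_grad'] / params.num_processes
optimizer.step()
shared_grad_buffers.reset()  # assuming the buffer is cleared between rounds
```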

@weicheng113

weicheng113 commented May 1, 2019

@yusukeurakami Thanks for the reply. Do you mean you are going to add the averaging to this line:

p._grad = Variable(shared_grad_buffers.grads[n+'_grad'])

Or have you already added it somewhere that I did not find? Thanks.

@yusukeurakami Sorry, I thought you were the author of the code. :) By the way, is the training working fine after applying the division?

@yusukeurakami

@weicheng113 No problem. I replied to you because I was stuck at the same place. I don't have enough data points to compare the results yet, so I still need to run more. I will update with my results when I get them.

@weicheng113

@yusukeurakami Thanks a lot.

@yusukeurakami

@weicheng113 I've run my training with 7 workers in total, so with averaging the gradients are divided by 7 on every update. However, from the results, both the averaged and non-averaged versions converged to the same values in almost the same number of update steps. I don't really understand why it behaves the same even though the parameter updates were 7 times smaller...
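
One possible explanation, assuming the shared optimizer is Adam: Adam's per-parameter step is roughly m_hat / (sqrt(v_hat) + eps), and both m_hat and sqrt(v_hat) scale linearly with the gradient, so rescaling every gradient by a constant such as 1/7 barely changes the effective step size. A tiny standalone check (illustrative, not from the repo):

```python
import torch

# With Adam, dividing all gradients by a constant factor leaves the parameter
# trajectory almost unchanged, because both moment estimates scale with it.
for scale in (1.0, 1.0 / 7.0):
    p = torch.nn.Parameter(torch.ones(3))
    opt = torch.optim.Adam([p], lr=1e-3)
    g = torch.tensor([0.5, -1.0, 2.0])
    for _ in range(100):
        opt.zero_grad()
        p.grad = g * scale
        opt.step()
    print(scale, p.data)  # both runs end at nearly the same parameters
```

With plain SGD, by contrast, the division by 7 would act exactly like a 7x smaller learning rate, and the two runs would differ.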

@weicheng113

@yusukeurakami Thanks for sharing these findings. I don't understand it either. My gut feeling is that averaging would make the updates steadier, with smaller steps. Could it be that the env you are trying to solve is simple enough that it cannot tell the difference?

@yusukeurakami

@weicheng113 I am running a robot arm with 7 joints in continuous action and state spaces (an original MuJoCo environment). It should be complex enough.

@weicheng113

@yusukeurakami Ok, thanks.
