
The compute_loss function is wrong for the Simplest Policy Gradient #414

alantpetrescu opened this issue Jun 5, 2024 · 3 comments

alantpetrescu commented Jun 5, 2024

I have been reading the three parts of the "Introduction to RL" section, and I noticed in Part 3 that the compute_loss function for the Simplest Policy Gradient returns the mean of the product of the log-probabilities of the actions taken by the agent and the weights of those actions, i.e. the finite-horizon undiscounted returns of the episodes in which they were taken.

[image: the policy gradient estimator from Part 3 of the docs, which sums log-prob × return over each trajectory and divides by the number of trajectories]

In the policy gradient estimator above, the sum of products is divided by the number of trajectories, but in the implementation, returning the mean divides the sum of products by the total number of actions taken across all the trajectories in one epoch. Maybe I am misunderstanding this, but I wanted to get a clear picture of the implementation.

[image: the compute_loss implementation, which returns -(logp * weights).mean()]
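For reference, here is a minimal sketch of the function under discussion (my paraphrase, assuming the PyTorch version of the example code, where get_policy returns a torch.distributions.Categorical over actions):

```python
# Minimal sketch of the loss under discussion (assumed PyTorch example code;
# `get_policy` is taken to return a torch.distributions.Categorical).
def compute_loss(obs, act, weights):
    logp = get_policy(obs).log_prob(act)  # log pi(a_t | s_t) for every step in the batch
    # .mean() divides the sum by the total number of steps in the batch,
    # not by the number of trajectories as in the written estimator.
    return -(logp * weights).mean()
```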

@hirodeng

I noticed the same problem.


earnesdm commented Aug 25, 2024

@alantpetrescu I think you are correct that the equation written differs from what is implemented in code, but only by a constant multiple. Since we multiply the gradient estimate by the learning rate when performing gradient ascent, the constant multiple doesn't really matter.
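To illustrate the point (a toy example of my own, not part of the original comment): dividing a loss by a constant c divides its gradient by c, so a step with learning rate lr on the scaled loss equals a step with learning rate lr / c on the unscaled one.

```python
import torch

# Toy 1-parameter example (purely illustrative, not the policy gradient code).
theta = torch.tensor([1.0], requires_grad=True)
loss = (theta ** 2).sum()

(g,) = torch.autograd.grad(loss, theta, retain_graph=True)  # gradient of the loss
(g_scaled,) = torch.autograd.grad(loss / 10.0, theta)       # gradient of loss / c, with c = 10

print(g, g_scaled)  # tensor([2.]) and tensor([0.2000]): scaled by exactly 1/c
```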


burichh commented Jan 30, 2025

> @alantpetrescu I think you are correct that the equation written differs from what is implemented in code, but only by a constant multiple. Since we multiply the gradient estimate by the learning rate when performing gradient ascent, the constant multiple doesn't really matter.

I think the problem with this is that not all trajectories are of the same length. Some trajectories might be short (e.g. the game ends early), and some might be really long. This means that a single batch will most likely contain trajectories of different lengths.

Imagine the following. You sample a batch:

  1. First batch: For the sake of example, take the most extreme case: the batch contains only two trajectories, a very short one with just 1 observation and a long one with 1000 observations. In this case you should divide the sum by 2 (the number of trajectories), not by 1001 (the number of observations). But according to the current implementation, -(logp * weights).mean(), everything is divided by 1001.

Update according to the gradients, then sample another batch:

  2. Second batch: In a more even case, the batch contains dozens of trajectories, say 100, each finishing the game after 10 observations. In this case you should divide the sum by 100 (the number of trajectories), not by 1000 (the number of observations).

Update according to the gradients.

Now, in both batches the sum gets divided by roughly 1000, which could indeed be folded into the learning rate. But relative to the written estimator, which divides by the number of trajectories, the extra factor is the average episode length: about 500 in the first batch and 10 in the second. That factor will most likely change from batch to batch, which is equivalent to changing the learning rate on every batch, and that will surely cause a problem.
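To make that concrete with the numbers above (my own arithmetic, using the two hypothetical batches from this comment):

```python
# Ratio between the divisor the code uses (total number of steps) and the divisor
# in the written estimator (number of trajectories) -- i.e. the implicit rescaling
# of the learning rate for each batch.

steps_1, trajs_1 = 1 + 1000, 2     # first batch: one 1-step and one 1000-step trajectory
steps_2, trajs_2 = 100 * 10, 100   # second batch: 100 trajectories of 10 steps each

print(steps_1 / trajs_1)  # 500.5 -> effective learning rate shrunk ~500x
print(steps_2 / trajs_2)  # 10.0  -> effective learning rate shrunk 10x
```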

So @alantpetrescu I think this is still a bug. It might still work in practice, because if the episode lengths are roughly equal and do not vary much from batch to batch, the effect is barely observable and can be thought of as a rescaled learning rate. But I'm not sure we have such evenly sized episodes, or any guarantee of that whatsoever.
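A possible fix along these lines (my own sketch, not a tested patch; it assumes the training loop can pass in the number of trajectories collected for the batch) would be to sum over all steps and divide by that count:

```python
def compute_loss(obs, act, weights, num_trajectories):
    # Hypothetical variant: match the 1/|D| factor of the written estimator by
    # dividing the summed (log-prob * return) terms by the number of trajectories
    # in the batch, instead of averaging over the number of steps.
    logp = get_policy(obs).log_prob(act)
    return -(logp * weights).sum() / num_trajectories
```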
