
Fully observability #26

Closed
d3sm0 opened this issue Oct 19, 2018 · 4 comments

Comments


d3sm0 commented Oct 19, 2018

Hey,
I've run into two issues with the FullyObsWrapper that I hadn't anticipated:

  1. Used as-is, the wrapper renders an 800 x 800 x 3 image, which is quite heavy to manage.
  2. One must call env.render('human').

The combination of the two makes the env very slow.

Any suggestion on how I can improve on it?

Thank you !


maximecb commented Oct 19, 2018

Hmm, first thing is, you should be calling env.render('rgb_array') (which will return a numpy array), not env.render('human'), and this should be done within the wrapper's observation method.

For speed, the simplest thing to do would be to render a smaller version of the grid. But the most efficient way would be to encode the grid directly as a numpy array with 3 values per cell as I did for partial observability. See the encode method of the Grid class: https://github.com/maximecb/gym-minigrid/blob/master/gym_minigrid/minigrid.py#L508

The only downside of this is that it doesn't currently encode the agent position.


d3sm0 commented Oct 20, 2018

Thank you for the quick reply. Something like this should do the trick then:

    def observation(self, obs):
        full_grid = self.env.grid.encode()
        full_grid[self.env.agent_pos[0]][self.env.agent_pos[1]] = self.env.agent_dir
        return full_grid

@maximecb commented

Yes, like that, except you'd want to also encode that the agent is at that position, not just the agent direction. Something like:

    full_grid[x, y, 0] = 255
    full_grid[x, y, 1] = self.env.agent_dir
    full_grid[x, y, 2] = 0

You would also want to change the observation_space to have the correct shape.
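Putting the two suggestions above together, here is a minimal numpy-only sketch of the agent-cell overwrite. A stand-in zero array replaces `self.env.grid.encode()`, and the function name `mark_agent` is mine, not the library's; in the real wrapper you would also set `observation_space` to a `Box` of shape `(width, height, 3)`:

```python
import numpy as np

AGENT_TYPE = 255  # distinctive "agent" type value, per the comment above

def mark_agent(full_grid, agent_pos, agent_dir):
    """Overwrite the agent's cell with (type, direction, 0)."""
    x, y = agent_pos
    full_grid[x, y, 0] = AGENT_TYPE
    full_grid[x, y, 1] = agent_dir
    full_grid[x, y, 2] = 0
    return full_grid

# Stand-in for self.env.grid.encode(): an 8x8 grid, 3 values per cell.
grid = np.zeros((8, 8, 3), dtype=np.uint8)
obs = mark_agent(grid, agent_pos=(3, 4), agent_dir=2)
# obs[3, 4] is now [255, 2, 0]
```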

@Driesssens commented
@Driesssens commented

For anyone using this: I recommend encoding the agent as a much lower number that is closer to how the other objects are encoded.

While training on the Unlock environment converged in ~12 minutes with the regular egocentric view, training with the FullyObsWrapper never converged. At first I thought the egocentric view gives a translation invariance that the FullyObsWrapper lacks, so I tried to compensate by making the environment much simpler: removing colors, giving a small reward for picking up the key, and reducing the action space so that pickup and toggle became a single 'interact' action and drop was removed. Despite all of this, the model just wouldn't learn to go to the door after picking up the key.

Finally I changed the agent encoding from 255 to 9, and now training with the fully observable view converges as fast as with the egocentric view. Possibly the high value is too dominant in the convnet's processing.

PS. The current FullyObsWrapper also doesn't encode which item is being carried. In the egocentric view, this is encoded by showing the item as if it is at the agent's position, but the FullyObsWrapper overwrites this grid position to encode the agent. If color is not important, you can encode the carried object's type at the agent's position in the 3rd layer. If color is important, you will need to add a 4th layer to also include the carried object's color.
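A sketch of both points in this comment, assuming color is not needed: the agent is encoded with a low type value (9 instead of 255), and the carried object's type index goes in the cell's third channel. The function and constant names are mine, for illustration only:

```python
import numpy as np

AGENT_TYPE = 9  # low value, closer to the other object encodings
NOTHING = 0     # third channel when the agent carries nothing

def mark_agent_with_carry(full_grid, agent_pos, agent_dir, carried_type=NOTHING):
    """Overwrite the agent's cell with (agent type, direction, carried type)."""
    x, y = agent_pos
    full_grid[x, y, 0] = AGENT_TYPE
    full_grid[x, y, 1] = agent_dir
    full_grid[x, y, 2] = carried_type
    return full_grid

# Stand-in for the encoded grid; 5 is a hypothetical "key" type index.
grid = np.zeros((6, 6, 3), dtype=np.uint8)
obs = mark_agent_with_carry(grid, agent_pos=(1, 2), agent_dir=0, carried_type=5)
# obs[1, 2] is now [9, 0, 5]
```

If color does matter, the same idea extends to a fourth channel holding the carried object's color index.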
