
[Question] Obtaining numerical state for an agent #29

Open
Rattko opened this issue Oct 11, 2023 · 2 comments

Comments

@Rattko

Rattko commented Oct 11, 2023

Hey, I've been working with LOCM for the past couple of weeks and I am at the point now where I would like to use my trained agents in the environment. I have subclassed the Agent abstract class, defined the necessary methods and created the environment as

```python
env = gym.make(
    'LOCM-battle-v0', version='1.5',
    deck_building_agents=[my_draft_agent, opponent],
    battle_agent=opponent,
    reward_functions=['win-loss'],
    reward_weights=[1.0]
)
```

Now, the problem is that during initialisation, my_draft_agent is used for the draft phase, but the agent receives a state of type gym_locm.engine.game_state.State, which is unsuitable as input to the neural network used within the agent. Is there any way I can obtain a numerical representation of the state, such as the one returned by env.step?

From what I gathered looking at the source code, State does not have any method that returns a numerical representation of the state. The only place where I found such a method is in the LOCMEnv class, which I, unfortunately, cannot access from the agent during the draft phase, I believe. Is there any other way? Thanks!

@Rattko
Author

Rattko commented Nov 6, 2023

Hey, it's me again. I've found a working solution so I'll post it here, in case anyone else gets stuck on this. Moreover, I would like to hear your (@ronaldosvieira) opinion on this solution.

Since Agent.act receives a state of type gym_locm.engine.game_state.State and I need a numerical representation of it, I've decided to maintain an instance of the environment we are in, e.g. self._draft_env = LOCMConstructedSingleEnv(), within the agent itself. At the beginning of each call to Agent.act, I set self._draft_env.state = state, which allows me to call self._draft_env.encode_state() to get the numerical representation I need.

This solution appears to return the same numerical representation as calling env.encode_state() on the original environment, which is unavailable to the agent directly.
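The workaround above can be sketched as follows. The gym_locm names (Agent, State, LOCMConstructedSingleEnv, encode_state) come from this thread; the stub classes here are hypothetical stand-ins so the pattern can be shown self-contained, without gym_locm installed.

```python
class State:
    """Hypothetical stand-in for gym_locm.engine.game_state.State."""
    def __init__(self, cards):
        self.cards = cards


class LOCMConstructedSingleEnv:
    """Stand-in env exposing only the parts the workaround relies on."""
    def __init__(self):
        self.state = None

    def encode_state(self):
        # The real method turns self.state into a numeric observation;
        # this stub just maps each card to a float for illustration.
        return [float(c) for c in self.state.cards]


class Agent:
    """Stand-in for gym_locm's abstract Agent class."""
    def act(self, state):
        raise NotImplementedError


class MyDraftAgent(Agent):
    def __init__(self):
        # Keep a private env instance purely to reuse its encoder.
        self._draft_env = LOCMConstructedSingleEnv()

    def act(self, state):
        # Point the private env at the current state, then encode it.
        self._draft_env.state = state
        encoded = self._draft_env.encode_state()
        # ...feed `encoded` to the neural network and pick an action...
        return encoded


agent = MyDraftAgent()
print(agent.act(State([1, 2, 3])))  # -> [1.0, 2.0, 3.0]
```

The key point is that the private env is never stepped; it only serves as a host for encode_state().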

Let me know your thoughts on this!

@ronaldosvieira
Owner

Hi @Rattko! Sorry for not answering earlier - somehow I didn't get notified.

What you brought up is indeed an issue. The LOCMEnv class has methods that transform a State object into a numerical representation, but they really should live in a separate package, be passed as a parameter to LOCMEnv, and be accessible to anyone who wants to use them independently.

Your suggestion is precisely what I would do as a workaround. An alternative would be copying the methods inside your Agent implementation. I'll leave this issue open so that I can eventually implement this for the next release.
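The refactor suggested above could look roughly like this: the encoding logic becomes a standalone function that is injected into the env as a parameter, so an Agent can call the same encoder directly without holding an env instance. All names here are illustrative, not gym_locm's actual API.

```python
class State:
    """Hypothetical stand-in for gym_locm's State."""
    def __init__(self, cards):
        self.cards = cards


def encode_battle_state(state):
    """Standalone encoder: State -> flat numeric list (illustrative)."""
    return [float(c) for c in state.cards]


class LOCMEnv:
    """Sketch of an env that receives its encoder as a parameter."""
    def __init__(self, state_encoder=encode_battle_state):
        self._encode = state_encoder  # injected, not hard-coded
        self.state = None

    def encode_state(self):
        return self._encode(self.state)


# An agent can now reuse the exact same encoder without an env:
print(encode_battle_state(State([4, 5])))  # -> [4.0, 5.0]

# ...and the env produces the identical representation:
env = LOCMEnv()
env.state = State([4, 5])
print(env.encode_state())  # -> [4.0, 5.0]
```

Injecting the encoder keeps the env and the agents in agreement by construction, since both call the same function.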
