Hey, I've been working with LOCM for the past couple of weeks, and I am now at the point where I would like to use my trained agents in the environment. I have subclassed the `Agent` abstract class, defined the necessary methods, and created the environment with my agent as the draft agent.
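Roughly like this (the agent class, its `model` parameter, the import paths, and the `draft_agent` keyword argument are illustrative rather than exact):

```python
from gym_locm.agents import Agent
from gym_locm.envs import LOCMConstructedSingleEnv

class MyDraftAgent(Agent):
    """Illustrative agent wrapping a trained neural network."""

    def __init__(self, model):
        self.model = model  # trained network, e.g. a PyTorch module

    def reset(self):
        pass

    def act(self, state):
        # state is a gym_locm.engine.game_state.State instance here
        ...

my_draft_agent = MyDraftAgent(model=...)

# Assuming the env accepts the draft agent via a keyword argument;
# the exact parameter name may differ.
env = LOCMConstructedSingleEnv(draft_agent=my_draft_agent)
```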
Now the problem is that, during initialisation, `my_draft_agent` is used for the draft phase, but the agent receives a `state` of type `gym_locm.engine.game_state.State`, which is unsuitable as input to the neural network used within the agent. Is there any way I can obtain a numerical representation of the state, such as the one returned by `env.step`?
From what I gathered looking at the source code, `State` does not have any method that would return a numerical representation of the state. The only place where I found such a method is the `LOCMEnv` class, which, I believe, I unfortunately cannot access from the agent during the draft phase. Is there any other way? Thanks!
Hey, it's me again. I've found a working solution, so I'll post it here in case anyone else gets stuck on this. I would also like to hear your (@ronaldosvieira) opinion on it.
As `Agent.act` receives a `state` of type `gym_locm.engine.game_state.State` and I need a numerical representation of the state, I've decided to maintain an instance of the environment we are in, e.g. `self._draft_env = LOCMConstructedSingleEnv()`, within the agent itself, and at the beginning of each call to `Agent.act` I do `self._draft_env.state = state`. This allows me to call `self._draft_env.encode_state()` to get the numerical representation I need.
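In code, the workaround looks roughly like this (the class scaffolding, the `model` parameter, and the import paths are illustrative; only the internal env, the `state` assignment, and the `encode_state()` call are the actual trick):

```python
from gym_locm.agents import Agent
from gym_locm.envs import LOCMConstructedSingleEnv

class MyDraftAgent(Agent):
    def __init__(self, model):
        self.model = model
        # Internal env instance, kept only for its state encoder.
        self._draft_env = LOCMConstructedSingleEnv()

    def reset(self):
        pass

    def act(self, state):
        # Point the internal env at the state we were given, then
        # reuse its encoder to obtain the numerical representation.
        self._draft_env.state = state
        encoded_state = self._draft_env.encode_state()
        # ... feed encoded_state to self.model and return an action ...
```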
This solution seems to return the same numerical representation as calling `env.encode_state()` on the original environment, which is not directly accessible to the agent.
Hi @Rattko! Sorry for not answering earlier - somehow I didn't get notified.
What you brought up is indeed an issue. The `LOCMEnv` class has methods to transform a `State` object into a numerical representation, but they really should live in a separate package, be passed as a parameter to `LOCMEnv`, and be accessible to anyone who wants to use them independently as well.
Your suggestion is precisely what I would do as a workaround. An alternative would be copying the methods into your `Agent` implementation. I'll leave this issue open so that I can eventually implement this for the next release.
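For illustration, the refactoring I have in mind would look something like the sketch below; the `StateEncoder` name and the `encoder` parameter are hypothetical, not the current API:

```python
from gym_locm.envs import LOCMConstructedSingleEnv

# Hypothetical design, not the current gym-locm API: a standalone
# encoder that the env and any agent can share.
class StateEncoder:
    def encode(self, state):
        """Turn a gym_locm.engine.game_state.State into numbers."""
        ...

encoder = StateEncoder()

# The env would receive the encoder as a parameter...
env = LOCMConstructedSingleEnv(encoder=encoder)

# ...and inside Agent.act, the same encoder could be called on the
# received state directly, with no env instance needed:
#     encoded_state = encoder.encode(state)
```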