RL ActionSpace
#1859
-
Well, one workaround is to produce an action that includes a "tag" indicating which action type it is, along with a value of X up to some level of precision. But supporting multiple action space types seems like a much better approach. It's great that you are interested in helping. If you want to share a proposal (for this or anything else), you can put it in an issue, or you can reach out to me on Slack to set up a meeting.
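For illustration, here is a minimal Python sketch of that workaround, where a single discrete action id encodes both the action-type tag and a discretized value. The action names, bucket count, and value range are placeholders, not taken from any existing API:

```python
# A minimal sketch of the "tag + discretized value" workaround. The action
# names, bucket count, and value range below are placeholders.
ACTION_TYPES = [
    "ahead", "back", "fire",
    "turn_left", "turn_right",
    "turn_gun_left", "turn_gun_right",
    "turn_radar_left", "turn_radar_right",
]
NUM_BUCKETS = 36      # precision of X, e.g. 10-degree steps
MAX_VALUE = 360.0     # upper bound of X for every action type

# The flat discrete space then has len(ACTION_TYPES) * NUM_BUCKETS actions.
def decode(action_id: int) -> tuple:
    """Map one discrete action id back to (action type, value of X)."""
    tag, bucket = divmod(action_id, NUM_BUCKETS)
    value = (bucket + 1) / NUM_BUCKETS * MAX_VALUE
    return ACTION_TYPES[tag], value

print(decode(40))  # -> ('back', 50.0)
```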
-
I'm struggling a little bit with the ActionSpace for reinforcement learning.
These are the possible actions my robot can take ...
Ahead Distance X
Back Distance X
Fire Power X
Turn Left X Degrees
Turn Right X Degrees
Turn Gun Left X Degrees
Turn Gun Right X Degrees
Turn Radar Left X Degrees
Turn Radar Right X Degrees
So any ideas how I would implement the action space for this?
It seems the current implementation only supports a discrete action space, which makes it hard to implement an agent that can take multiple actions, or actions involving ranges (currently I would probably need to set up multiple agents, one for each action above, with a set of possible actions 0-360 for each).
Other APIs offer additional action space types, such as the bounded Box in OpenAI Gym or the nested array spec in TF-Agents.
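For comparison, this is roughly what those two look like in Python (assuming the gym and tf_agents packages are installed; the shape and bounds here are only examples):

```python
# Roughly what those two action space types look like (shape and bounds
# here are only examples).
import numpy as np
import gym
from tf_agents.specs import array_spec

# OpenAI Gym: a bounded box of 3 continuous values, each in [-360, 360].
box = gym.spaces.Box(low=-360.0, high=360.0, shape=(3,), dtype=np.float32)

# TF-Agents: the equivalent bounded array spec.
spec = array_spec.BoundedArraySpec(
    shape=(3,), dtype=np.float32, minimum=-360.0, maximum=360.0, name="action")
```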
Could we look at implementing other ActionSpace types beyond the discrete one that is currently supported? That would help with agents performing multiple actions like the ones above.
I think we could support a bounded box action space type fairly easily, and this would cover the actions I'm looking at above.
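As a rough sketch of what that could look like for the actions above (purely illustrative, folding the ahead/back and left/right pairs into signed values, with made-up bounds):

```python
# Illustrative only: one way the actions above could collapse into a single
# bounded box, using signed values for the ahead/back and left/right pairs.
# The bounds are made up, not taken from the game rules.
import numpy as np
import gym

action_space = gym.spaces.Box(
    low=np.array([-100.0, 0.0, -360.0, -360.0, -360.0], dtype=np.float32),
    high=np.array([100.0, 3.0, 360.0, 360.0, 360.0], dtype=np.float32),
    dtype=np.float32,
)
# Index 0: move distance (negative = back, positive = ahead)
# Index 1: fire power    (0 = do not fire)
# Index 2: body turn in degrees  (negative = right, positive = left)
# Index 3: gun turn in degrees
# Index 4: radar turn in degrees
```

The agent would then emit one 5-element vector per turn instead of choosing from a flat list of discrete actions.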
I can try to help with this but would need to discuss a proposal first.
Please let me know what you think.
Thanks