Implement Search & Rescue Multi-Agent Environment #259
base: main
Conversation
* Initial prototype
* feat: Add environment tests
* fix: Update esquilax version to fix type issues
* docs: Add docstrings
* docs: Add docstrings
* test: Test multiple reward types
* test: Add smoke tests and add max-steps check
* feat: Implement pred-prey environment viewer
* refactor: Pull out common viewer functionality
* test: Add reward and view tests
* test: Add rendering tests and add test docstrings
* docs: Add predator-prey environment documentation page
* docs: Cleanup docstrings
* docs: Cleanup docstrings
Here you go @sash-a, this is correct now. Will take a look at the contributor license and CI failure now.
I think the CI issue is the Python version I've set for Esquilax.
The Python version PR is merged now, so hopefully it will pass 😄 Should have time during the week to review this, really appreciate the contribution!
An initial review with some high level comments about jumanji conventions. Will go through it more in depth once these are addressed. In general it's looking really nice and well documented!
Not quite sure on the new swarms package, but also not sure where else we would put it. Not sure on it especially if we only have 1 env and no new ones planned.
One thing I don't quite understand is the benefit of `amap` over `vmap`, specifically in the case of this env?
Please @ me when it's ready for another review or if you have any questions.
As for your questions in the description:
- Nope, just the environment is fine.
- Please do add animation, it's a great help.
- We do want defaults, I think we can discuss what makes sense.
- It's generated with mkdocs, we need an entry in the mkdocs config.

One big thing I've realized this is missing after my review is training code. We like to validate that the env works. I'm not 100% sure if this is possible because the env has two teams, so which reward do you optimize? Maybe training with a simple heuristic, e.g. you are the predator and the prey moves randomly? For examples see the training code for the other envs.
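To make the heuristic-opponent suggestion concrete, here is a minimal sketch of a random prey policy in JAX; the function name, array shapes, and heading-based movement are illustrative assumptions, not part of the env or jumanji's training code:

```python
import jax
import jax.numpy as jnp


def random_prey_policy(key: jax.Array, headings: jax.Array, max_rotate: float) -> jax.Array:
    """Randomly perturb prey headings each step, standing in for a learned policy."""
    turns = jax.random.uniform(key, headings.shape, minval=-max_rotate, maxval=max_rotate)
    return (headings + turns) % (2.0 * jnp.pi)


# Example: 16 prey agents turning by at most 0.1 radians per step.
key = jax.random.PRNGKey(0)
headings = jnp.zeros((16,))
new_headings = random_prey_policy(key, headings, max_rotate=0.1)
```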
* refactor: Formatting fixes
* fix: Implement rewards as class
* refactor: Implement observation as NamedTuple
* refactor: Implement initial state generator
* docs: Update docstrings
* refactor: Add env animate method
* docs: Link env into API docs
Hi @sash-a, just merged changes that I think address all the comments.
Could you have something like a
Yeah in a couple cases using it is overkill, a hang-over from when I was writing this example with an esquilax demo in mind! Makes sense to use `vmap` in those cases.
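For context, a small sketch of the plain `jax.vmap` alternative being discussed, using a made-up per-agent update rather than the env's actual code:

```python
import jax
import jax.numpy as jnp


def update_agent(pos: jax.Array, vel: jax.Array) -> jax.Array:
    # Toy per-agent update: step the position by the velocity and wrap to the unit square.
    return (pos + vel) % 1.0


# vmap maps the single-agent update over the leading (agent) axis.
positions = jnp.zeros((8, 2))
velocities = jnp.full((8, 2), 0.01)
new_positions = jax.vmap(update_agent)(positions, velocities)
```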
I'll look at adding something to training next. I think random prey with trained predators makes sense, will look to implement.
If you can add more that would be great! Then I'm happy to keep the swarm package as is. What we'd be most interested in is some kind of env with only 1 team and strictly co-operative, like predators vs heuristic prey or vice versa, not sure if you planned to make any envs like this?

But I had a quick look at the changes and it mostly looks great! Will leave an in-depth review later today/tomorrow 😄

Also I updated the CI yesterday, we're now using ruff, so you will need to update your pre-commit hooks.
One other thing: the only reason I've been hesitant to add this to Jumanji is that it's not that related to industry problems, which is a common focus across all the envs. I was thinking maybe we could re-frame the env from predator-prey to something else (without changing any code, just changing the idea). I was thinking maybe a continuous cleaner where your target position is changing, or something to do with drones (maybe delivery). Do you have any other ideas and would you be happy with this?
Yeah I was very interested in developing envs for co-operative multi-agent RL so was keen to design or implement more environments along these lines. There's a simpler version of this environment which is just the flock, i.e. where the agents move in a co-ordinated way without colliding. I've also seen an environment where the agents have to effectively cover an area that I was going to look at.
How do I do this? I did try reinstalling pre-commit, but it raised an error that the config was invalid?
Yeah definitely open to suggestions. I was thinking more in the abstract for this (will the agents develop some collective behaviour to avoid predators) but happy to modify towards something more concrete.
Great to hear on the co-operative MARL front, those both sound like nice envs to have.
Couple things to try:
If this doesn't work check
Agreed it would be nice to keep it abstract for the sake of research, but I think it's nice that this env suite is all industry focused. I quite like something to do with drones - seems quite industry focused, although we must definitely avoid anything to do with war. I'll give it a think.
Hi @sash-a, fixed the formatting and consolidated the predator-prey type.
Thanks, I'll try to have a look tomorrow, sorry the previous 2 days were a bit busier than expected. For the theme I think maritime search and rescue works well. It's relatively real-world and fits the current dynamics.
Thanks, no worries. Actually yeah, funnily enough a co-ordinated search was something I'd been looking into. Yeah, we could have one set of agents with some drift and random movement that need to be found inside the simulated region.
Sorry, still didn't have time to review today and Mondays are usually super busy for me, but I'll get to this next week! As for the theme, do you think we should then change the dynamics a bit and make the prey heuristically controlled so they move sort of randomly?
No worries, sure I'll do a revision this weekend!
* feat: Prototype search and rescue environment
* test: Add additional tests
* docs: Update docs
* refactor: Update target plot color based on status
* refactor: Formatting and fix remaining typos
Hi @sash-a, this turned into a larger rewrite (sorry for the extra review work, let me know if you want me to close this PR and just start with a fresh one) but I think it's a more realistic scenario.
A couple choices we may want to consider:
Thanks for this @zombie-einstein I'll start having a look now 😄
awesome!
Agreed I think we should actually hide targets once they are located so as to not confuse other agents.
I think individual rewards are fine and users can sum them externally if they want, e.g. we do this in Mava for Connector.
Not quite following what you mean here. I would say an agent should observe all agents and targets (that have not yet been rescued) within their local view.
Maybe add this as an optional reward type, I think I prefer 1 if a target is saved and 0 otherwise - it makes the env quite hard, but we should test what works best (a small sketch of both reward options follows below).
Definitely!
We don't have a convention for this. I wouldn't add remaining steps to the obs directly, I don't see why the algorithm would need that, although again it needs to be tested. Agreed with remaining targets, it makes sense to observe that. I think normalised floats make sense.
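A minimal sketch of the two reward options discussed above — per-agent rewards that users can sum externally, and the sparse 1-if-found variant; the array shapes and function names are assumptions for illustration, not the env's actual reward classes:

```python
import jax.numpy as jnp


def individual_rewards(found: jnp.ndarray) -> jnp.ndarray:
    # found: (num_agents, num_targets) boolean array of targets each agent located this step.
    # One reward per agent: 1 for every target it found this step, 0 otherwise.
    return jnp.sum(found, axis=1).astype(jnp.float32)


def shared_reward(found: jnp.ndarray) -> jnp.ndarray:
    # The summed/team variant users could compute externally from the per-agent rewards.
    return jnp.sum(individual_rewards(found))
```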
Amazing job with this rewrite, haven't had time to fully look at everything but it does look great so far!
Some high level things:
- Please add a generator, dynamics and viewer test (see examples of the viewer test for other envs)
- Can you also add tests for the common/updates
- Can you start looking into the networks and testing for jumanji
Sorry these tasks are a bit tedious, but I really like the env we've landed on 😄
distance to the other agent.
- `target_remaining`: float in the range [0, 1]. The normalised number of targets remaining to be detected (i.e. 1.0 when no targets have been found).
- `time_remaining`: float in the range [0, 1]. The normalised number of steps remaining
I would make this a step count instead, similar to other envs.
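Roughly, the observation could then look like the sketch below; the field names are partly taken from the docs above and partly assumed, so treat this as illustrative rather than the final structure:

```python
from typing import NamedTuple

import chex


class Observation(NamedTuple):
    searcher_views: chex.Array     # (num_searchers, num_vision) local views of nearby agents
    targets_remaining: chex.Array  # () float in [0, 1], fraction of targets still to be found
    step_count: chex.Array         # () int32 steps elapsed, replacing normalised remaining time
```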
Thanks @sash-a, just a couple follow ups to your questions:
So I was picturing (and as currently implemented) a situation where the searchers have to come quite close to the targets to "find" them (as if they are obscured/hard to find), but the agents have a larger vision range within which they can see the location of other searcher agents (to allow them to improve search patterns, for example). My feeling was that this creates more of a search task, whereas if the targets are part of their larger vision range it feels like it could become more of a routing-type task. I then thought it may be good to include found targets in the vision to allow agents to visualise the density of located targets.
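A compact sketch of that two-radius setup, with targets detected only within the small contact range while other searchers are visible at the larger vision range; the shapes, names, and the non-wrapped distances are simplifications for illustration:

```python
import jax.numpy as jnp


def detections(
    searchers: jnp.ndarray,  # (n, 2) searcher positions
    targets: jnp.ndarray,    # (m, 2) target positions
    contact_range: float,
    vision_range: float,
):
    # Pairwise distances (ignoring the wrapped/periodic space for simplicity).
    d_targets = jnp.linalg.norm(searchers[:, None, :] - targets[None, :, :], axis=-1)
    d_searchers = jnp.linalg.norm(searchers[:, None, :] - searchers[None, :, :], axis=-1)
    found = d_targets < contact_range      # targets only count as found when very close
    visible = d_searchers < vision_range   # other searchers are seen at the longer range
    return found, visible
```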
I thought that, treating it as a time-sensitive task, some indication of the remaining time to find targets could be a useful feature of the observation.
Yup will do!
* refactor: Rename to targets_remaining
* docs: Formatting and expand docs
* refactor: Move target and reward checks into utils module
* fix: Set agent and target numbers via generator
* refactor: Terminate episode if all targets found
* test: Add swarms.common tests
* refactor: Move agent initialisation into generator
* test: Add environment utility tests
I think this is great!
I see the thinking here, but I'm not even sure it's that beneficial because targets can move, right?

One thing I'm a bit concerned about: as you increase the number of agents, will the problem not get easier? I don't see an option to increase the world size, so as the number of agents increases the density of searchers increases, making it easier to find targets. Is there a way we could increase the world size, or another way to avoid this issue?

Sorry, I've been quite busy this last week, but I should have a lot more time next week to dedicate to this review 😄
Hey @sash-a, no worries, still got stuff to get on with.
Yeah this is correct. I guess it depends on the target dynamics. For something simple like noisy movement with some drift it could help with identifying the drift and areas of low density that have not been searched yet? Kind of like the agents are communicating what they've found / have some memory.
Yeah it would. The region is fixed in Esquilax to the unit square, mainly just to reduce the number of parameters used in describing the interaction between agents (and to make my life a bit easier 😂), but it could be something to add into the library. The way to avoid this here would be scaling other parameters, e.g. scaling the vision range and speed range of agents, the only issue possibly being numerical accuracy.

For the network, is there a built-in way to do multi-agent training? If not, I guess the most straightforward way to get it working would be to just have a single agent, and wrap some means of flattening the rewards?
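To illustrate the scaling argument, a tiny sketch that keeps searcher density roughly constant on the fixed unit square by shrinking per-agent ranges as the team grows; the baseline numbers are placeholders, not values from the env:

```python
import math


def scaled_ranges(
    num_agents: int,
    base_agents: int = 8,       # reference team size (placeholder)
    base_vision: float = 0.1,   # vision range at the reference size (placeholder)
    base_speed: float = 0.02,   # speed range at the reference size (placeholder)
) -> tuple[float, float]:
    # Growing the world area in proportion to the team size is equivalent, on a
    # fixed unit square, to shrinking lengths by sqrt(num_agents / base_agents).
    factor = math.sqrt(base_agents / num_agents)
    return base_vision * factor, base_speed * factor
```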
True, maybe we can have it be an option? Although might get messy to define observation shapes
I see, so how many agents do you think it could scale to before it gets too crowded or too numerically unstable?
That's exactly what we do. We wrap things in the multi-to-single wrapper and then treat it as a single-agent problem. See how we do the learning for LBF and Connector, and this if statement in the trainer setup.
Finally had time to go through this, it's looking really great. Some optimization suggestions and minor nitpicks.
I want to give the documentation a once-over, but once you've addressed all these and added networks so we can see learning curves, I'm happy to merge this 😄
searcher_vision_range: float,
target_contact_range: float,
num_vision: int,
agent_radius: float, | ||
searcher_max_rotate: float, | ||
searcher_max_accelerate: float, | ||
searcher_min_speed: float, | ||
searcher_max_speed: float, | ||
searcher_view_angle: float, |
Need to decide on sensible defaults for all of these
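To make this concrete, one possible shape for the defaults as a sketch; the values here are illustrative guesses on the unit square, not the defaults eventually chosen:

```python
from dataclasses import dataclass


@dataclass
class SearchAndRescueDefaults:
    """Illustrative placeholder values, not the env's actual defaults."""

    searcher_vision_range: float = 0.1
    target_contact_range: float = 0.02
    num_vision: int = 32
    agent_radius: float = 0.01
    searcher_max_rotate: float = 0.25
    searcher_max_accelerate: float = 0.005
    searcher_min_speed: float = 0.01
    searcher_max_speed: float = 0.05
    searcher_view_angle: float = 0.5
```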
* refactor: Set -1.0 as default view value
* refactor: Restructure tests
* refactor: Pull out common functionality and fix formatting
* refactor: Better function names
Yeah, it could be an optional thing, or we could just include it by default and the user can omit it? The only issue might be the impact of additional computation that may go unused?
I just added this functionality to Esquilax and pulled the changes into this PR, so the user can control the size of the space.

Do you mind if I resolve some of these comments? I was not sure if you wanted to use them for tracking, but it'd be handy if I could clear up outdated or implemented ones to see what remains outstanding.
Ye not sure, let's leave it for now and we can come back to it later if we feel it makes the problem too easy or is unnecessary
That's awesome! 🔥
Ye go for it, if it's done then please resolve 😄
* Set plot range in viewer only
* Detect targets in a single pass
Add a multi-agent search and rescue environment where a set of agents has to locate moving targets on a 2d space.
Changes
- Adds a `swarm` environment group/type (was not sure the new environment fit into an existing group, but happy to move it if you think it would fit better somewhere else)

Todo

Questions
- `jumanji.environments`: do types also need forwarding somewhere?
- Didn't add an `animate` method to the environment, but saw that some others do? Easy enough to add.