
Network update #953

Merged: 99 commits into i210_dev, May 28, 2020
Conversation

eugenevinitsky (Member):

  • put in correct version of i210
  • put in correct version of straight road
  • better logging in train.py
  • curricula in i210 env

eugenevinitsky and others added 30 commits March 28, 2020 02:06
Added code to output json. Added code to resolve macOS matplotlib imp…
* deleting unworking params from SumoChangeLaneParams

* deleted unworking params, sublane working in highway

* moved imports inside functions

* Apply suggestions from code review

* bug fixes

* bug fix

Co-authored-by: Aboudy Kreidieh <[email protected]>
* added bando model

* added ghost edge to the highway network

* added highway-single example

* bug fixes

* more tests
* Add the appropriate reward to the grid benchmark back

* Put the bottleneck in a congested regime

* Bump bottleneck inflows to put it in the congested regime
* added function to kernel/vehicle to get number of not-yet-departed vehicles

* fixed over indentation of the docstring

* indentation edit

* pep8

Co-authored-by: AboudyKreidieh <[email protected]>
* changed _departed_ids, and _arrived_ids in the update function

* fixed bug in get_departed_ids and get_arrived_ids
@kjang96 (Member) left a comment:
Awesome! Lots of good stuff in here. This is part 1. I made comments on the code but haven't run anything yet to verify. Also haven't looked through centralized_PPO and custom_ppo.

Resolved (outdated) review threads:
  examples/exp_configs/rl/multiagent/multiagent_i210.py
  flow/envs/multiagent/i210.py (5 threads)
Comment on lines 117 to 118
    else:
        accel = min(max(accel, -self.max_deaccel), self.max_accel)
Collaborator:
This is already built into the Flow failsafes, so if you want it, use fail_safe='feasible_accel'.
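For reference, the clipping under review can be written as a standalone helper. The function name below is illustrative; in Flow itself the equivalent behavior comes from passing fail_safe='feasible_accel' rather than hand-rolling the bound:

```python
def feasible_accel(accel, max_accel, max_deaccel):
    """Clip a requested acceleration to the feasible range [-max_deaccel, max_accel].

    Mirrors the reviewed line:
        accel = min(max(accel, -self.max_deaccel), self.max_accel)
    Flow exposes the same bound via fail_safe='feasible_accel', which is why
    duplicating it inside the environment is redundant.
    """
    return min(max(accel, -max_deaccel), max_accel)
```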

eugenevinitsky (Member, Author):
thanks! removing

@kjang96 (Member) left a comment:
Changes look good!

@kjang96 (Member) left a comment:
Some imports don't seem to be working. Try python train.py multiagent_i210 test or multiagent_straight_road to see.

@@ -10,7 +10,8 @@
 from flow.envs.multiagent.traffic_light_grid import MultiTrafficLightGridPOEnv
 from flow.envs.multiagent.highway import MultiAgentHighwayPOEnv
 from flow.envs.multiagent.merge import MultiAgentMergePOEnv
-from flow.envs.multiagent.i210 import I210MultiEnv, MultiStraightRoad
+from flow.envs.multiagent.i210 import I210MultiEnv, MultiStraightRoad, I210MADDPGMultiEnv, MultiStraightRoadMADDPG
Member:
The I210MADDPGMultiEnv import doesn't work. Try running python train.py multiagent_i210 test to verify.
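One quick way to diagnose a failing import like this, without crashing the whole module at import time, is a guarded lookup. The optional_import helper below is hypothetical, not part of Flow:

```python
import importlib

def optional_import(module_path, name):
    """Return the attribute `name` from `module_path`, or None if either is missing.

    Handy for checking whether a class such as I210MADDPGMultiEnv actually
    exists before wiring it into an import line like the one in this diff.
    """
    try:
        module = importlib.import_module(module_path)
    except ImportError:
        return None  # the module itself is not importable
    return getattr(module, name, None)  # None if the attribute is absent
```

Calling optional_import('flow.envs.multiagent.i210', 'I210MADDPGMultiEnv') returns None rather than raising when the class was never defined, which narrows the failure down to a missing definition versus a broken module.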

from ray.rllib.evaluation.postprocessing import compute_advantages, \
    Postprocessing
from ray.rllib.policy.sample_batch import SampleBatch
from ray.rllib.policy.tf_policy import LearningRateSchedule, \
Member:
The ACTION_LOGP import fails for me.
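If ACTION_LOGP moved or disappeared in the installed Ray release, a compatibility shim keeps the custom PPO file importable. The fallback string "action_logp" is an assumption about the constant's value and should be verified against the Ray version in use:

```python
# ACTION_LOGP has not lived in the same module across Ray releases,
# so guard the import instead of assuming a fixed location.
try:
    from ray.rllib.policy.policy import ACTION_LOGP  # one historical location; may fail
except ImportError:
    # Assumed fallback: the literal SampleBatch key. Verify for your Ray version.
    ACTION_LOGP = "action_logp"
```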

@kjang96 (Member) left a comment:
KeyError issue when ON_RAMP=True

@eugenevinitsky merged commit f7a278c into i210_dev on May 28, 2020
Labels: none
Projects: none
Participants: 7