[RLlib] "APPO-accelerate" vol 01: Make AggregatorActors work with IMPALA/APPO. #49284
Conversation
…top (local CPU learner).
…r) on 1 local(!) GPU, 29 EnvRunners: python cartpole_impala.py --num-env-runners=29 --num-envs-per-env-runner=20 --stop-iters=10 --num-learners=0 --num-gpus-per-learner=0.98
…ny slowdowns w.r.t. runs without evaluation active; those might be due to the fact that we have two fewer env runners.
…an be reverted to their "normal" master versions (add states and to-numpy).
- Co-locate each agg-actor with exactly one Learner and keep the exact mapping on the Algorithm (to later match which GPU-batch ref should go to which Learner); see the sketch after this commit list.
- Formalize the config option to NOT build the learner connector on the Learner (because it's already built on the aggregator actors).
- Remove garbage code.
- Deprecate aggregation workers on the old API stack entirely.
…_accelerate (conflicts: rllib/algorithms/impala/impala_learner.py, rllib/connectors/learner/learner_connector_pipeline.py)
…_accelerate (conflicts: rllib/algorithms/algorithm_config.py)
…ot_finalize_episodes_sent_to_buffer (conflicts: rllib/utils/replay_buffers/episode_replay_buffer.py)
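The co-location change described in the commit list above can be illustrated with a minimal, hedged sketch. This is not the actual RLlib implementation: the Learner and AggregatorActor classes and the create_colocated_aggregators helper are hypothetical stand-ins, while NodeAffinitySchedulingStrategy and get_node_id() are real Ray APIs.

```python
import ray
from ray.util.scheduling_strategies import NodeAffinitySchedulingStrategy


@ray.remote
class Learner:
    """Hypothetical stand-in for an RLlib Learner actor."""

    def get_node_id(self) -> str:
        # Real Ray API: ID of the node this actor runs on.
        return ray.get_runtime_context().get_node_id()


@ray.remote
class AggregatorActor:
    """Hypothetical stand-in for an aggregator actor that pre-builds train batches."""

    def aggregate(self, episodes):
        # Placeholder: run the learner connector pipeline here and return a batch.
        return episodes


def create_colocated_aggregators(learners, num_aggregator_actors_per_learner=1):
    """Return {learner_index: [aggregator actors placed on that learner's node]}."""
    mapping = {}
    for idx, learner in enumerate(learners):
        node_id = ray.get(learner.get_node_id.remote())
        mapping[idx] = [
            AggregatorActor.options(
                scheduling_strategy=NodeAffinitySchedulingStrategy(
                    node_id=node_id, soft=False
                )
            ).remote()
            for _ in range(num_aggregator_actors_per_learner)
        ]
    return mapping


if __name__ == "__main__":
    ray.init()
    learners = [Learner.remote() for _ in range(2)]
    # Keep the mapping on the "algorithm" side so each GPU-batch ref can later be
    # routed back to exactly the learner its aggregator is co-located with.
    agg_map = create_colocated_aggregators(learners, num_aggregator_actors_per_learner=2)
    print({idx: len(actors) for idx, actors in agg_map.items()})
```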
LGTM. Already reviewed for OSS Ray. One small question about the value of num_aggregator_actors_per_learner, which could be None as it looks like in the code.
0,
(
    cf.num_gpus_per_learner
    - 0.01 * cf.num_aggregator_actors_per_learner
How does this not fail if cf.num_aggregator_actors_per_learner=None?
Because it's always some int; by default 0.
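For illustration only, here is a minimal sketch of why the subtraction in the snippet above cannot hit a None: the attribute is an int defaulting to 0. FakeConfig and gpu_fraction_for_learner are hypothetical names, not RLlib code.

```python
# Hypothetical sketch (not RLlib code): the config attribute is always an int
# (default 0), never None, so the arithmetic below never raises a TypeError.
from dataclasses import dataclass


@dataclass
class FakeConfig:
    # Mirrors the defaults discussed above: an int, not Optional[int].
    num_gpus_per_learner: float = 0.98
    num_aggregator_actors_per_learner: int = 0


def gpu_fraction_for_learner(cf: FakeConfig) -> float:
    # Reserve a small GPU slice (0.01 each) for co-located aggregator actors;
    # with the default of 0 aggregator actors this is just num_gpus_per_learner.
    return max(0.0, cf.num_gpus_per_learner - 0.01 * cf.num_aggregator_actors_per_learner)


print(gpu_fraction_for_learner(FakeConfig()))  # -> 0.98
print(gpu_fraction_for_learner(FakeConfig(num_aggregator_actors_per_learner=2)))  # -> ~0.96
```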
…ALA/APPO. (ray-project#49284) Signed-off-by: Puyuan Yao <[email protected]>
"APPO-accelerate" vol 01: Make AggregatorActors work with IMPALA/APPO.
New config option: num_aggregator_actors_per_learner (!).
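A hedged usage sketch, mirroring the benchmark command from the commit list above. It assumes num_aggregator_actors_per_learner is accepted by AlgorithmConfig.learners(); the exact placement of the option may differ in your Ray version.

```python
# Hedged sketch: enabling aggregator actors for IMPALA on the new API stack.
# Assumption: the new option is settable via AlgorithmConfig.learners();
# adjust if it lives elsewhere in your Ray version.
from ray.rllib.algorithms.impala import IMPALAConfig

config = (
    IMPALAConfig()
    .environment("CartPole-v1")
    .env_runners(num_env_runners=29, num_envs_per_env_runner=20)
    .learners(
        num_learners=0,              # 0 => use the local learner
        num_gpus_per_learner=0.98,   # leave a small GPU slice for co-located agg actors
        num_aggregator_actors_per_learner=2,
    )
)
algo = config.build()
for i in range(10):
    results = algo.train()
    # "env_runners"/"episode_return_mean" are the new-API-stack metric keys.
    print(f"iter={i} return_mean={results['env_runners'].get('episode_return_mean')}")
```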
Why are these changes needed?
Related issue number
Checks
- I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.