[RLlib] APPO accelerate vol 02: Bug fix for > 1 Learner actor. #49849
Conversation
Signed-off-by: sven1977 <[email protected]>
LGTM. Working via the placement group is the way to go for letting Ray schedule workers and learners as intended. There might be a problem when running multi-node, though - specifically if individual nodes are (as is usual when scaling out) too small to hold the complete placement group. Maybe we should test this.
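For reference, here is a minimal sketch (not the RLlib code from this PR; bundle shapes and the `Learner`/`AggregatorActor` classes are illustrative assumptions) of how a placement group with a PACK strategy lets Ray co-locate a learner with its aggregator actors:

```python
import ray
from ray.util.placement_group import placement_group
from ray.util.scheduling_strategies import PlacementGroupSchedulingStrategy

ray.init()

# One GPU bundle for the learner plus CPU-only bundles for two aggregator
# actors (illustrative resource shapes; assumes a node with at least 1 GPU).
bundles = [{"GPU": 1, "CPU": 1}, {"CPU": 1}, {"CPU": 1}]
pg = placement_group(bundles, strategy="PACK")
ray.get(pg.ready())


@ray.remote(num_gpus=1)
class Learner:
    def ping(self):
        return "learner up"


@ray.remote(num_cpus=1)
class AggregatorActor:
    def ping(self):
        return "aggregator up"


# Scheduling both actor types through the same placement group keeps them
# together; PACK tries to put all bundles on one node but may spill over.
learner = Learner.options(
    scheduling_strategy=PlacementGroupSchedulingStrategy(
        placement_group=pg, placement_group_bundle_index=0
    )
).remote()
aggregators = [
    AggregatorActor.options(
        scheduling_strategy=PlacementGroupSchedulingStrategy(
            placement_group=pg, placement_group_bundle_index=i + 1
        )
    ).remote()
    for i in range(2)
]
print(ray.get([learner.ping.remote()] + [a.ping.remote() for a in aggregators]))
```

Note that PACK is best-effort, whereas STRICT_PACK requires the whole group to fit on a single node - which is exactly the multi-node sizing concern raised here.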
```diff
 self._aggregator_actor_to_learner = {}
-for agg_idx, aggregator_location in aggregator_locations:
+for agg_idx, aggregator_location in enumerate(aggregator_locations):
+    for learner_idx, learner_location in enumerate(learner_locations):
```
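To illustrate what such a mapping could look like, here is a hypothetical stand-alone version (the helper name, the shape of the location lists, and the matching rule are all assumptions, not the code in this PR):

```python
def map_aggregators_to_learners(aggregator_locations, learner_locations):
    # Hypothetical helper: `*_locations` are assumed to be node IDs, one per
    # actor. Each aggregator gets assigned to a learner on the same node so
    # that batches stay node-local.
    aggregator_to_learner = {}
    for agg_idx, aggregator_location in enumerate(aggregator_locations):
        for learner_idx, learner_location in enumerate(learner_locations):
            if aggregator_location == learner_location:
                # Prefer a co-located learner; stop at the first match.
                aggregator_to_learner[agg_idx] = learner_idx
                break
    return aggregator_to_learner


# Example: 4 aggregators across 2 nodes, 2 learners (one per node).
print(map_aggregators_to_learners(
    ["node_a", "node_a", "node_b", "node_b"], ["node_a", "node_b"]
))
# -> {0: 0, 1: 0, 2: 1, 3: 1}
```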
My guess is that this does not work as intended on a multi-node cluster, because you want the learners/workers to then be placed by Ray on the same nodes. As soon as the number of learners times the number of GPUs per learner becomes greater than the number of GPUs per node, the placement group might not be schedulable anywhere.
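As a concrete illustration of that failure mode (the cluster and bundle sizes below are assumed, not taken from this PR), a STRICT_PACK group asking for more GPUs than any single node offers never becomes ready:

```python
import ray
from ray.util.placement_group import placement_group

ray.init()

# Suppose each node has 4 GPUs, but we request 2 learners x 4 GPUs each in a
# single STRICT_PACK group (all bundles on one node): 8 GPUs can never fit on
# any node, so the group stays pending forever.
pg = placement_group([{"GPU": 4}, {"GPU": 4}], strategy="STRICT_PACK")
ready, _ = ray.wait([pg.ready()], timeout=10.0)
if not ready:
    print("Placement group could not be scheduled within 10s "
          "(likely infeasible on this cluster).")
```

A PACK or SPREAD strategy, or one placement group per node, avoids the hard single-node constraint at the cost of possible cross-node traffic.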
Yeah, I'm still trying to figure this one out. This is still in a hacky state and needs more testing (especially with >1 Learners and >0 agg actors per Learner) before I'll merge this. Thanks for your help thus far.
Signed-off-by: sven1977 <[email protected]>
APPO accelerate vol 02: Bug fix for > 1 Learner actor.
Why are these changes needed?
The device detection on the Aggregation actors was faulty, leading to all aggregation actors being placed on cuda:0.
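For context on that kind of fix, here is a minimal sketch of a GPU actor deriving its device from what Ray actually assigned rather than from a global index (the actor class, resource request, and device logic below are assumptions; the real RLlib aggregation actor code differs):

```python
import ray
import torch


@ray.remote(num_gpus=1)
class AggregatorActor:
    """Illustrative actor only; not RLlib's real aggregation actor."""

    def __init__(self):
        # Ray restricts CUDA_VISIBLE_DEVICES to the GPU(s) assigned to this
        # actor, so the assigned GPU shows up as local index 0 here. Using a
        # global GPU id (e.g. torch.device(f"cuda:{ray.get_gpu_ids()[0]}"))
        # would point at the wrong, or a non-existent, local device.
        assigned = ray.get_gpu_ids()
        if assigned and torch.cuda.is_available():
            self.device = torch.device("cuda:0")
        else:
            self.device = torch.device("cpu")

    def get_device(self) -> str:
        return f"assigned GPU ids={ray.get_gpu_ids()}, device={self.device}"


# Usage (on a cluster with GPUs):
#   actors = [AggregatorActor.remote() for _ in range(2)]
#   print(ray.get([a.get_device.remote() for a in actors]))
```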
Related issue number
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.