
[RLlib] APPO accelerate vol 02: Bug fix for > 1 Learner actor. #49849

Merged

Conversation

Contributor

@sven1977 sven1977 commented Jan 15, 2025

APPO accelerate vol 02: Bug fix for > 1 Learner actor.

The device detection on the Aggregation actors was faulty, causing all aggregation actors to be placed on cuda:0.
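
For context, here is a minimal sketch of per-actor device detection in Ray (not the actual RLlib code; the helper name is hypothetical). Ray exposes the GPUs it assigned to an actor via ray.get_gpu_ids() and sets CUDA_VISIBLE_DEVICES for the actor process, so an assigned GPU shows up as local index 0 instead of every actor grabbing the global cuda:0:

    # Hypothetical helper, for illustration only (not the RLlib implementation).
    import ray
    import torch

    def detect_actor_device() -> torch.device:
        # GPUs Ray has assigned to the current actor/worker process. Ray also
        # sets CUDA_VISIBLE_DEVICES, so the assigned GPU is local index 0 here.
        gpu_ids = ray.get_gpu_ids()
        if gpu_ids and torch.cuda.is_available():
            return torch.device("cuda:0")
        return torch.device("cpu")  # No GPU assigned -> stay on CPU.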

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <[email protected]>
Signed-off-by: sven1977 <[email protected]>
Collaborator

@simonsays1980 simonsays1980 left a comment


LGTM. Working via the placement group is the way to go for letting Ray schedule workers and learners as intended. There might be a problem when running on multi-node, specifically if the nodes are (as is usual when scaling out) too small for the complete placement group. Maybe we should test this.
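
For reference, a minimal sketch of how such a placement group is created with Ray's public API; the bundle shapes and strategy below are illustrative assumptions, not what this PR actually configures:

    import ray
    from ray.util.placement_group import placement_group

    ray.init()

    # Assumed layout: one 1-GPU bundle per Learner plus one CPU-only bundle per
    # aggregation actor (numbers made up for illustration).
    bundles = [{"GPU": 1, "CPU": 1}] * 2 + [{"CPU": 1}] * 2

    # PACK prefers co-locating all bundles on one node; STRICT_PACK requires it
    # and cannot be scheduled if no single node can hold every bundle.
    pg = placement_group(bundles, strategy="PACK")
    ray.get(pg.ready())  # Blocks until the group has been scheduled.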

    self._aggregator_actor_to_learner = {}
    for agg_idx, aggregator_location in enumerate(aggregator_locations):
    for learner_idx, learner_location in enumerate(learner_locations):
    for agg_idx, aggregator_location in aggregator_locations:
Collaborator

My guess is that this does not work as intended on a multi-node cluster, because you want the learners/workers to then be placed by Ray on the same nodes. As soon as the number of Learners times the number of GPUs per Learner becomes greater than the number of GPUs per node, the placement group might not be schedulable anywhere.
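
A toy illustration of that capacity concern (all numbers made up): with 2 Learners x 2 GPUs each that must land on a single node, a cluster built from 2-GPU nodes can never host the group, even though it has enough GPUs in total:

    # Purely illustrative numbers for the comment above.
    num_learners = 2
    gpus_per_learner = 2
    gpus_per_node = 2

    gpus_needed_on_one_node = num_learners * gpus_per_learner  # 4
    fits_on_one_node = gpus_needed_on_one_node <= gpus_per_node
    print(fits_on_one_node)  # False: a strictly packed group this large can
                             # never be scheduled on a cluster of 2-GPU nodes.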

Contributor Author

Yeah, I'm still trying to figure this one out. This is still in a hacky state and needs more testing (especially with >1 Learner and >0 aggregation actors per Learner) before I merge it. Thanks for your help thus far.

@sven1977 sven1977 added the rllib (RLlib related issues), rllib-algorithms (An RLlib algorithm/Trainer is not learning.), and rllib-newstack labels Jan 15, 2025
Signed-off-by: sven1977 <[email protected]>
Signed-off-by: sven1977 <[email protected]>
@sven1977 sven1977 enabled auto-merge (squash) January 16, 2025 13:16
@github-actions github-actions bot disabled auto-merge January 16, 2025 13:16
@github-actions github-actions bot added the go (add ONLY when ready to merge, run all tests) label Jan 16, 2025
@sven1977 sven1977 enabled auto-merge (squash) January 16, 2025 14:26
@sven1977 sven1977 merged commit 58c72f6 into ray-project:master Jan 16, 2025
7 of 8 checks passed
@sven1977 sven1977 deleted the appo_accelerate_02_fixes_and_enhancements branch January 16, 2025 17:28
Labels
go (add ONLY when ready to merge, run all tests), rllib (RLlib related issues), rllib-algorithms (An RLlib algorithm/Trainer is not learning.), rllib-newstack
2 participants