Make the stream module a part of the public API #1775
base: branch-25.02
Conversation
# @singledispatchmethod
# def _init_from_stream(self, obj):
#     if obj is None:
#         self._init_with_new_cuda_stream()
#         return
#     try:
#         protocol = getattr(obj, "__cuda_stream__")
#     except AttributeError:
#         raise ValueError(
#             "Argument must be None, a Stream, or implement __cuda_stream__"
#         )
#     if protocol[0] != 0:
#         raise ValueError("Only protocol version 0 is supported")
#
#     self._cuda_stream = <cudaStream_t>obj
#     self.owner = obj
#
# @_init_from_stream.register
# def _(self, stream: Stream):
#     self._cuda_stream, self._owner = stream._cuda_stream, stream._owner
#
# try:
#     from numba import cuda
#
#     @_init_from_stream.register
#     def _(self, obj: cuda.cudadrv.driver.Stream):
#         self._cuda_stream = <cudaStream_t><uintptr_t>(int(obj))
#         self._owner = obj
# except ImportError:
#     pass
#
# try:
#     import cupy
#
#     @_init_from_stream.register(cupy.cuda.stream.Stream)
#     def _(self, obj):
#         self._cuda_stream = <cudaStream_t><uintptr_t>(obj.ptr)
#         self._owner = obj
#
#     @_init_from_stream.register(cupy.cuda.stream.ExternalStream)
#     def _(self, obj):
#         self._cuda_stream = <cudaStream_t><uintptr_t>(obj.ptr)
#         self._owner = obj
# except ImportError:
#     pass
For the reviewer: I left this commented code here just to note that registering the dispatch methods in a try-except block is not working. The code compiles, but the tests fail with `ImportError: module cuda not available`. I'll delete this comment if we can't get this approach to work.
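As a possible workaround, here is a minimal, hypothetical sketch (in plain Python rather than this PR's Cython) of one way to avoid registering dispatch handlers for numba/cupy types at import time: keep a single constructor that resolves third-party streams lazily via duck typing, so neither library has to be importable when the class is defined. The attribute checks (`.ptr`, `.handle`) are assumptions for illustration, not part of this PR.

```python
# Hypothetical sketch: duck-type third-party streams at call time instead of
# registering singledispatch handlers inside try/except at import time.
class Stream:
    def __init__(self, obj=None):
        if obj is None:
            # Stand-in for self._init_with_new_cuda_stream() in the real Cython class.
            self._cuda_stream, self._owner = 0, None
            return
        if isinstance(obj, Stream):
            self._cuda_stream, self._owner = obj._cuda_stream, obj._owner
            return
        if hasattr(obj, "ptr"):
            # e.g. cupy.cuda.Stream / ExternalStream expose an integer handle as .ptr
            self._cuda_stream, self._owner = int(obj.ptr), obj
            return
        if hasattr(obj, "handle"):
            # e.g. numba.cuda stream objects; they support int() to get the handle
            self._cuda_stream, self._owner = int(obj), obj
            return
        # Fall back to the __cuda_stream__ protocol checked in the commented code above.
        protocol = getattr(obj, "__cuda_stream__", None)
        if protocol is None:
            raise ValueError(
                "Argument must be None, a Stream, or implement __cuda_stream__"
            )
        if protocol[0] != 0:
            raise ValueError("Only protocol version 0 is supported")
        self._cuda_stream, self._owner = protocol[1], obj
```

With this shape, passing a CuPy or numba stream would go through the duck-typed branches without this module ever importing cupy or numba, which sidesteps the `ImportError` seen in the tests.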
Thanks for your work on this! I will need a bit of time to play with this and see what works well in the design (or not).

Do we need both `cuda_stream.pyx` and `stream.pyx`? Could/should we combine these files?
import warnings

from rmm.pylibrmm.stream import (  # noqa: F401
Rather than using `noqa: F401`, please define `__all__` for this module (see the sketch after the suggested change below).
)

warnings.warn(
    "The `rmm.pylibrmm.stream` module is deprecated in will be removed in a future release. Use `rmm.pylibrmm.stream` instead.",
"The `rmm.pylibrmm.stream` module is deprecated in will be removed in a future release. Use `rmm.pylibrmm.stream` instead.", | |
"The `rmm._cuda.stream` module is deprecated in 25.02 and will be removed in a future release. Use `rmm.pylibrmm.stream` instead.", |
What's the rationale of making this a public API now that we have cuda.core...? Couldn't it stay private while we perfect cuda.core and make it production ready? I might have missed something.
We want to start adding streams to pylibcudf APIs in RAPIDS 25.02. Exposing streams publicly is important for speed-of-light performance on several workloads. Eventually we will want to move to cuda.core, but I don't think we want RAPIDS to be tied to cuda.core's timeline at this moment. I would like cuda.core to take whatever time is needed to have a stable design. Once it's stable, we will be able to deprecate this in RAPIDS and move to cuda.core within ~2-4 months (one release to deprecate the existing design, then make the change in the subsequent release). Having this in RMM/pylibcudf in the short term also gives us a way to test the cuda.core APIs with a full application by changing imports and doing minor refactoring.
Thanks for the explanation!
First of all, I should not have given the impression that […].

Next, I understand pylibcudf's needs and I am very supportive of having all APIs take the stream explicitly. But I don't think it needs to be tied to any concrete type (not even […]).

Finally, with regard to […]
@leofang I spent some time reviewing the Stream class in […]
Description
Closes #1770.
Checklist