Using multi-GPU on nuFHE (issue #6)
Technically, yes, although currently you will have to handle the data transfer yourself. The GPU the work happens on is defined by the [...] (see [...]). First, [...]. Second, if you just create two [...]. All in all, it is not straightforward at the moment. I will leave this issue open and try to figure out what kind of interface should be exposed to make multi-GPU convenient.
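To illustrate the "handle the data transfer yourself" point, here is a minimal process-per-GPU sketch. Everything in it is an assumption for illustration: the function names (`worker`, `run_on_gpus`) are hypothetical, and the actual nuFHE work is replaced by a placeholder doubling so the structure is visible without a GPU.

```python
import os
from multiprocessing import get_context

def worker(args):
    device_id, batch = args
    # Pin this worker to one GPU before any CUDA library is imported;
    # a nuFHE/PyCUDA context created afterwards would only see that device.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(device_id)
    # Placeholder for the real per-GPU work (key import, gates, bootstrapping).
    return [x * 2 for x in batch]

def run_on_gpus(data, num_gpus=2):
    # Interleave the batch across devices; each chunk is processed in its
    # own process, so host<->device transfers never cross GPUs.
    chunks = [(i, data[i::num_gpus]) for i in range(num_gpus)]
    # "fork" keeps this sketch simple; real CUDA code would typically need
    # "spawn", since initialized CUDA state must not be forked.
    with get_context("fork").Pool(num_gpus) as pool:
        results = pool.map(worker, chunks)
    # Reassemble the results in the original order.
    out = [None] * len(data)
    for gpu, chunk in enumerate(results):
        out[gpu::num_gpus] = chunk
    return out

print(run_on_gpus(list(range(8))))  # -> [0, 2, 4, 6, 8, 10, 12, 14]
```

The point of the pattern is that device selection happens per process, before any GPU library is loaded, which matches the current situation where each context is bound to a single device.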
I am not sure I understand you. Could you explain in more detail? And perhaps open a separate issue, since this one will be reserved for multi-GPU.
Also really interested in the evolution of this ticket.
Has anyone ever tested this on a VMware vGPU rig?
I am not sure at the moment what level of abstraction would be best. The minimal version would be something like this: [...] (this example is already working on my machine, I just need to polish some things in the implementation). Essentially, this means one [...]. It may be possible to do single-threaded multi-GPU, but there are several problems to solve: CUDA and OpenCL use different models for that, and I need to check whether PyCUDA and PyOpenCL actually expose the corresponding API (and [...]). Edit: there may be problems if single-kernel bootstrapping is not available, which would mean 500 kernel calls instead of several, so some internal thread/process pool will be necessary to parallelize that.
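The thread/process pool mentioned in the edit could be sketched roughly as follows. This is only a shape, not nuFHE code: `bootstrap_on` and `parallel_bootstrap` are hypothetical names, and the body of `bootstrap_on` is a stand-in for the ~500 kernel launches of one bootstrap on the context bound to that device.

```python
from concurrent.futures import ThreadPoolExecutor

def bootstrap_on(device_id, ciphertext):
    # Hypothetical stand-in for the many kernel calls of one bootstrap;
    # in reality this would launch kernels on the context/queue that is
    # bound to device_id.
    return ("gpu%d" % device_id, ciphertext)

def parallel_bootstrap(ciphertexts, num_gpus=2):
    # One host thread per GPU keeps each device's queue busy, so the
    # Python-side launch overhead of the many small kernels is paid
    # concurrently rather than serially.
    with ThreadPoolExecutor(max_workers=num_gpus) as pool:
        futures = [
            pool.submit(bootstrap_on, i % num_gpus, ct)
            for i, ct in enumerate(ciphertexts)
        ]
        return [f.result() for f in futures]

print(parallel_bootstrap(["ct0", "ct1", "ct2", "ct3"]))
```

Threads (rather than processes) are enough here because the hypothetical per-device contexts would already live in one process; the pool only hides the per-launch host overhead.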
I've added a multi-GPU example ([...]).
Hello. I have 2 questions about nuFHE.
I'll wait for your reply.
Thanks.