
[QST]: DASK AND CUGRAPH #4831

Open
williamcolegithub opened this issue Dec 12, 2024 · 3 comments
Labels: question (Further information is requested)

williamcolegithub commented Dec 12, 2024

What is your question?

Hello! For the life of me, I cannot get SLURMCluster, Dask, and cuGraph to cooperate. I can get many configurations of SLURMCluster Dask and cuDF to work, but cuGraph gives me various errors: a generic cuFile error, modules that don't exist, code that runs indefinitely, etc. All existing documentation appears to use LocalCUDACluster, which does not work for my setup. Is LocalCUDACluster even truly multi-node + multi-GPU, or just multi-GPU on a single node?

I know my environments are consistent and up to date.

I'm looking for any better examples, or I'd be happy to hop on a quick call.
Thank you!

Code of Conduct

  • I agree to follow cuGraph's Code of Conduct
  • I have searched the open issues and have found no duplicates for this question
williamcolegithub added the question (Further information is requested) label Dec 12, 2024
jnke2016 (Contributor) commented Dec 17, 2024

@williamcolegithub thank you for reaching out. Can you provide more information on how you set up your cluster, please?

All existing documentation appears to use LocalCUDACluster

LocalCUDACluster only supports single-node multi-GPU. Hence, if you want to run across multiple nodes, you will need to start the scheduler on one of your nodes with dask-scheduler and start each worker with a CLI command like dask-cuda-worker.
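
To make that concrete, here is a rough sketch of the pattern (hostnames, ports, and worker counts below are placeholders, not tested commands): start the scheduler and workers from the CLI, then connect a Client and initialize cuGraph's comms before calling any multi-GPU algorithm.

```python
# Sketch only -- assumes you have already started, inside your SLURM allocation:
#   on node0:        dask-scheduler
#   on every node:   dask-cuda-worker tcp://node0:8786
# (node0 and the port are placeholders for your site)
from dask.distributed import Client
from cugraph.dask.comms import comms as Comms

# Connect to the externally started scheduler instead of using LocalCUDACluster
client = Client("tcp://node0:8786")
client.wait_for_workers(n_workers=2)  # adjust to the number of GPU workers you started

# cuGraph's multi-GPU algorithms need the comms layer initialized once per cluster
Comms.initialize(p2p=True)

# ... run multi-GPU cuGraph work here ...

Comms.destroy()
client.close()
```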

williamcolegithub (Author) commented Dec 19, 2024

@jnke2016 I see! OK, I will reach out to my Slurm team. Yes, I have tried dask-cuda-worker and it resulted in a failure to connect with the nanny, so I have been using dask-worker instead.
From my perspective, the documentation was not clear that dask-cuda-worker was essential; it appeared optional. Thank you for clarifying and for the fast reply. I will reach out if issues persist.

By any chance, is there a distributed notebook you all recommend? I only find examples using LocalCUDACluster, even in notebooks that claim to be multi-node. For context, a rough sketch of what I am trying to run is below.
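
This is roughly the kind of workload I am attempting against the cluster (a sketch; the scheduler address, edge-list path, and column names are placeholders):

```python
# Rough sketch of the multi-node, multi-GPU cuGraph workflow I am attempting
# (scheduler address and data path are placeholders)
import dask_cudf
import cugraph
import cugraph.dask as dask_cugraph
from dask.distributed import Client
from cugraph.dask.comms import comms as Comms

client = Client("tcp://node0:8786")   # scheduler started externally with dask-scheduler
Comms.initialize(p2p=True)            # required before any multi-GPU cuGraph call

# Read the edge list as a distributed cuDF DataFrame from a shared filesystem
edges = dask_cudf.read_csv(
    "/shared/data/edges.csv",
    names=["src", "dst"],
    dtype=["int32", "int32"],
)

# Build a distributed graph and run a multi-GPU algorithm
G = cugraph.Graph(directed=True)
G.from_dask_cudf_edgelist(edges, source="src", destination="dst")
pr = dask_cugraph.pagerank(G)         # returns a dask_cudf DataFrame of vertex/pagerank
print(pr.head())

Comms.destroy()
client.close()
```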

quasiben (Member) commented:

We have examples of how to deploy with SLURM on the HPC Deployment page:
https://docs.rapids.ai/deployment/stable/hpc/

If you run into trouble, please ping here.

cc @jacobtomlinson

As for cuGraph notebooks, that is a great question. @acostadon / @jnke2016 / @rlratzel, is that something you know?
