
Commit dc0ade7

docs: Document CUDA version support in README and installation page (#2197)
## 📌 Description

Document the supported CUDA versions (12.6, 12.8, 13.0, 13.1) in the public docs, and state that our goal is to follow PyTorch's supported CUDA versions plus the latest CUDA release.

## 🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

### ✅ Pre-commit Checks

- [x] I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- [x] I have installed the hooks with `pre-commit install`.
- [x] I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

> If you are unsure about how to set up `pre-commit`, see [the pre-commit documentation](https://pre-commit.com/).

## 🧪 Tests

- [x] Tests have been added or updated as needed.
- [x] All tests are passing (`unittest`, etc.).

## Summary by CodeRabbit

* **Documentation**
  * Updated CUDA support documentation with explicit supported versions: 12.6, 12.8, 13.0, and 13.1.
  * Documented CUDA version prerequisites for installation.
  * Added clarification that supported CUDA versions align with PyTorch's officially supported versions and the latest CUDA release.
  * Enhanced the GPU support section header for better clarity.
1 parent 9f1cb89 commit dc0ade7

File tree

2 files changed: +10 −1 lines changed

README.md

Lines changed: 5 additions & 1 deletion

```diff
@@ -187,10 +187,14 @@ For detailed information about logging levels, configuration, and advanced featu
 
 Starting from FlashInfer v0.2, users can customize their own attention variants with additional parameters. For more details, refer to our [JIT examples](https://github.com/flashinfer-ai/flashinfer/blob/main/tests/utils/test_jit_example.py).
 
-## GPU Support
+## GPU and CUDA Support
 
 FlashInfer currently provides support for NVIDIA SM architectures 75 and higher and beta support for 103, 110, 120, and 121.
 
+**Supported CUDA Versions:** 12.6, 12.8, 13.0, 13.1
+
+> **Note:** FlashInfer strives to follow PyTorch's supported CUDA versions plus the latest CUDA release.
+
 ## Adoption
 
 We are thrilled to share that FlashInfer is being adopted by many cutting-edge projects, including but not limited to:
```
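The GPU support statement added to the README can be sketched as a small classifier. This is a hypothetical helper for illustration only (the function name is not part of FlashInfer's API); the SM numbers come from this commit's README text:

```python
# Per the README: SM architectures 75 and higher are supported,
# with beta support for 103, 110, 120, and 121.
BETA_SM = {103, 110, 120, 121}

def flashinfer_support_tier(sm: int) -> str:
    """Classify an NVIDIA SM architecture number (hypothetical helper)."""
    if sm in BETA_SM:       # beta architectures are checked first, since all are >= 75
        return "beta"
    if sm >= 75:
        return "supported"
    return "unsupported"
```

For example, SM 80 (A100) would classify as "supported", SM 120 as "beta", and SM 70 (V100) as "unsupported".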

docs/installation.rst

Lines changed: 5 additions & 0 deletions

```diff
@@ -15,6 +15,11 @@ Prerequisites
 
 - Python: 3.10, 3.11, 3.12, 3.13, 3.14
 
+- CUDA: 12.6, 12.8, 13.0, 13.1
+
+.. note::
+   FlashInfer strives to follow PyTorch's supported CUDA versions plus the latest CUDA release.
+
 Quick Start
 ^^^^^^^^^^^
 
```
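The new CUDA prerequisite could be checked, for example, by comparing a detected version's major.minor against the documented list. This is a hypothetical sketch, not FlashInfer code; only the version list comes from this commit:

```python
# Supported CUDA versions documented in this commit.
SUPPORTED_CUDA = {"12.6", "12.8", "13.0", "13.1"}

def is_supported_cuda(version: str) -> bool:
    """Check whether a CUDA version string's major.minor is documented as supported."""
    major_minor = ".".join(version.split(".")[:2])
    return major_minor in SUPPORTED_CUDA
```

In practice, the detected version string might come from a source such as `torch.version.cuda` or the output of `nvcc --version`; a patch release like "12.8.1" would match the documented "12.8".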
