Conversation

@bkryu (Collaborator) commented Dec 5, 2025

📌 Description

Currently the microbenchmarks in flashinfer_benchmark.py use CUDA events to measure time. Using CUPTI instead offers two benefits: (a) timings that match nsys more closely, and (b) an L2 cache flush between iterations, implemented in the benchmark script.

This PR:

  • Makes CUPTI timing the default behavior.
  • Turns --use_cupti into a no-op that prints a deprecation warning.
  • Adds a --use_cuda_events flag for users who would like to keep using CUDA events.
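The resulting flag handling reduces to roughly the following minimal sketch (parser setup is abbreviated to the two flags in question; the core inversion matches the snippet quoted in the reviews below):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--use_cupti",
    action="store_true",
    help="[DEPRECATED] No-op: CUPTI timing is now the default.",
)
parser.add_argument(
    "--use_cuda_events",
    action="store_true",
    help="Time GPU kernels with CUDA events instead of CUPTI.",
)
args = parser.parse_args()

if args.use_cupti:
    print(
        "[WARNING] --use_cupti is deprecated and will be removed in a future release. "
        "CUPTI is now enabled by default."
    )
# CUPTI is the default; --use_cuda_events opts out and wins if both flags are given.
args.use_cupti = not args.use_cuda_events
```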

Example outputs on an RTX 5090:

## Providing --use_cupti (deprecated)
$ python3 flashinfer_benchmark.py --routine BatchDecodeWithPagedKVCacheWrapper --backends fa2_tc cudnn trtllm-native --page_size 16 --batch_size 1 --s_qo 1 --s_kv 8192 --num_qo_heads 64 --num_kv_heads 8 --head_dim_qk 128 --head_dim_vo 128 --refcheck --q_dtype bfloat16 --kv_dtype bfloat16 --use_cupti
[WARNING] --use_cupti is deprecated and will be removed in a future release. CUPTI is now enabled by default.
[PERF] fa2_tc         :: median time 0.032 ms; std 0.001 ms; achieved tflops 8.482 TFLOPs/sec; achieved tb_per_sec 1.061 TB/sec
[PERF] cudnn          :: median time 0.030 ms; std 0.000 ms; achieved tflops 8.972 TFLOPs/sec; achieved tb_per_sec 1.123 TB/sec
[PERF] trtllm-native  :: median time 0.036 ms; std 0.001 ms; achieved tflops 7.561 TFLOPs/sec; achieved tb_per_sec 0.946 TB/sec

## Default behavior: same as --use_cupti, except without the warning message.
$ python3 flashinfer_benchmark.py --routine BatchDecodeWithPagedKVCacheWrapper --backends fa2_tc cudnn trtllm-native --page_size 16 --batch_size 1 --s_qo 1 --s_kv 8192 --num_qo_heads 64 --num_kv_heads 8 --head_dim_qk 128 --head_dim_vo 128 --refcheck --q_dtype bfloat16 --kv_dtype bfloat16
[PERF] fa2_tc         :: median time 0.032 ms; std 0.001 ms; achieved tflops 8.452 TFLOPs/sec; achieved tb_per_sec 1.058 TB/sec
[PERF] cudnn          :: median time 0.030 ms; std 0.000 ms; achieved tflops 8.982 TFLOPs/sec; achieved tb_per_sec 1.124 TB/sec
[PERF] trtllm-native  :: median time 0.035 ms; std 0.001 ms; achieved tflops 7.612 TFLOPs/sec; achieved tb_per_sec 0.952 TB/sec

## Providing --use_cuda_events to time with CUDA events
# NOTE: performance measured with CUDA events is spuriously inflated because the L2 cache is not flushed between iterations; the RTX 5090's peak memory bandwidth is 1.79 TB/s, below the 2+ TB/sec figures reported here
$ python3 flashinfer_benchmark.py --routine BatchDecodeWithPagedKVCacheWrapper --backends fa2_tc cudnn trtllm-native --page_size 16 --batch_size 1 --s_qo 1 --s_kv 8192 --num_qo_heads 64 --num_kv_heads 8 --head_dim_qk 128 --head_dim_vo 128 --refcheck --q_dtype bfloat16 --kv_dtype bfloat16 --use_cuda_events
[PERF] fa2_tc         :: median time 0.015 ms; std 0.000 ms; achieved tflops 18.461 TFLOPs/sec; achieved tb_per_sec 2.310 TB/sec
[PERF] cudnn          :: median time 0.017 ms; std 0.000 ms; achieved tflops 16.182 TFLOPs/sec; achieved tb_per_sec 2.025 TB/sec
[PERF] trtllm-native  :: median time 0.015 ms; std 0.000 ms; achieved tflops 18.120 TFLOPs/sec; achieved tb_per_sec 2.267 TB/sec
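The gap above comes from L2 residency: without a flush, later iterations re-read the KV cache from L2 instead of DRAM. A common way to force a cold L2 between timed launches is to overwrite a buffer larger than the cache first. The sketch below illustrates the technique with CUDA-event timing in PyTorch; the buffer size and helper name are illustrative, not the benchmark's actual implementation:

```python
import torch

def bench_with_l2_flush(kernel, iters=100):
    """Time `kernel` with CUDA events, zeroing a large buffer before each
    timed launch so the kernel starts from a cold L2 cache. Illustrative only."""
    # 256 MiB comfortably exceeds the L2 capacity of current GPUs.
    flush_buf = torch.empty(256 * 1024 * 1024, dtype=torch.uint8, device="cuda")
    times_ms = []
    for _ in range(iters):
        flush_buf.zero_()  # evict the kernel's working set from L2
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        kernel()
        end.record()
        torch.cuda.synchronize()
        times_ms.append(start.elapsed_time(end))
    times_ms.sort()
    return times_ms[len(times_ms) // 2]  # median, as reported by the [PERF] lines
```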

🔍 Related Issues

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • New Features

    • Added --use_cuda_events flag for GPU kernel timing.
  • Chores

    • Deprecated --use_cupti flag; displays warning when set.
    • CUPTI timing now enabled by default.


@coderabbitai bot (Contributor) commented Dec 5, 2025

Walkthrough

The benchmark script is updated to deprecate the --use_cupti flag and introduce a new --use_cuda_events option for GPU timing. The timing logic is inverted so CUPTI is used by default unless CUDA events are explicitly requested, and a deprecation warning is displayed when the old flag is used.

Changes

  • GPU timing flag migration (benchmarks/flashinfer_benchmark.py): deprecates the --use_cupti flag with a warning, adds the --use_cuda_events flag, and inverts the timing logic so CUPTI is the default unless CUDA events are explicitly preferred.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

  • Single file modification with straightforward argument handling changes
  • Deprecation warning and flag inversion are self-contained logic updates
  • No complex interactions or business logic alterations

Suggested reviewers

  • yzh119
  • cyx-6

Poem

🐰 Old flags fade as new ones shine,
CUDA events now draw the line,
CUPTI takes the default crown,
Backwards-compat won't let us down! ✨

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

  • Title check ✅ Passed: The title accurately describes the main change: making CUPTI the default timing method for microbenchmarks, which is the primary objective of this PR.
  • Description check ✅ Passed: The description provides comprehensive context including the rationale for the change, concrete implementation details, and example outputs demonstrating all three use cases. All required template sections are addressed.
  • Docstring coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.

@gemini-code-assist bot (Contributor)

Summary of Changes

Hello @bkryu, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines and improves the accuracy of microbenchmark timing in the flashinfer_benchmark.py script. Making CUPTI the default measurement method brings better nsys-matching accuracy and an L2 cache flush between iterations. The change deprecates the explicit --use_cupti flag, which now only triggers a warning, and introduces --use_cuda_events for users who need the previous timing mechanism.

Highlights

  • Default Timing Method: The microbenchmarks in flashinfer_benchmark.py now use CUPTI for timing GPU kernels by default, enhancing accuracy and L2 cache flush handling.
  • Deprecated Flag: The --use_cupti flag is now deprecated and will issue a warning when used, as CUPTI is enabled automatically.
  • New Timing Option: A new --use_cuda_events flag has been introduced, allowing users to explicitly opt for CUDA event-based timing if preferred.

@gemini-code-assist bot left a comment

Code Review

This pull request makes CUPTI the default timing mechanism for microbenchmarks, which is a good improvement for accuracy. A new --use_cuda_events flag is introduced to revert to the old behavior, and the --use_cupti flag is deprecated with a warning. The implementation is logical and correct. I have one minor suggestion to improve code clarity by removing a redundant comment.

print(
    "[WARNING] --use_cupti is deprecated and will be removed in a future release. CUPTI is now enabled by default."
)
# use_cupti is deprecated and will be removed in a future release. CUPTI is now enabled by default.
gemini-code-assist bot commented (severity: medium):

This comment is redundant as it almost exactly repeats the warning message printed on line 168 for the deprecated --use_cupti flag. It can be removed to improve code clarity and reduce noise.

@bkryu bkryu self-assigned this Dec 5, 2025
@coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
benchmarks/flashinfer_benchmark.py (2)

79-89: Clarify deprecated --use_cupti help to describe it as a no-op and point to --use_cuda_events.

Right now the help says "[DEPRECATED] Use CUPTI for timing GPU kernels. This is now the default behavior.", but the flag no longer changes behavior; it only triggers a warning. It’d be clearer to state that it’s a no-op and to point users to --use_cuda_events when they want CUDA events.

For example:

-    parser.add_argument(
-        "--use_cupti",
-        action="store_true",
-        default=False,
-        help="[DEPRECATED] Use CUPTI for timing GPU kernels. This is now the default behavior.",
-    )
+    parser.add_argument(
+        "--use_cupti",
+        action="store_true",
+        default=False,
+        help=(
+            "[DEPRECATED] No-op: CUPTI timing is enabled by default. "
+            "Use --use_cuda_events to time with CUDA events instead."
+        ),
+    )

This makes it explicit that the flag is deprecated, does not affect behavior, and what the new preferred flag is.


165-173: Deprecation logic matches the new semantics; consider documenting precedence and sending warnings to stderr.

The resolution

if args.use_cupti:
    print("[WARNING] --use_cupti is deprecated ...")
args.use_cupti = not args.use_cuda_events

correctly implements:

  • CUPTI on by default (no flags),
  • --use_cuda_events disabling CUPTI,
  • --use_cupti as a deprecated, behavior‑neutral flag.

Two small polish suggestions:

  1. When both flags are passed, you print the deprecation warning but end up running with CUDA events (use_cupti = False). That’s sensible, but it might be worth mentioning in the help or warning that --use_cuda_events takes precedence if both are provided.

  2. Consider directing the warning to stderr to avoid polluting stdout for scripts that parse benchmark output:

-    if args.use_cupti:
-        print(
-            "[WARNING] --use_cupti is deprecated and will be removed in a future release. CUPTI is now enabled by default."
-        )
+    if args.use_cupti:
+        print(
+            "[WARNING] --use_cupti is deprecated and will be removed in a future release. "
+            "CUPTI is now enabled by default; use --use_cuda_events to opt out.",
+            file=sys.stderr,
+        )

Overall, the behavioral change looks aligned with the PR description.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b972005 and 689746d.

📒 Files selected for processing (1)
  • benchmarks/flashinfer_benchmark.py (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs

@yzh119 (Collaborator) left a comment

Thanks for the improvement. As far as I remember, CUPTI only works with CUDA 13.*, right? Where should we inform users about this limitation?

@bkryu (Collaborator, Author) commented Dec 6, 2025

Thanks for the improvement. As far as I remember, CUPTI only works with CUDA 13.*, right? Where should we inform users about this limitation?

This is actually a good point. We have logic in flashinfer.testing.bench_gpu_time_with_cupti that falls back to CUDA events after informing the user, as sketched below.
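The pattern is roughly the following sketch (the import name `cupti` and the underscore-prefixed helpers are assumptions for illustration; the real logic lives in flashinfer/testing/utils.py):

```python
import warnings

def bench_gpu_time_with_cupti(fn, **kwargs):
    """Sketch: measure with CUPTI when available, otherwise warn and fall
    back to CUDA events. Helper names below are hypothetical placeholders."""
    try:
        import cupti  # assumed import name of the cupti-python pip package
    except ImportError:
        warnings.warn(
            "CUPTI is not installed. Try 'pip install -U cupti-python'. "
            "Falling back to CUDA events for benchmarking."
        )
        return _bench_with_cuda_events(fn, **kwargs)  # hypothetical helper
    return _bench_with_cupti(fn, **kwargs)  # hypothetical CUPTI measurement path
```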

For example in a CUDA 12.9 container on B200:

## Default behavior: tries CUPTI, falls back to CUDA events with a warning
$ python3 flashinfer_benchmark.py --routine BatchDecodeWithPagedKVCacheWrapper --backends fa2_tc cudnn trtllm-native --page_size 16 --batch_size 1 --s_qo 1 --s_kv 8192 --num_qo_heads 64 --num_kv_heads 8 --head_dim_qk 128 --head_dim_vo 128 --refcheck --q_dtype bfloat16 --kv_dtype bfloat16
/opt/conda/envs/py312/lib/python3.12/site-packages/flashinfer/testing/utils.py:1024: UserWarning: CUPTI is not installed. Try 'pip install -U cupti-python'. Falling back to CUDA events for benchmarking.
  return bench_gpu_time_with_cupti(
/opt/conda/envs/py312/lib/python3.12/site-packages/flashinfer/testing/utils.py:1024: UserWarning: CUPTI is not installed. Try 'pip install -U cupti-python'. Falling back to CUDA events for benchmarking.
  return bench_gpu_time_with_cupti(
[PERF] fa2_tc         :: median time 0.014 ms; std 0.000 ms; achieved tflops 19.005 TFLOPs/sec; achieved tb_per_sec 2.378 TB/sec
[PERF] cudnn          :: median time 0.014 ms; std 0.000 ms; achieved tflops 19.859 TFLOPs/sec; achieved tb_per_sec 2.485 TB/sec
[PERF] trtllm-native  :: median time 0.012 ms; std 0.000 ms; achieved tflops 23.001 TFLOPs/sec; achieved tb_per_sec 2.878 TB/sec

## With --use_cuda_events there is no warning, and the perf numbers are identical
$ python3 flashinfer_benchmark.py --routine BatchDecodeWithPagedKVCacheWrapper --backends fa2_tc cudnn trtllm-native --page_size 16 --batch_size 1 --s_qo 1 --s_kv 8192 --num_qo_heads 64 --num_kv_heads 8 --head_dim_qk 128 --head_dim_vo 128 --refcheck --q_dtype bfloat16 --kv_dtype bfloat16 --use_cuda_events
[PERF] fa2_tc         :: median time 0.014 ms; std 0.000 ms; achieved tflops 19.009 TFLOPs/sec; achieved tb_per_sec 2.378 TB/sec
[PERF] cudnn          :: median time 0.014 ms; std 0.000 ms; achieved tflops 19.859 TFLOPs/sec; achieved tb_per_sec 2.485 TB/sec
[PERF] trtllm-native  :: median time 0.012 ms; std 0.000 ms; achieved tflops 22.998 TFLOPs/sec; achieved tb_per_sec 2.878 TB/sec

@yzh119 (Collaborator) commented Dec 6, 2025

Thanks for the clarification, it makes sense to me.

@yzh119 yzh119 merged commit 6dfc1ba into flashinfer-ai:main Dec 6, 2025
4 checks passed
@bkryu bkryu deleted the benchmark_cupti_default branch December 8, 2025 18:14