1 parent b169b03 commit 5256287
bitsandbytes/functional.py
@@ -943,7 +943,7 @@ def dequantize_4bit(
     """Dequantizes a packed 4-bit quantized tensor.
 
     The input tensor is dequantized by dividing it into blocks of `blocksize` values.
-    The the absolute maximum value within these blocks is used for scaling
+    The absolute maximum value within these blocks is used for scaling
     the non-linear dequantization.
 
     Args:
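As context for the docstring being edited, here is a minimal usage sketch of the block-wise 4-bit round trip it describes: quantize_4bit packs the tensor into blocks of `blocksize` values and records each block's absolute maximum, and dequantize_4bit uses those per-block absmax scales for the non-linear dequantization. This sketch assumes a CUDA device and the quantize_4bit/dequantize_4bit pair from bitsandbytes.functional; the blocksize=64 and quant_type="nf4" arguments are illustrative choices, not values taken from this commit.

# Minimal sketch (not part of the commit): block-wise 4-bit quantize/dequantize round trip.
import torch
import bitsandbytes.functional as F

A = torch.randn(1024, 1024, device="cuda", dtype=torch.float16)

# Pack into 4-bit values; quant_state carries the per-block absmax scales.
packed, quant_state = F.quantize_4bit(A, blocksize=64, quant_type="nf4")

# Reconstruct an approximation of A using the stored block-wise absmax.
A_dq = F.dequantize_4bit(packed, quant_state)

# Quantization is lossy, so expect a small reconstruction error rather than exact recovery.
print((A - A_dq).abs().max())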