
Commit c9bce2b

Bump minimum PyTorch to 2.3 (#1754)
* Bump minimum PyTorch to 2.3
* Tests: Fix Windows numpy<2 compatibility for torch<2.4.1
1 parent dd1929b commit c9bce2b

File tree

6 files changed: +13 −28 lines changed


.github/workflows/tests.yml

Lines changed: 8 additions & 13 deletions
@@ -102,7 +102,7 @@ jobs:
       matrix:
         os: [ubuntu-22.04, ubuntu-22.04-arm, windows-2025, macos-15]
         # Test with the oldest supported torch version, the newest two stable/RC.
-        torch_version: ["2.2.2", "2.7.1", "2.8.0"]
+        torch_version: ["2.3.1", "2.7.1", "2.8.0"]
         include:
           - os: ubuntu-22.04
             arch: x86_64
@@ -118,7 +118,7 @@ jobs:
             arch: arm64
         exclude:
           - os: ubuntu-22.04-arm
-            torch_version: "2.2.2"
+            torch_version: "2.3.1"

     runs-on: ${{ matrix.runner || matrix.os }}
     env:
@@ -144,13 +144,14 @@ jobs:

       - name: Install dependencies
         run: |
-          pip install torch==${{ matrix.torch_version }} --index-url https://download.pytorch.org/whl/${{ (matrix.torch_version == '2.8.0' && 'test/cpu') || 'cpu' }}
+          pip install torch==${{ matrix.torch_version }} --index-url https://download.pytorch.org/whl/cpu
           pip install -e ".[test]"
           pip install pytest-cov

-      # We need to downgrade to numpy<2 for torch<2.3 compatibility.
+      # We need to downgrade to numpy<2 for torch<2.4.1 compatibility on Windows
+      # See: https://github.com/pytorch/pytorch/issues/131668
       - name: Downgrade NumPy
-        if: startsWith(matrix.torch_version, '2.2.')
+        if: startsWith(matrix.os, 'windows') && startsWith(matrix.torch_version, '2.3.')
         run: pip install "numpy<2"

       - name: Show installed packages
@@ -345,7 +346,7 @@ jobs:
       cuda_version: ["11.8.0", "12.6.3", "12.8.1", "12.9.1"]
       include:
         - cuda_version: "11.8.0"
-          torch_version: "2.2.2"
+          torch_version: "2.3.1"
           pypi_index: "https://download.pytorch.org/whl/cu118"
         - cuda_version: "12.6.3"
           torch_version: "2.6.0"
@@ -374,7 +375,7 @@ jobs:
           gpu: T4
           runner: CUDA-Windows-x64
           cuda_version: "11.8.0"
-          torch_version: "2.2.0"
+          torch_version: "2.3.1"
           pypi_index: "https://download.pytorch.org/whl/cu118"
         - os: windows-2025
           arch: x86_64
@@ -430,12 +431,6 @@ jobs:
           pip install --pre torch~=${{ matrix.torch_version }}.dev0 --index-url ${{ matrix.pypi_index }}
           pip install -e ".[test]"
           pip install pytest-cov
-
-      # We need to downgrade to numpy<2 for torch<2.3 compatibility.
-      - name: Downgrade NumPy
-        if: startsWith(matrix.torch_version, '2.2.')
-        run: pip install "numpy<2"
-
       - name: Show installed packages
         run: pip list
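The reworked "Downgrade NumPy" step now fires only for Windows runners on the torch 2.3.x series. As an illustration, the workflow's GitHub Actions condition `startsWith(matrix.os, 'windows') && startsWith(matrix.torch_version, '2.3.')` behaves like this sketch (the helper name is ours, not part of the workflow):

```python
# Sketch of the workflow's new gating condition for the "Downgrade NumPy" step.
# The helper name is illustrative; the workflow uses GitHub Actions expressions.
def needs_numpy_downgrade(os_name: str, torch_version: str) -> bool:
    # Mirrors: startsWith(matrix.os, 'windows') && startsWith(matrix.torch_version, '2.3.')
    return os_name.startswith("windows") and torch_version.startswith("2.3.")

print(needs_numpy_downgrade("windows-2025", "2.3.1"))  # True: pin numpy<2
print(needs_numpy_downgrade("ubuntu-22.04", "2.3.1"))  # False
print(needs_numpy_downgrade("windows-2025", "2.8.0"))  # False
```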

README.md

Lines changed: 1 addition & 1 deletion
@@ -20,7 +20,7 @@ The library includes quantization primitives for 8-bit & 4-bit operations, throu
 bitsandbytes has the following minimum requirements for all platforms:

 * Python 3.9+
-* [PyTorch](https://pytorch.org/get-started/locally/) 2.2+
+* [PyTorch](https://pytorch.org/get-started/locally/) 2.3+
   * _Note: While we aim to provide wide backwards compatibility, we recommend using the latest version of PyTorch for the best experience._

 #### Accelerator support:

bitsandbytes/autograd/_functions.py

Lines changed: 1 addition & 5 deletions
@@ -84,11 +84,7 @@ def get_inverse_transform_indices(
     return permuted_tile_indices


-# torch.compiler.is_compiling() is available only in torch >= 2.3
-if hasattr(torch.compiler, "is_compiling"):
-    _is_compiling = torch.compiler.is_compiling
-else:
-    _is_compiling = torch._dynamo.is_compiling
+_is_compiling = torch.compiler.is_compiling


 @deprecated(

bitsandbytes/triton/triton_utils.py

Lines changed: 2 additions & 5 deletions
@@ -4,11 +4,8 @@
 @functools.lru_cache(None)
 def is_triton_available():
     try:
-        # torch>=2.2.0
         from torch.utils._triton import has_triton, has_triton_package

         return has_triton_package() and has_triton()
-    except ImportError:
-        from torch._inductor.utils import has_triton
-
-        return has_triton()
+    except Exception:
+        return False
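The simplified helper treats any failure as "Triton unavailable". A self-contained sketch of the resulting function, runnable even on machines without torch or triton installed:

```python
import functools


@functools.lru_cache(None)
def is_triton_available():
    # With torch>=2.3 the torch.utils._triton helpers always exist, so the
    # old fallback to torch._inductor.utils is gone; any failure (torch or
    # triton missing, broken install) now simply reports "not available".
    try:
        from torch.utils._triton import has_triton, has_triton_package

        return has_triton_package() and has_triton()
    except Exception:
        return False


print(is_triton_available())
```

Because of `functools.lru_cache`, the (possibly slow) probe runs once per process and the result is reused on later calls.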

pyproject.toml

Lines changed: 1 addition & 1 deletion
@@ -42,7 +42,7 @@ classifiers = [
     "Topic :: Scientific/Engineering :: Artificial Intelligence"
 ]
 dependencies = [
-    "torch>=2.2,<3",
+    "torch>=2.3,<3",
     "numpy>=1.17",
     "packaging>=20.9"
 ]
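The tightened requirement `torch>=2.3,<3` can be checked with the `packaging` library (already a declared dependency above); a minimal sketch:

```python
from packaging.specifiers import SpecifierSet

# The new dependency floor from pyproject.toml.
spec = SpecifierSet(">=2.3,<3")

print("2.2.2" in spec)  # False: torch 2.2.x is no longer supported
print("2.3.1" in spec)  # True: the new minimum series
print("2.8.0" in spec)  # True
```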

tests/test_functional.py

Lines changed: 0 additions & 3 deletions
@@ -1413,9 +1413,6 @@ def test_gemv_4bit(self, device, dim, dtype, storage_type, quant_storage, double
         reason="this test is not supported on ROCm with gfx90a architecture yet",
     )
     def test_gemv_eye_4bit(self, device, storage_type, dtype):
-        if device == "cpu" and dtype == torch.bfloat16 and torch.__version__ < (2, 3):
-            pytest.skip("eye doe not support bfloat16 on CPU in torch < 2.3")
-
         if device == "hpu" and not is_supported_on_hpu(storage_type, dtype):
             pytest.skip("This configuration is not supported on HPU.")
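The deleted skip guarded against torch versions older than 2.3; with the minimum now at 2.3, it could never fire. For illustration, an equivalent version test using `packaging` (the helper name is hypothetical, not from the codebase):

```python
from packaging.version import Version

# Hypothetical re-creation of the removed guard's version test: the old skip
# applied to the bfloat16-on-CPU case only when torch predated 2.3.
def predates_torch_2_3(torch_version: str) -> bool:
    return Version(torch_version) < Version("2.3")

print(predates_torch_2_3("2.2.2"))  # True: the old skip would have triggered
print(predates_torch_2_3("2.3.1"))  # False: the guard is dead code now
```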

0 commit comments
