
Commit fdd337d

Anurag Guda and francisguillier authored

25.7.2 Release (#131)

* 25.7.2 Release

---------

Co-authored-by: Anurag Guda <[email protected]>
Co-authored-by: francisguillier <[email protected]>
1 parent c916e5f commit fdd337d

19 files changed: +387 −145 lines changed

README.md

Lines changed: 11 additions & 6 deletions
@@ -26,17 +26,17 @@ When NVIDIA Cloud Native Stack batch is released, the previous batch enters main
 
 | Batch | Status |
 | :-----: | :--------------:|
-| [25.7.1](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.7.1) | Generally Available |
-| [25.7.0](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.7.0) | Maintenance |
-| [25.4.0](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.4.0) | EOL |
+| [25.7.2](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.7.2) | Generally Available |
+| [25.7.1](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.7.1) | Maintenance |
+| [25.7.0](https://github.com/NVIDIA/cloud-native-stack/releases/tag/v25.7.0) | EOL |
 
 `NOTE:` CNS Version 15.0 and above is Now supports Ubuntu 24.04
 
 For more information, Refer [Cloud Native Stack Releases](https://github.com/NVIDIA/cloud-native-stack/releases)
 
 ## Component Matrix
 
-#### Cloud Native Stack Batch 25.7.1 (Release Date: 27 August 2025)
+#### Cloud Native Stack Batch 25.7.2 (Release Date: 2nd October 2025)
 
 | CNS Version | 16.0 | 15.1 | 14.2 |
 | :-----: | :-----: | :------: | :------: |
@@ -47,11 +47,11 @@ For more information, Refer [Cloud Native Stack Releases](https://github.com/NVI
 | CRI-O | 1.33.2 | 1.32.6 | 1.31.10 |
 | Kubernetes | 1.33.2 | 1.32.6 | 1.31.10 |
 | CNI (Calico) | 3.30.2 | 3.30.2 | 3.30.2 |
-| NVIDIA GPU Operator | 25.3.2 | 25.3.2 | 25.3.2 |
+| NVIDIA GPU Operator | 25.3.4 | 25.3.4 | 25.3.4 |
 | NVIDIA Network Operator | N/A | 25.4.0 | 25.4.0 |
 | NVIDIA NIM Operator | 2.0.1 | 2.0.1 | 2.0.1 |
 | NVIDIA Nsight Operator | 1.1.2 | 1.1.2 | 1.1.2 |
-| NVIDIA Data Center Driver | 580.65.06 | 580.65.06 | 580.65.06 |
+| NVIDIA Data Center Driver | 580.82.07 | 580.82.07 | 580.82.07 |
 | Helm | 3.18.3 | 3.18.3 | 3.18.3 |
 
 > NOTE: NVIDIA Network Operator is not Supported with CNS 16.0 yet
@@ -137,6 +137,11 @@ For more Information about customize the values, please refer [Installation](htt
 
 `NOTE:` (Cloud Native Stack does not allow the deployment of several control plane nodes)
 
+# Advanced Settings for NVIDIA Platforms
+
+- [Optimizations for NVIDIA GB200 NVL72](./optimizations/GB200-NVL72.md)
+
+
 # Troubleshooting
 
 [Troubleshoot CNS installation issues](https://github.com/NVIDIA/cloud-native-stack/blob/master/troubleshooting/README.md)
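With 25.7.2 now Generally Available, the matrix above can be checked against a live cluster. A minimal sketch, assuming `kubectl` and `helm` are installed and pointed at the CNS cluster:

```
# Compare a running cluster against the 25.7.2 component matrix
kubectl version                           # expect server 1.33.2 / 1.32.6 / 1.31.10 per CNS version
helm version --short                      # expect v3.18.3
kubectl get pods -n nvidia-gpu-operator   # GPU Operator 25.3.4 pods should be Running
```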

install-guides/RHEL-8-10_Server_x86-arm64.md

Lines changed: 15 additions & 15 deletions
@@ -5,11 +5,11 @@ This document describes how to setup the NVIDIA Cloud Native Stack collection on
 
 NVIDIA Cloud Native Stack v14.0 includes:
 - RHEL 8.10
-- Containerd 2.0.3
+- Containerd 2.1.3
 - Kubernetes version 1.31.2
 - Helm 3.17.2
-- NVIDIA GPU Operator 25.3.0
-- NVIDIA GPU Driver: 570.124.06
+- NVIDIA GPU Operator 25.3.4
+- NVIDIA GPU Driver: 580.82.07
 - NVIDIA Container Toolkit: 1.17.5
 - NVIDIA K8S Device Plugin: 0.17.1
 - NVIDIA DCGM-Exporter: 4.1.1-4.0.4
@@ -167,30 +167,30 @@ sudo sysctl --system
 Download the Containerd for `x86-64` system:
 
 ```
-wget https://github.com/containerd/containerd/releases/download/v2.0.3/containerd-2.0.3-linux-amd64.tar.gz
+wget https://github.com/containerd/containerd/releases/download/v2.1.3/containerd-2.1.3-linux-amd64.tar.gz
 ```
 
 ```
-sudo tar --no-overwrite-dir -C / -xzf containerd-2.0.3-linux-amd64.tar.gz
+sudo tar --no-overwrite-dir -C / -xzf containerd-2.1.3-linux-amd64.tar.gz
 ```
 
 ```
-rm -rf containerd-2.0.3-linux-amd64.tar.gz
+rm -rf containerd-2.1.3-linux-amd64.tar.gz
 ```
 
 
 Download the Containerd for `ARM` system:
 
 ```
-wget https://github.com/containerd/containerd/releases/download/v2.0.3/containerd-2.0.3-linux-arm64.tar.gz
+wget https://github.com/containerd/containerd/releases/download/v2.1.3/containerd-2.1.3-linux-arm64.tar.gz
 ```
 
 ```
-sudo tar --no-overwrite-dir -C / -xzf containerd-2.0.3-linux-arm64.tar.gz
+sudo tar --no-overwrite-dir -C / -xzf containerd-2.1.3-linux-arm64.tar.gz
 ```
 
 ```
-rm -rf containerd-2.0.3-linux-arm64.tar.gz
+rm -rf containerd-2.1.3-linux-arm64.tar.gz
 ```
 
 Install the Containerd
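The version bump only changes the artifact name; the download/extract/cleanup flow is unchanged for both architectures. A minimal consolidated sketch, assuming the `.sha256sum` asset that containerd publishes alongside each release tarball:

```
# Download, verify, and install containerd 2.1.3
VERSION=2.1.3; ARCH=amd64   # use ARCH=arm64 on ARM systems
wget https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-${ARCH}.tar.gz
wget https://github.com/containerd/containerd/releases/download/v${VERSION}/containerd-${VERSION}-linux-${ARCH}.tar.gz.sha256sum
sha256sum -c containerd-${VERSION}-linux-${ARCH}.tar.gz.sha256sum   # stop here if this fails
sudo tar --no-overwrite-dir -C / -xzf containerd-${VERSION}-linux-${ARCH}.tar.gz
rm -f containerd-${VERSION}-linux-${ARCH}.tar.gz*
```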
@@ -655,7 +655,7 @@ Install GPU Operator:
 `NOTE:` If you installed Network Operator, please skip the below command and follow the [GPU Operator with RDMA](#GPU-Operator-with-RDMA)
 
 ```
-helm install --version 25.3.0 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --wait --generate-name
 ```
 
 #### GPU Operator with RDMA
@@ -666,15 +666,15 @@ Install GPU Operator:
 After Network Operator installation is completed, execute the below command to install the GPU Operator to load nv_peer_mem modules:
 
 ```
-helm install --version 25.3.0 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
 ```
 
 #### GPU Operator with Host MOFED Driver and RDMA
 
 If the host is already installed MOFED driver without network operator, execute the below command to install the GPU Operator to load nv_peer_mem module
 
 ```
-helm install --version 25.3.0 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
 ```
 
@@ -683,7 +683,7 @@ If the host is already installed MOFED driver without network operator, execute
 Execute the below command to enable the GPU Direct Storage Driver on GPU Operator
 
 ```
-helm install --version 25.3.0 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
 ```
 For more information refer, [GPU Direct Storage](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/gpu-operator-rdma.html)
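Whichever install variant is used above, the result can be verified the same way — a minimal sketch using commands that already appear in these guides:

```
# All operator pods should reach Running/Completed
kubectl get pods -n nvidia-gpu-operator

# The CHART column should now show gpu-operator-25.3.4
helm ls -n nvidia-gpu-operator
```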

@@ -1037,7 +1037,7 @@ Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
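For a scriptable check of just the driver version (rather than eyeballing the banner), a one-line sketch using `nvidia-smi`'s query interface:

```
# Prints only the driver version per GPU; expect 580.82.07 after this release
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```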
@@ -1174,7 +1174,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.0  v23.3.2
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.4  v23.3.2
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 
install-guides/Ubuntu-22-04_Server_Developer-x86-arm64_v14.2.md

Lines changed: 6 additions & 6 deletions
@@ -8,9 +8,9 @@ NVIDIA Cloud Native Stack v14.2 includes:
 - Containerd 2.1.3
 - Kubernetes version 1.31.10
 - Helm 3.18.3
-- NVIDIA GPU Driver: 570.158.01
+- NVIDIA GPU Driver: 580.82.07
 - NVIDIA Container Toolkit: 1.17.8
-- NVIDIA GPU Operator 25.3.2
+- NVIDIA GPU Operator 25.3.4
 - NVIDIA K8S Device Plugin: 0.17.2
 - NVIDIA DCGM-Exporter: 4.2.3-4.1.3
 - NVIDIA DCGM: 4.2.3-1
@@ -96,7 +96,7 @@ Expected Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -728,7 +728,7 @@ Install GPU Operator:
 `NOTE:` As we are preinstalled with NVIDIA Driver and NVIDIA Container Toolkit, we need to set as `false` when installing the GPU Operator
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
 ```
 
 #### Validating the State of the GPU Operator:
@@ -804,7 +804,7 @@ Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -945,7 +945,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  v25.3.2
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.4  v25.3.4
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 ```

install-guides/Ubuntu-22-04_Server_x86-arm64_v14.2.md

Lines changed: 8 additions & 8 deletions
@@ -8,8 +8,8 @@ NVIDIA Cloud Native Stack v14.2 includes:
 - Containerd 2.1.3
 - Kubernetes version 1.31.10
 - Helm 3.18.3
-- NVIDIA GPU Operator 25.3.2
-- NVIDIA GPU Driver: 570.158.01
+- NVIDIA GPU Operator 25.3.4
+- NVIDIA GPU Driver: 580.82.07
 - NVIDIA Container Toolkit: 1.17.8
 - NVIDIA K8S Device Plugin: 0.17.2
 - NVIDIA DCGM-Exporter: 4.2.3-4.1.3
@@ -606,7 +606,7 @@ Install GPU Operator:
 `NOTE:` If you installed Network Operator, please skip the below command and follow the [GPU Operator with RDMA](#GPU-Operator-with-RDMA)
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.version=570.124.06 --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.version=580.82.07 --wait --generate-name
 ```
 
 #### GPU Operator with RDMA
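This guide pins the driver branch explicitly with `--set driver.version=580.82.07`. To see what the 25.3.4 chart would default to before overriding it, a small sketch (assumes the `nvidia` Helm repo is already added, as earlier in the guide):

```
helm repo update
# Inspect the chart's default values, including the default driver version
helm show values nvidia/gpu-operator --version 25.3.4 | grep -n 'version:' | head
```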
@@ -617,15 +617,15 @@ helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator
 After Network Operator installation is completed, execute the below command to install the GPU Operator to load nv_peer_mem modules:
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
 ```
 
 #### GPU Operator with Host MOFED Driver and RDMA
 
 If the host is already installed MOFED driver without network operator, execute the below command to install the GPU Operator to load nv_peer_mem module
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
 ```
 
@@ -634,7 +634,7 @@ If the host is already installed MOFED driver without network operator, execute
 Execute the below command to enable the GPU Direct Storage Driver on GPU Operator
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
 ```
 For more information refer, [GPU Direct Storage](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/gpu-operator-rdma.html)
 
@@ -988,7 +988,7 @@ Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -1124,7 +1124,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  25.3.2
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.4  25.3.4
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 

install-guides/Ubuntu-24-04_Server_Developer-x86-arm64_v15.1.md

Lines changed: 6 additions & 6 deletions
@@ -8,9 +8,9 @@ NVIDIA Cloud Native Stack v15.1 includes:
 - Containerd 2.1.3
 - Kubernetes version 1.32.6
 - Helm 3.18.3
-- NVIDIA GPU Driver: 570.158.01
+- NVIDIA GPU Driver: 580.82.07
 - NVIDIA Container Toolkit: 1.17.8
-- NVIDIA GPU Operator 25.3.2
+- NVIDIA GPU Operator 25.3.4
 - NVIDIA K8S Device Plugin: 0.17.2
 - NVIDIA DCGM-Exporter: 4.2.3-4.1.3
 - NVIDIA DCGM: 4.2.3-1
@@ -95,7 +95,7 @@ Expected Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -727,7 +727,7 @@ Install GPU Operator:
 `NOTE:` As we are preinstalled with NVIDIA Driver and NVIDIA Container Toolkit, we need to set as `false` when installing the GPU Operator
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
 ```
 
 #### Validating the State of the GPU Operator:
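Because the developer guides disable the operator-managed driver and toolkit, both must already be working on the host before this `helm install`. A quick pre-flight sketch:

```
# Host driver must match the guide (expect Driver Version: 580.82.07)
nvidia-smi | head -n 4
# NVIDIA Container Toolkit CLI must be present (expect v1.17.8)
nvidia-ctk --version
```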
@@ -803,7 +803,7 @@ Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -944,7 +944,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  v25.3.2
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.4  v25.3.4
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 ```

install-guides/Ubuntu-24-04_Server_Developer-x86-arm64_v16.0.md

Lines changed: 6 additions & 6 deletions
@@ -8,9 +8,9 @@ NVIDIA Cloud Native Stack v16.0 includes:
 - Containerd 2.1.3
 - Kubernetes version 1.33.2
 - Helm 3.18.3
-- NVIDIA GPU Driver: 570.158.01
+- NVIDIA GPU Driver: 580.82.07
 - NVIDIA Container Toolkit: 1.17.8
-- NVIDIA GPU Operator 25.3.2
+- NVIDIA GPU Operator 25.3.4
 - NVIDIA K8S Device Plugin: 0.17.2
 - NVIDIA DCGM-Exporter: 4.2.3-4.1.3
 - NVIDIA DCGM: 4.2.3-1
@@ -95,7 +95,7 @@ Expected Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -727,7 +727,7 @@ Install GPU Operator:
 `NOTE:` As we are preinstalled with NVIDIA Driver and NVIDIA Container Toolkit, we need to set as `false` when installing the GPU Operator
 
 ```
-helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
+helm install --version 25.3.4 --create-namespace --namespace nvidia-gpu-operator --devel nvidia/gpu-operator --set driver.enabled=false,toolkit.enabled=false --wait --generate-name
 ```
 
 #### Validating the State of the GPU Operator:
@@ -803,7 +803,7 @@ Output:
 ```
 Mon Mar 31 20:39:28 2025
 +-----------------------------------------------------------------------------------------+
-| NVIDIA-SMI 570.124.06             Driver Version: 570.124.06     CUDA Version: 12.8     |
+| NVIDIA-SMI 580.82.07              Driver Version: 580.82.07      CUDA Version: 12.8     |
 |-----------------------------------------+------------------------+----------------------+
 | GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
 | Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
@@ -944,7 +944,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  v25.3.2
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.4  v25.3.4
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 ```
