@@ -8,7 +8,7 @@ NVIDIA Cloud Native Stack v14.2 includes:
 - Containerd 2.1.3
 - Kubernetes version 1.31.10
 - Helm 3.18.3
-- NVIDIA GPU Operator 25.3.1
+- NVIDIA GPU Operator 25.3.2
   - NVIDIA GPU Driver: 570.158.01
   - NVIDIA Container Toolkit: 1.17.8
   - NVIDIA K8S Device Plugin: 0.17.2
@@ -606,7 +606,7 @@ Install GPU Operator:
 `NOTE:` If you installed the Network Operator, skip the command below and follow [GPU Operator with RDMA](#GPU-Operator-with-RDMA) instead.
 
 ```
-helm install --version 25.3.1 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.version=570.124.06 --wait --generate-name
+helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.version=570.124.06 --wait --generate-name
 ```
 
 #### GPU Operator with RDMA
@@ -617,15 +617,15 @@ helm install --version 25.3.1 --create-namespace --namespace nvidia-gpu-operator
 After the Network Operator installation completes, execute the command below to install the GPU Operator and load the nv_peer_mem modules:
 
 ```
-helm install --version 25.3.1 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
+helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true --wait --generate-name
 ```
 
 #### GPU Operator with Host MOFED Driver and RDMA
 
 If the MOFED driver is already installed on the host without the Network Operator, execute the command below to install the GPU Operator and load the nv_peer_mem module:
 
 ```
-helm install --version 25.3.1 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
+helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set driver.rdma.enabled=true,driver.rdma.useHostMofed=true --wait --generate-name
 
 ```
 
@@ -634,7 +634,7 @@ If the host is already installed MOFED driver without network operator, execute
 Execute the command below to enable the GPU Direct Storage driver in the GPU Operator:
 
 ```
-helm install --version 25.3.1 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
+helm install --version 25.3.2 --create-namespace --namespace nvidia-gpu-operator nvidia/gpu-operator --set gds.enabled=true
 ```
 For more information, refer to [GPU Direct Storage](https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/gpu-operator-rdma.html)
 
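After any of the installs above, it can help to confirm that the 25.3.2 chart is the one actually deployed. A minimal sketch, assuming the sample `helm ls` line below (the release name and timestamp are illustrative); on a live cluster, pipe the real output of `helm ls -n nvidia-gpu-operator` instead:

```shell
# Parse the CHART column (second-to-last field) from a sample `helm ls` line
# to confirm the deployed chart version. The sample line is hypothetical.
sample='gpu-operator-1606173805  nvidia-gpu-operator  1  2025-03-31 20:23:28 +0000 UTC  deployed  gpu-operator-25.3.2  25.3.2'
chart=$(printf '%s\n' "$sample" | awk '{print $(NF-1)}')
echo "$chart"   # gpu-operator-25.3.2
```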
@@ -1124,7 +1124,7 @@ Execute the below commands to uninstall the GPU Operator:
 ```
 $ helm ls
 NAME                     NAMESPACE            REVISION  UPDATED                                  STATUS    CHART                APP VERSION
-gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.1  25.3.1
+gpu-operator-1606173805  nvidia-gpu-operator  1         2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  25.3.2
 
 $ helm del gpu-operator-1606173805 -n nvidia-gpu-operator
 
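The release name in the `helm del` step can also be extracted from the `helm ls` output rather than retyped. A minimal sketch using the sample line above (names are illustrative); on a live cluster you could instead run `helm del "$(helm ls -n nvidia-gpu-operator -q)" -n nvidia-gpu-operator`:

```shell
# Pull the NAME column (first field) from a sample `helm ls` line and build
# the delete command from it. The sample line is hypothetical.
sample='gpu-operator-1606173805  nvidia-gpu-operator  1  2025-03-31 20:23:28.063421701 +0000 UTC  deployed  gpu-operator-25.3.2  25.3.2'
release=$(printf '%s\n' "$sample" | awk '{print $1}')
echo "helm del $release -n nvidia-gpu-operator"
```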