I am experiencing an issue with the NVIDIA device plugin on my k3s cluster after enabling MPS mode. Since switching from the GPU Operator to the standalone nvidia-device-plugin, every time a `systemctl daemon-reload` is triggered my pods lose access to the GPUs (NVIDIA/nvidia-container-toolkit#48).
As the official issue suggests, a workaround is to edit `/etc/containerd/config.toml` and set `SystemdCgroup = false`.
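For reference, the suggested workaround would look roughly like this in a containerd v2 config; the section names and the `BinaryName` path are assumptions from a typical nvidia-container-toolkit install, not taken from this cluster:

```toml
# /etc/containerd/config.toml (sketch; exact sections depend on containerd version)
version = 2

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia]
  runtime_type = "io.containerd.runc.v2"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.nvidia.options]
    BinaryName = "/usr/bin/nvidia-container-runtime"
    # Workaround discussed in NVIDIA/nvidia-container-toolkit#48:
    SystemdCgroup = false
```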
Even after doing this, when I run `crictl info`, this is the configuration I get for the nvidia runtime:
While this is my containerd configuration:
How can I change the configuration that `crictl info` reports, so that the `SystemdCgroup` setting actually takes effect?
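Since `crictl info` emits JSON, the value the runtime actually reports can be pulled out with `jq` (the `nvidia` runtime name is an assumption); shown here against a sample document in place of live `crictl` output:

```shell
# On a live node this would be: crictl info | jq '<same path>'
# Sample JSON standing in for crictl output (hypothetical values):
sample='{"config":{"containerd":{"runtimes":{"nvidia":{"options":{"SystemdCgroup":true}}}}}}'
echo "$sample" | jq '.config.containerd.runtimes.nvidia.options.SystemdCgroup'
```

If this still prints `true` after editing the TOML, containerd is most likely not reading the file that was edited.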