This repository has been archived by the owner on Jan 11, 2023. It is now read-only.

Update NVIDIA drivers to 396 #3224

Merged: 2 commits merged on Jun 8, 2018
2 changes: 1 addition & 1 deletion docs/kubernetes/gpu.md
@@ -1,7 +1,7 @@
# Microsoft Azure Container Service Engine - Using GPUs with Kubernetes

If you created a Kubernetes cluster with one or more agent pools whose VM size is `Standard_NC*` or `Standard_NV*`, you can schedule GPU workloads on your cluster.
-The NVIDIA drivers are automatically installed on every GPU agent in your cluster, so you don't need to do that manually, unless you require a specific version of the drivers. Currently, the installed driver is version 390.30.
+The NVIDIA drivers are automatically installed on every GPU agent in your cluster, so you don't need to do that manually, unless you require a specific version of the drivers. Currently, the installed driver is version 396.26.

To make sure everything is fine, run `kubectl describe node <name-of-a-gpu-node>`. You should see the correct number of GPUs reported (this example shows 2 GPUs for an NC12 VM):
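
For reference, an illustrative excerpt of what that command can report on a two-GPU node. The exact resource name depends on the cluster setup: clusters using the NVIDIA device plugin expose `nvidia.com/gpu`, while older setups used `alpha.kubernetes.io/nvidia-gpu`.

```
Capacity:
 ...
 nvidia.com/gpu:  2
```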

2 changes: 1 addition & 1 deletion pkg/acsengine/engine.go
@@ -445,7 +445,7 @@ func isCustomVNET(a []*api.AgentPoolProfile) bool {
func getGPUDriversInstallScript(profile *api.AgentPoolProfile) string {

// latest version of the drivers. Later this parameter could be bubbled up so that users can choose specific driver versions.
dv := "390.30"
dv := "396.26"
dest := "/usr/local/nvidia"
nvidiaDockerVersion := "2.0.3"
dockerVersion := "1.13.1-1"
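
The in-code comment notes that the driver version could later be bubbled up so users can choose it themselves. A minimal sketch of what that might look like, assuming a hypothetical `GPUDriverVersion` override field on the pool profile; neither the field nor `defaultGPUDriverVersion` is an actual acs-engine API:

```go
package main

import "fmt"

// defaultGPUDriverVersion mirrors the hardcoded dv value in engine.go.
const defaultGPUDriverVersion = "396.26"

// AgentPoolProfile is a stand-in for api.AgentPoolProfile, extended with
// a hypothetical per-pool driver version override.
type AgentPoolProfile struct {
	GPUDriverVersion string
}

// gpuDriverVersion returns the pool's override if set, else the default.
func gpuDriverVersion(profile *AgentPoolProfile) string {
	if profile != nil && profile.GPUDriverVersion != "" {
		return profile.GPUDriverVersion
	}
	return defaultGPUDriverVersion
}

func main() {
	fmt.Println(gpuDriverVersion(&AgentPoolProfile{}))                           // 396.26
	fmt.Println(gpuDriverVersion(&AgentPoolProfile{GPUDriverVersion: "390.30"})) // 390.30
}
```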