`allow_privileged` must be enabled in the Nomad client configuration.
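If it is not already set, here is a minimal sketch of the relevant client configuration; the `allow_privileged` flag lives in the Docker driver plugin block (the file path is just a common convention):

```hcl
# Nomad client agent configuration (e.g. /etc/nomad.d/client.hcl)
plugin "docker" {
  config {
    # Required for CSI plugins, which run as privileged containers
    allow_privileged = true
  }
}
```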
To make the Vultr CSI plugin work, you need to run two components:
- csi-controller
- csi-node
csi-controller and csi-node use the same binary but different schedulers: csi-controller should run as a `service` job, while csi-node should run as a `system` job (one instance on every host). A sketch of the node side follows below.
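As an illustration, a minimal sketch of the csi-node side as a `system` job. The image tag is taken from the controller snippet further down; the plugin ID `vultr-csi` and the mount directory are assumptions and must match the rest of your setup:

```hcl
job "csi-node" {
  type = "system" # one allocation per client node

  group "node" {
    task "csi-node" {
      driver = "docker"

      config {
        image      = "vultr/vultr-csi:v0.5.0"
        privileged = true # node plugins need privileged mode to mount volumes
        args       = ["-endpoint=unix:///csi/csi.sock"]
      }

      csi_plugin {
        id        = "vultr-csi" # assumption: must match the controller job and volume specs
        type      = "node"
        mount_dir = "/csi"
      }
    }
  }
}
```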
For more details, see the Nomad documentation on CSI.
You will need to run a separate deployment for each Vultr region.
In order for the CSI plugin to work properly, you will need to provide an API key to csi-controller.
To obtain an API key, visit the API settings page of your Vultr account.
The API key can be passed to csi-controller securely via the Vault integration.
Example snippet:
task "csi-controller" {
driver = "docker"
vault {
policies = ["vultr"]
}
config {
image = "vultr/vultr-csi:v0.5.0"
args = [
"-endpoint=unix:///csi/csi.sock",
"-token=${VULTR_API_KEY}",
]
}
template {
data = <<-EOF
VULTR_API_KEY={{ with secret "secret/vultr/csi" }}{{ .Data.data.key }}{{ end }}
EOF
destination = "secrets/api.env"
change_mode = "restart"
env = true
}
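The task above assumes a Vault policy named `vultr` that can read the secret. A minimal sketch of such a policy, assuming the secret lives in a KV v2 engine mounted at `secret/`:

```hcl
# vultr.hcl: Vault policy granting read access to the CSI API key
# (KV v2 inserts "data/" into the logical path used by the template)
path "secret/data/vultr/csi" {
  capabilities = ["read"]
}
```

Load it with `vault policy write vultr vultr.hcl`.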
Adapt and run the example job definitions.
In the Nomad UI, open the Storage tab and make sure the plugin is healthy. You can also check from the CLI with `nomad plugin status`.
Nomad will not create volumes on demand. You need to create a volume yourself, either by hand or with Terraform, and then register it in Nomad, again either by hand with the `nomad volume create` command or with Terraform.
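As a starting point, a sketch of a volume specification for `nomad volume create`; the volume name, plugin ID, and sizes here are assumptions:

```hcl
# volume.hcl: create and register a Vultr block storage volume via the CSI plugin
id        = "data-vol"
name      = "data-vol"
type      = "csi"
plugin_id = "vultr-csi" # assumption: must match the csi_plugin id in the plugin jobs

capacity_min = "10GiB"
capacity_max = "10GiB"

capability {
  access_mode     = "single-node-writer"
  attachment_mode = "file-system"
}
```

Create it with `nomad volume create volume.hcl`.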
Adapt and use a config like the one sketched below to test.
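A sketch of such a test job, assuming the `data-vol` volume from the previous step; the busybox image is a placeholder:

```hcl
# example.nomad.hcl: mounts the CSI volume at /data
job "example" {
  group "example" {
    volume "data" {
      type            = "csi"
      source          = "data-vol"
      access_mode     = "single-node-writer"
      attachment_mode = "file-system"
    }

    task "example" {
      driver = "docker"

      config {
        image   = "busybox:1.36"
        command = "sleep"
        args    = ["3600"]
      }

      volume_mount {
        volume      = "data"
        destination = "/data"
      }
    }
  }
}
```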
To validate, run the example job and then the following commands:

```sh
nomad exec -job example touch /data/example # write a file onto the volume
nomad stop -purge example                   # stop and purge the job
nomad system gc                             # force garbage collection so the volume detaches
nomad run example.nomad.hcl                 # start a fresh allocation
nomad exec -job example ls -alh /data       # the file should still be there
```
Examples of Nomad jobs and Terraform configs can be found here.