volume_mount does not update allocation #13333
Hi @pikeas! This was a known bug prior to Nomad 1.3.1, but it should have been fixed by #13008. I just tested with the released binary and wasn't able to reproduce the problem. Here's my jobspec:

```hcl
job "httpd" {
  datacenters = ["dc1"]

  group "web" {
    volume "host_data" {
      type      = "host"
      read_only = false
      source    = "shared_data"
    }

    task "http" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-v", "-f", "-p", "8001", "-h", "/local"]
      }

      # volume_mount {
      #   volume      = "host_data"
      #   destination = "/host_data"
      #   read_only   = false
      # }

      resources {
        cpu    = 128
        memory = 128
      }
    }
  }
}
```
Then I uncommented the volume_mount block and ran the job again, and the allocation was replaced as expected.
Thanks for the quick response! I just tried with your jobspec and am having the same issue. Notes:
I believe this would be the check a few lines later at https://github.com/hashicorp/nomad/blob/main/scheduler/util.go#L563-L565:
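The linked check decides whether the old and new task definitions differ enough to require replacing the allocation. As a rough sketch of that idea only (the types and function below are simplified stand-ins, not Nomad's actual structs or scheduler code), a deep-equality comparison over the task's volume mounts would flag the added mount as an update:

```go
package main

import (
	"fmt"
	"reflect"
)

// VolumeMount is a simplified stand-in for Nomad's volume mount struct,
// carrying just the fields used in the jobspec above.
type VolumeMount struct {
	Volume      string
	Destination string
	ReadOnly    bool
}

// tasksUpdated sketches the kind of comparison the scheduler performs:
// if the volume mounts differ between the old and updated task, the task
// counts as updated and the allocation must be replaced.
func tasksUpdated(old, updated []VolumeMount) bool {
	return !reflect.DeepEqual(old, updated)
}

func main() {
	// First run: job submitted with the volume_mount block commented out.
	before := []VolumeMount{}
	// Second run: the block is uncommented, adding one mount.
	after := []VolumeMount{{Volume: "host_data", Destination: "/host_data", ReadOnly: false}}

	fmt.Println(tasksUpdated(before, before)) // false: no change, allocation kept
	fmt.Println(tasksUpdated(before, after))  // true: mount added, allocation replaced
}
```

If this comparison runs on the server as described below, the added mount should always produce a new allocation, which is why the reported behavior is surprising.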
So this should work. I'm not sure why we're seeing different behaviors. Please let me know if there are any other logs I can check.
All the scheduling decisions are made on the server, so if no new allocation is being created, I'd expect there to be some information in the server's debug-level logs. It also might help to get the
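For anyone following along, one way to surface those scheduler decisions is to raise the agent's log verbosity. This is a sketch, not part of the original thread: `log_level` is a standard Nomad agent configuration option.

```hcl
# Nomad agent configuration fragment (e.g. in the server's config file):
# raise verbosity so scheduler decisions appear in the server logs.
log_level = "DEBUG"
```

Alternatively, `nomad monitor -log-level=DEBUG` streams debug-level logs from a running agent without a restart.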
I'm going to close this issue for now, as we don't have the requested information that we'd need to debug. If you do get that info, please feel free to post here and we can re-open the issue. Thanks!
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Nomad version: 1.3.1
Operating system and Environment details: Ubuntu 22.04 LTS
Issue
Updating a job spec by adding a volume_mount does not relaunch the container with access to the volume. Stopping and starting the job does work, so it appears the scheduler does not treat the new mount as requiring a relaunch.
Reproduction steps
Run job, add a new volume_mount, run the job again to update it.
Expected Result: new volume mount is available in the container.
Actual Result: new volume mount is not available.
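For concreteness, the stanza added between the two runs mirrors the commented block in the maintainer's jobspec in this thread:

```hcl
# Added inside the task stanza between the first and second `nomad job run`.
# "host_data" refers to a group-level host volume declared in the jobspec.
volume_mount {
  volume      = "host_data"
  destination = "/host_data"
  read_only   = false
}
```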
Job file