Nomad 1.1.3 Issues with Handling Namespaces #11002
I was unable to reproduce the bug with either 1.1.3 OSS or Enterprise with the following jobspec:

```hcl
job "example" {
  datacenters = ["dc1"]
  namespace   = "foo"

  group "cache" {
    network {
      port "db" {
        to = 6379
      }
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        ports = ["db"]
      }

      resources {
        cpu    = 500
        memory = 256
      }
    }
  }
}
```

I created the `foo` namespace first. ❓ Are you submitting the job via the CLI or the API?
In hindsight, maybe one or two ways of specifying the namespace would have been better than three. 😬
I also helped out with the failure this caused, so I can provide a bit more information:

The API. Specifically, we have a Go chatbot that submits the job via:

```go
// Errors elided for brevity.
spec, _ := jobspec.Parse(strings.NewReader(jobSpecRaw))
plan, _, _ := client.Jobs().Plan(spec, true, nil)
_, _, _ = client.Jobs().EnforceRegister(spec, plan.JobModifyIndex, nil)
```

At the time of the incident, the

Unfortunately, like you, I have not been able to reproduce the problem outside of the full system, so at least some of the elided details are significant, but I'm not sure which ones. We can share the source code for that chatbot with you privately, and I can keep hacking on a more minimal reproduction of the issue.
So the query parameter then takes precedence over whatever is in the jobspec due to https://github.com/hashicorp/nomad/pull/10875/files#diff-56b3c82fcbc857f8fb93a903f1610f6e6859b3610a4eddf92bad9ea27fdc85ecR782

This follows the behavior of region as well, but it is definitely a backward compatibility issue! We should absolutely list it in the changelog and the upgrade docs, and we do not: https://www.nomadproject.io/docs/upgrade/upgrade-specific

If this explanation makes sense, I'll treat this as a documentation issue and get the changelog and upgrade guide fixed up ASAP.
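The precedence described above can be sketched as follows. This is an illustration of the observed behavior, not Nomad's actual server code; the function name is mine:

```go
package main

import "fmt"

// effectiveNamespace illustrates the precedence described above: a
// namespace supplied as an API query parameter wins over the one in
// the jobspec, and if neither is set, Nomad falls back to "default".
// This is a sketch of the observed behavior, not Nomad's actual code.
func effectiveNamespace(queryParam, jobspecNS string) string {
	if queryParam != "" {
		return queryParam
	}
	if jobspecNS != "" {
		return jobspecNS
	}
	return "default"
}

func main() {
	// The chatbot's client sent a namespace query parameter, so the
	// jobspec's own namespace field was silently ignored.
	fmt.Println(effectiveNamespace("default", "foo")) // query param wins
	fmt.Println(effectiveNamespace("", "foo"))        // jobspec namespace used
}
```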
I used this little program with the above job file to exercise the behavior: https://gist.github.com/schmichael/e03fe40871edb64dacd6da9f7db4a152
Aha! The aforementioned chatbot is also running in Nomad and is in the default namespace (see https://www.nomadproject.io/docs/runtime/environment). And Nomad sets the `NOMAD_NAMESPACE` environment variable for tasks.

I think this gives us a path forward: we will simply nil the Namespace out of the chatbot's client configuration.
Thanks for the quick and thorough report! Sorry for the significant backward compat issue!
Nomad version
Nomad v1.1.3+ent
Issue
We have a Nomad job with a defined namespace, and other jobs that live in the default namespace. When we deployed the job with the defined namespace, Nomad moved it back to the default namespace, and we then had two copies of the same job running. Something in the new version seems to be overriding the namespace defined in our jobspec. We downgraded Nomad to v1.1.2 and the issue went away: we were able to deploy our job and it landed in the correct namespace. We suspect that this change caused the issue: #10875.
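The "two copies of the same job" symptom follows from Nomad identifying a job by namespace plus job ID, so the same ID in two namespaces is two distinct jobs. A minimal sketch using illustrative types (not Nomad's internal representation):

```go
package main

import "fmt"

// jobKey illustrates that a job is identified by (namespace, ID):
// "example-job" in the "default" namespace and "example-job" in its
// own namespace are distinct jobs. Illustrative types only.
type jobKey struct {
	Namespace string
	ID        string
}

func main() {
	running := map[jobKey]bool{}

	// The job was first registered into its intended namespace...
	running[jobKey{Namespace: "example-job", ID: "example-job"}] = true
	// ...and then re-registered into "default" by the misdirected
	// client, leaving two copies running side by side.
	running[jobKey{Namespace: "default", ID: "example-job"}] = true

	fmt.Println(len(running)) // 2
}
```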
Reproduction steps
Have a 1.1.3 nomad cluster running and deploy a job with a defined namespace.
Expected Result
Nomad job running properly in the defined namespace.
Actual Result
Nomad job running in the default and defined namespace.
Job file (if appropriate)
```hcl
job "example-job" {
  namespace = "example-job"
  # {etc. job def}
}
```