Nomad 1.2.x template stanza logs ConsulKV entries and data field #11594
Thanks for reporting this. We are aware of this issue and are actively working on it. The fix requires a change to the upstream go-hclog library. I'll leave this issue open for tracking purposes.
Is this the same reason I'm suddenly seeing debug logs after upgrading to 1.2.2, even though I have log_level set to info? This is a major issue, as it's dumping my raw job files, which contain secrets, into the logs, and those logs are being shipped off to our log aggregator...
Hi @mikehardenize! From your other ticket (closed as a dupe), can you share the specific log entry you're seeing?
The other card, which you closed, had more detail. Isn't the log entry that I provided enough to show the problem?
It's the "final config:" entry. The only thing related to logs in my nomad config is:
In the "...etc" part I quoted above, there is a "Templates" section. In the "Contents" part of that it includes the contents of template blocks, which is how I (and presumably others) add config files to some docker containers. And certs. And keys. |
So it's the contents of the template.data field that's your concern?
template.data is in there, yes. Which is my specific concern.
Thanks for the clarification @mikehardenize. We're definitely intending to fix this in the upcoming patch release (1.2.3). The patch is pending an update to our upstream go-hclog dependency.

Generally speaking, switching between "debug" and "info"-level logging isn't intended to be a security boundary. If your workloads need secrets, you really want to get those from Vault (or, to a lesser extent, from on-disk secrets) and not hard-code them into the jobspec.

We did have someone report this issue through the security@hashicorp.com email address (which is how we'd like to get security reports), and that reporter told us that downgrading back to 1.1.6 worked for them, so you may want to do that while waiting on the 1.2.3 patch.
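As a sketch of the Vault approach (assuming a Vault KV v2 secret at secret/data/myapp with an api_key field and a Vault policy named myapp-read; adjust to your setup), a template can read the secret at render time so the secret value never appears in the jobspec:

```hcl
task "http" {
  vault {
    policies = ["myapp-read"]
  }

  template {
    # only this template text is stored in the job; the secret value is
    # fetched from Vault when the template is rendered on the client
    data = <<EOF
{{ with secret "secret/data/myapp" }}API_KEY={{ .Data.data.api_key }}{{ end }}
EOF

    destination = "secrets/app.env"
    env         = true
  }
}
```

The template text itself can still show up in debug-level logs, but it contains only the Vault path, not the secret material.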
Your upgrade docs state that downgrading nomad is not supported nor guaranteed to work. Will my cluster continue working if I downgrade the clients and the servers? If yes, which should I do first?
Correct, it's not really a supported path nor guaranteed to work, because it's not something we've designed for. But obviously you're in a bit of a bad spot here, and looking at the changelog it doesn't appear there have been any major forwards-compatibility changes between 1.1.6 and 1.2.x. Mostly what we worry about is jobs that don't work in the earlier versions; for example, if you have jobs that rely on features new in 1.2, those won't work once you downgrade.

As far as the process goes, I would downgrade servers first and then clients, so that you don't have new servers making RPCs that include fields that clients don't expect.
@mikehardenize, just as an alternative to downgrading, I wanted to walk you through a way to update your jobspecs to avoid having sensitive data in the template text at all. Use the following client configuration to create a shared host volume (this can be mounted from a ramdisk if you'd like for hardening, or you can use a Docker volume instead if you want to avoid having host volumes):

```hcl
client {
  enabled = true

  host_volume "shared_data" {
    path = "/srv/data"
  }
}
```

We've got a file in that volume (/srv/data/test2.txt) that we don't want to show up in the logs.
With the following jobspec:

```hcl
job "example" {
  datacenters = ["dc1"]

  group "web" {

    # this could also be a docker volume accessible only to the
    # prestart task, which would prevent the http task from
    # reading arbitrary data from here
    volume "secrets" {
      type      = "host"
      source    = "shared_data"
      read_only = true
    }

    # we need to copy this over with a prestart task because
    # templates are rendered before volumes are attached
    task "prestart" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "sh"
        args    = ["-c", "cp /shared_data/test2.txt /alloc/test2.txt"]
      }

      lifecycle {
        hook    = "prestart"
        sidecar = false
      }

      volume_mount {
        volume      = "secrets"
        destination = "/shared_data"
      }
    }

    task "http" {
      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-v", "-f", "-p", "8001", "-h", "/var/www"]
      }

      # this template.data will show up in the logs
      template {
        data        = "template-data-field"
        destination = "local/test1.txt"
      }

      # the contents of this file will not
      template {
        source      = "../alloc/test2.txt"
        destination = "local/test2.txt"
      }
    }
  }
}
```

Even with Nomad set to debug logging, the template runner's logs will include the first template's data field ("template-data-field") but not the contents of the file rendered from the shared volume.
Thanks. I've known that the way we're doing secrets isn't ideal, but I thought it was good enough for the moment. We have a custom system which builds our nomad job files from templates, so the secrets are getting inlined into the job file by our deployment system. I'm already using host volumes in a couple of places, but I'm not keen on having to update the nomad config and create directories on the host just to provide volumes for particular jobs. I'm hoping to get Vault onto this system at some point, at which point I can start moving secrets there. I guarantee you I'm not the only person adding config, including secrets, to containers by having it inline in nomad job specs. It's the easy path.
Sending this over to @lgfa29, who's going to do some testing of his go-hclog PR against this issue so that we can wrap this one up.
Just checked locally and the PR I have open does fix this problem. Before:
After:
I will make sure it moves forward and raise a fix for Nomad as well.
Thanks @lgfa29!
I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
Nomad version
Nomad v1.2.2 (78b8c17)
Operating system and Environment details
Nomad runs on Ubuntu 20.04 LTS, installed alongside Consul 1.10.4 on the instances.
Nomad log-level=INFO
Issue
We run Nomad v1.1.6 in production, and after updating to v1.2.0/1.2.2 we started to see a lot of ConsulKV template log lines.
Reproduction steps
Upgrade nomad from 1.1.6 -> 1.2.0/1.2.2
Expected Result
Normal upgrade log lines; we don't want to see these lines in the logs. Maybe show them ONLY when the Nomad log level is TRACE.
Actual Result
Nomad jobs with a template stanza started to log lines from ConsulKV, e.g.:
Job file (if appropriate)
Nomad Server logs (if appropriate)
Posted a few lines above
Nomad Client logs (if appropriate)
Posted a few lines above