
Nomad 1.2.x template stanza logs ConsulKV entries and data field #11594

Closed
mnuic opened this issue Dec 1, 2021 · 15 comments · Fixed by #11838
Labels: stage/accepted (Confirmed, and intend to work on. No timeline commitment though.), theme/template, type/bug
Milestone: 1.2.4

Comments

@mnuic

mnuic commented Dec 1, 2021

Nomad version

Nomad v1.2.2 (78b8c17)

Operating system and Environment details

Nomad runs on Ubuntu 20.04 LTS, installed alongside Consul 1.10.4 on the instances.

Nomad log_level = INFO

Issue

We run Nomad v1.1.6 in production, and after upgrading to v1.2.0/1.2.2 we started to see a lot of Consul KV template log lines.

Reproduction steps

Upgrade nomad from 1.1.6 -> 1.2.0/1.2.2

Expected Result

Normal upgrade log lines; we don't want to see these lines in the logs. Ideally they would only appear when the Nomad log level is TRACE.

Actual Result

Nomad jobs with a template stanza started to log lines from Consul KV, e.g.:

"app":"nomad","time":"2021-11-17T12:14:42+00:00","msg":"agent: 2021/11/17 12:14:42.956627 [TRACE] kv.block(A/something): returned \"SOMETHING""}
"app":"nomad","time":"2021-11-17T12:14:42+00:00","msg":"agent: 2021/11/17 12:14:42.956665 [TRACE] (view) kv.block(B/something) marking successful data response"}
"app":"nomad","time":"2021-11-17T12:14:42+00:00","msg":"agent: 2021/11/17 12:14:42.956678 [TRACE] (view) kv.block(C/config) no new data (index was the same)"}
"app":"nomad","time":"2021-11-17T12:14:42+00:00","msg":"agent: 2021/11/17 12:14:42.956696 [TRACE] (view) kv.block(A/something) successful contact, resetting retries"} 

Job file (if appropriate)

template {
  data = <<EOH
[default]
A = {{key "A/something_config"}}
B = {{key "B/something_config"}}
C = {{key "C/something"}}
EOH
  destination = "file/env"
}

Nomad Server logs (if appropriate)

Posted a few lines above.

Nomad Client logs (if appropriate)

Posted a few lines above.

@mnuic mnuic added the type/bug label Dec 1, 2021
@DerekStrickland DerekStrickland self-assigned this Dec 1, 2021
@DerekStrickland
Contributor

Thanks for reporting this.

We are aware of this issue and are actively working on it. The fix requires a change upstream in go-hclog, and we have submitted a PR there to address it. Once that is merged, we will need to update the dependency in Nomad.

I'll leave this issue open for tracking purposes.

@DerekStrickland DerekStrickland removed their assignment Dec 1, 2021
@mikehardenize

Is this the same reason I'm suddenly seeing debug logs after upgrading to 1.2.2, even though I have log_level set to info? This is a major issue, as it's dumping my raw job files, which contain secrets, into the logs, and those logs are shipped off to our log aggregator...

@tgross
Member

tgross commented Dec 3, 2021

Hi @mikehardenize! From your other ticket (closed as a duplicate), you can see the (runner) marker, which shows this is indeed the same problem. Can you clarify the specific source of the contents that are being dumped in the logs? In our testing, the Consul KV values were being included, which is obviously not ideal and is a bug we're going to fix, but they shouldn't contain secrets.

@mikehardenize

mikehardenize commented Dec 3, 2021

The other ticket, which you closed, had more detail. Isn't the log entry that I provided enough to show the problem?

[INFO] agent: 2021/12/03 11:03:18.663823 [DEBUG] (runner) final config: ...etc

It's the "final config:" entry. The only log-related setting in my Nomad config is:

log_level  = "INFO"

In the "...etc" part I quoted above, there is a "Templates" section. Its "Contents" field includes the contents of my template blocks, which is how I (and presumably others) add config files to some Docker containers. And certs. And keys.

@tgross
Member

tgross commented Dec 3, 2021

So it's the contents of the template.data or template.source block specifically?

@mikehardenize

template.data is in there, yes. That is my specific concern.

@tgross tgross changed the title from "Nomad 1.2.x template stanza logs all ConsulKV entries" to "Nomad 1.2.x template stanza logs ConsulKV entries and data field" Dec 3, 2021
@tgross
Member

tgross commented Dec 3, 2021

Thanks for the clarification, @mikehardenize. We definitely intend to fix this in the upcoming patch release (1.2.3). The patch is pending an update to our hclog library (hashicorp/go-hclog#101).

Generally speaking, switching between "debug"- and "info"-level logging isn't intended to be a security boundary. If your workloads need secrets, you really want to get those from Vault (or, to a lesser extent, from on-disk secrets) rather than hard-coding them into the jobspec. We did have someone report this issue through the security@hashicorp.com email address (which is how we'd like to receive security reports), and that reporter told us that downgrading back to 1.1.6 worked for them, so you may want to do that while waiting on the 1.2.3 patch.
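
As a rough illustration of that advice, a template block can fetch the secret from Vault at render time instead of carrying it inline. This is only a minimal sketch, assuming your cluster has Vault integration configured, a KV v2 secret at secret/data/myapp with a password field, and a hypothetical myapp-read Vault policy; adjust paths and policies to your environment:

task "http" {
  # ...

  vault {
    policies = ["myapp-read"]  # hypothetical policy granting read access to the secret
  }

  template {
    # only this template text (not the secret value) appears in the jobspec or
    # in the runner's config dump; the value is fetched from Vault at render time
    data        = <<EOH
{{ with secret "secret/data/myapp" }}PASSWORD={{ .Data.data.password }}{{ end }}
EOH
    destination = "secrets/app.env"
    env         = true
  }
}

With this pattern the rendered file lands in the task's secrets/ directory and, via env = true, is exposed to the task as environment variables, while the jobspec itself contains no secret material.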

@mikehardenize

Your upgrade docs state that downgrading Nomad is neither supported nor guaranteed to work. Will my cluster continue working if I downgrade the clients and the servers? If so, which should I do first?

@tgross
Member

tgross commented Dec 3, 2021

Correct, it's not really a supported path, nor guaranteed to work, because it's not something we've designed for. But obviously you're in a bit of a bad spot here, and looking at the changelog it appears there haven't been any major forward-compatibility changes between 1.1.6 and 1.2.x. What we mostly worry about is jobs that don't work in the earlier versions. For example, if you have jobs that use the new sysbatch scheduler (a minimal example is sketched below), they'll start failing new dispatches if you try to downgrade. You'll want to compare your fleet of jobs against the changelog for 1.1.6 -> 1.2.x. But if you upgraded and immediately noticed this without changing jobs, it's likely everything will "just work" with no problem. If your fleet of jobs isn't that large, it may even be less work for you to move the secrets out of template.data.
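
For concreteness, this is the kind of jobspec that would block a clean downgrade: the sysbatch job type was introduced in Nomad 1.2.0, so a 1.1.x scheduler won't accept new dispatches of it. A minimal, hypothetical example (job and task names are made up):

job "cleanup" {
  datacenters = ["dc1"]

  # "sysbatch" is new in Nomad 1.2.0; 1.1.x schedulers reject this job type
  type = "sysbatch"

  group "cleanup" {
    task "prune" {
      driver = "docker"
      config {
        image   = "busybox:1"
        command = "sh"
        args    = ["-c", "echo pruning temporary files"]
      }
    }
  }
}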

As far as the process goes, I would downgrade servers first and then clients, so that you don't have new servers making RPCs that include fields that clients don't expect.

@tgross
Member

tgross commented Dec 3, 2021

@mikehardenize, just as an alternative to downgrading, I wanted to walk you through a way to update your jobspecs so that sensitive data doesn't appear in the template text at all.

Start with the following client configuration to create a shared host volume (this can be backed by a ramdisk if you'd like extra hardening, or you can use a Docker volume instead if you want to avoid host volumes; a sketch of that variant follows the logs below):

client {
  enabled    = true

  host_volume "shared_data" {
    path = "/srv/data"
  }
}

We've got a file in there that we don't want to show up in the logs:

$ cat /srv/data/test2.txt
super-secret-data-xyzzy

With the following jobspec:

job "example" {
  datacenters = ["dc1"]

  group "web" {

    # this could also be a docker volume accessible only to the
    # prestart task, which would prevent the http task from
    # reading arbitrary data from here
    volume "secrets" {
      type      = "host"
      source    = "shared_data"
      read_only = true
    }

    # we need to copy this over with a prestart task because
    # templates are rendered before volumes are attached
    task "prestart" {
      driver = "docker"
      config {
        image   = "busybox:1"
        command = "sh"
        args    = ["-c", "cp /shared_data/test2.txt /alloc/test2.txt"]
      }
      lifecycle {
        hook    = "prestart"
        sidecar = false
      }
      volume_mount {
        volume      = "secrets"
        destination = "/shared_data"
      }
    }

    task "http" {

      driver = "docker"

      config {
        image   = "busybox:1"
        command = "httpd"
        args    = ["-v", "-f", "-p", "8001", "-h", "/var/www"]
      }

      # this template.data will show up in the logs
      template {
        data        = "template-data-field"
        destination = "local/test1.txt"
      }

      # the contents of this file will not
      template {
        source      = "../alloc/test2.txt"
        destination = "local/test2.txt"
      }

    }
  }
}

That'll result in the following logs from the runner, even if Nomad is set to debug logging. You'll see the template-data-field text but not super-secret-data-xyzzy:

    2021-12-03T14:39:26.151Z [INFO]  agent: 2021/12/03 14:39:26.151682 [INFO] (runner) creating new runner (dry: false, once: false)
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152212 [DEBUG] (runner) final config: {"Consul":{"Address":"127.0.0.1:8500","Namespace":"","Auth":{"Enabled":false,"Username":"","Password":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":"","Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":""},"Templates":[{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"template-data-field","CreateDestDirs":true,"Destination":"/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test1.txt","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin"],"SandboxPath":"/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http"},{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"","CreateDestDirs":true,"Destination":"/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test2.txt","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/alloc/test2.txt","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin"],"SandboxPath":"/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http"}],"Vault":{"Address":"","Enabled":false,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"BlockQueryWaitTime":60000000000}
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152283 [INFO] (runner) creating watcher
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152429 [INFO] (runner) starting
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152747 [DEBUG] (runner) running initial templates
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152757 [DEBUG] (runner) initiating run
    2021-12-03T14:39:26.152Z [INFO]  agent: 2021/12/03 14:39:26.152764 [DEBUG] (runner) checking template b105373caf72f3f63a60b43153e98ac6
    2021-12-03T14:39:26.153Z [INFO]  agent: 2021/12/03 14:39:26.153111 [DEBUG] (runner) rendering "(dynamic)" => "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test1.txt"
    2021-12-03T14:39:26.164Z [INFO]  agent: 2021/12/03 14:39:26.164615 [INFO] (runner) rendered "(dynamic)" => "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test1.txt"
    2021-12-03T14:39:26.164Z [INFO]  agent: 2021/12/03 14:39:26.164700 [DEBUG] (runner) checking template 8172605edb4ccdc1f19c96eaa4de94a7
    2021-12-03T14:39:26.164Z [INFO]  agent: 2021/12/03 14:39:26.164962 [DEBUG] (runner) rendering "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/alloc/test2.txt" => "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test2.txt"
    2021-12-03T14:39:26.167Z [INFO]  agent: 2021/12/03 14:39:26.167095 [INFO] (runner) rendered "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/alloc/test2.txt" => "/tmp/NomadClient3796884855/a53a692e-16dc-0fab-1be8-8e6bcd0514f8/http/local/test2.txt"
    2021-12-03T14:39:26.167Z [INFO]  agent: 2021/12/03 14:39:26.167127 [DEBUG] (runner) diffing and updating dependencies
    2021-12-03T14:39:26.167Z [INFO]  agent: 2021/12/03 14:39:26.167138 [DEBUG] (runner) watching 0 dependencies
    2021-12-03T14:39:26.167Z [INFO]  agent: 2021/12/03 14:39:26.167143 [DEBUG] (runner) all templates rendered
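
For completeness, the Docker-volume variant mentioned above might look roughly like the following in the prestart task, replacing the group-level volume and volume_mount. This is only a sketch and assumes a pre-existing Docker named volume (here called secrets_data) that has been populated out of band:

task "prestart" {
  driver = "docker"

  config {
    image   = "busybox:1"
    command = "sh"
    args    = ["-c", "cp /shared_data/test2.txt /alloc/test2.txt"]

    # mount a pre-populated Docker named volume into this task only,
    # so the http task never sees the raw secret file
    mount {
      type     = "volume"
      source   = "secrets_data"  # hypothetical volume name
      target   = "/shared_data"
      readonly = true
    }
  }

  lifecycle {
    hook    = "prestart"
    sidecar = false
  }
}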

@mikehardenize

Thanks. I've known that the way we're doing secrets isn't ideal, but I thought it was good enough for the moment. We have a custom system which builds our nomad job files from templates, so the secrets are getting inlined into the job file by our deployment system. I'm already using host volumes in a couple of places, but I'm not keen on having to update the nomad config and create directories on the host, just to provide volumes for particular jobs. I'm hoping to get Vault onto this system at some point, at which point I can start moving secrets there.

I guarantee you I'm not the only person adding config, including secrets, to containers by having it inline in nomad job specs. It's the easy path.

@tgross tgross self-assigned this Dec 3, 2021
@tgross tgross assigned lgfa29 and unassigned tgross Dec 6, 2021
@tgross
Member

tgross commented Dec 6, 2021

Sending this over to @lgfa29, who's going to test his go-hclog PR against this issue so that we can wrap this one up.

@lgfa29
Contributor

lgfa29 commented Dec 6, 2021

Just checked locally and the PR I have open does fix this problem.

Before:

    2021-12-06T18:07:40.769-0500 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=69d2dc2e-974f-6c99-9818-745498416995 task=redis @module=logmon path=/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/alloc/logs/.redis.stdout.fifo timestamp=2021-12-06T18:07:40.769-0500
    2021-12-06T18:07:40.770-0500 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=69d2dc2e-974f-6c99-9818-745498416995 task=redis path=/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/alloc/logs/.redis.stderr.fifo @module=logmon timestamp=2021-12-06T18:07:40.770-0500
    2021-12-06T18:07:40.773-0500 [INFO]  agent: 2021/12/06 18:07:40.773864 [INFO] (runner) creating new runner (dry: false, once: false)
    2021-12-06T18:07:40.774-0500 [INFO]  agent: 2021/12/06 18:07:40.774799 [DEBUG] (runner) final config: {"Consul":{"Address":"127.0.0.1:8500","Namespace":"","Auth":{"Enabled":false,"Username":"","Password":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":17,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":"","Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":""},"Templates":[{"Backup":false,"Command":"","CommandTimeout":30000000000,"Contents":"a = {{ key \"config/a\" }}\n","CreateDestDirs":true,"Destination":"/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/redis/local/config.txt","ErrMissingKey":false,"Exec":{"Command":"","Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin"],"SandboxPath":"/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/redis"}],"Vault":{"Address":"","Enabled":false,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":17,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"BlockQueryWaitTime":60000000000}
    2021-12-06T18:07:40.777-0500 [INFO]  agent: 2021/12/06 18:07:40.777590 [INFO] (runner) creating watcher
    2021-12-06T18:07:40.779-0500 [INFO]  agent: 2021/12/06 18:07:40.779037 [INFO] (runner) starting
    2021-12-06T18:07:40.780-0500 [INFO]  agent: 2021/12/06 18:07:40.780350 [DEBUG] (runner) running initial templates
    2021-12-06T18:07:40.780-0500 [INFO]  agent: 2021/12/06 18:07:40.780395 [DEBUG] (runner) initiating run
    2021-12-06T18:07:40.780-0500 [INFO]  agent: 2021/12/06 18:07:40.780415 [DEBUG] (runner) checking template e5b83cd3bc66aab724373a9d794ddc0f
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786012 [DEBUG] (runner) missing data for 1 dependencies
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786042 [DEBUG] (runner) missing dependency: kv.block(config/a)
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786068 [DEBUG] (runner) add used dependency kv.block(config/a) to missing since isLeader but do not have a watcher
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786076 [DEBUG] (runner) was not watching 1 dependencies
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786082 [DEBUG] (watcher) adding kv.block(config/a)
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786087 [TRACE] (watcher) kv.block(config/a) starting
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786388 [DEBUG] (runner) diffing and updating dependencies
    2021-12-06T18:07:40.786-0500 [INFO]  agent: 2021/12/06 18:07:40.786401 [DEBUG] (runner) watching 1 dependencies
    2021-12-06T18:07:40.791-0500 [INFO]  agent: 2021/12/06 18:07:40.786727 [TRACE] (view) kv.block(config/a) starting fetch
    2021-12-06T18:07:40.791-0500 [INFO]  agent: 2021/12/06 18:07:40.791979 [TRACE] kv.block(config/a): GET /v1/kv/config/a?stale=true&wait=1m0s
    2021-12-06T18:07:40.797-0500 [INFO]  agent: 2021/12/06 18:07:40.797229 [TRACE] kv.block(config/a): returned "config_value"
    2021-12-06T18:07:40.797-0500 [INFO]  agent: 2021/12/06 18:07:40.797249 [TRACE] (view) kv.block(config/a) marking successful data response
    2021-12-06T18:07:40.797-0500 [INFO]  agent: 2021/12/06 18:07:40.797265 [TRACE] (view) kv.block(config/a) successful contact, resetting retries
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913288 [TRACE] (view) kv.block(config/a) received data
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913336 [TRACE] (view) kv.block(config/a) starting fetch
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913341 [DEBUG] (runner) receiving dependency kv.block(config/a)
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913354 [TRACE] kv.block(config/a): GET /v1/kv/config/a?index=31&stale=true&wait=1m0s
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913360 [DEBUG] (runner) initiating run
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913448 [DEBUG] (runner) checking template e5b83cd3bc66aab724373a9d794ddc0f
    2021-12-06T18:07:40.913-0500 [INFO]  agent: 2021/12/06 18:07:40.913722 [DEBUG] (runner) rendering "(dynamic)" => "/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/redis/local/config.txt"
    2021-12-06T18:07:40.972-0500 [INFO]  agent: 2021/12/06 18:07:40.972407 [INFO] (runner) rendered "(dynamic)" => "/private/tmp/NomadClient554318951/69d2dc2e-974f-6c99-9818-745498416995/redis/local/config.txt"
    2021-12-06T18:07:40.972-0500 [INFO]  agent: 2021/12/06 18:07:40.972438 [DEBUG] (runner) diffing and updating dependencies
    2021-12-06T18:07:40.972-0500 [INFO]  agent: 2021/12/06 18:07:40.972463 [DEBUG] (runner) kv.block(config/a) is still needed
    2021-12-06T18:07:40.972-0500 [INFO]  agent: 2021/12/06 18:07:40.972479 [DEBUG] (runner) watching 1 dependencies
    2021-12-06T18:07:40.972-0500 [INFO]  agent: 2021/12/06 18:07:40.972485 [DEBUG] (runner) all templates rendered
    2021-12-06T18:07:41.058-0500 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=f03e0bd27e8981ed3ebe9ab599db8f68b09ea223c1833cf676897eb4d42222c3
    2021-12-06T18:07:41.425-0500 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=f03e0bd27e8981ed3ebe9ab599db8f68b09ea223c1833cf676897eb4d42222c3

After:

    2021-12-06T18:04:23.040-0500 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19ab59d2-407b-bbdc-7f32-02033840fa7b task=redis @module=logmon path=/private/tmp/NomadClient3462075204/19ab59d2-407b-bbdc-7f32-02033840fa7b/alloc/logs/.redis.stdout.fifo timestamp=2021-12-06T18:04:23.040-0500
    2021-12-06T18:04:23.040-0500 [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19ab59d2-407b-bbdc-7f32-02033840fa7b task=redis @module=logmon path=/private/tmp/NomadClient3462075204/19ab59d2-407b-bbdc-7f32-02033840fa7b/alloc/logs/.redis.stderr.fifo timestamp=2021-12-06T18:04:23.040-0500
    2021-12-06T18:04:23.041-0500 [INFO]  agent: (runner) creating new runner (dry: false, once: false)
    2021-12-06T18:04:23.042-0500 [INFO]  agent: (runner) creating watcher
    2021-12-06T18:04:23.042-0500 [INFO]  agent: (runner) starting
    2021-12-06T18:04:23.176-0500 [INFO]  agent: (runner) rendered "(dynamic)" => "/private/tmp/NomadClient3462075204/19ab59d2-407b-bbdc-7f32-02033840fa7b/redis/local/config.txt"
    2021-12-06T18:04:23.241-0500 [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=dc1e1b831ae0709a3c831f9c75df9199c3e2f79e4b86c7987f93961729e6891a
    2021-12-06T18:04:23.622-0500 [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=dc1e1b831ae0709a3c831f9c75df9199c3e2f79e4b86c7987f93961729e6891a

I will make sure it moves forward and raise a fix for Nomad as well.

@tgross
Member

tgross commented Dec 7, 2021

Thanks @lgfa29!

@lgfa29 lgfa29 added the stage/accepted (Confirmed, and intend to work on. No timeline commitment though.) label Dec 20, 2021
@lgfa29 lgfa29 added this to the 1.2.4 milestone Jan 10, 2022
@github-actions

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Oct 12, 2022