Fetching Nomad Variables in templates is flaky #15433

Closed
ahjohannessen opened this issue Nov 30, 2022 · 26 comments

Comments

@ahjohannessen

Fetching Nomad Variables seems to be flaky; sometimes it fails.

Nomad version

Servers & Clients: 1.4.3

Operating system and Environment details

Servers: Ubuntu 20.04.5 LTS
Clients: Fedora CoreOS 36.20221030.3.0

Issue

Sometimes fetching Nomad Variables works, sometimes it fails. See the logs below for a failure case.

Reproduction steps

No reliable reproduction steps, but the log might tell a story.

Expected Result

Fetching variables works consistently, and when it does not, the error says why.

Actual Result

It is flaky: clients that have previously fetched variables randomly start failing after a while.

Job file (if appropriate)

Templates that retrieve data from nomad servers:

      template {
        destination = "${NOMAD_SECRETS_DIR}/nomad-variables.env"
        env         = true
        change_mode = "noop"
        data        = <<EOT
  {{- if nomadVarExists "nomad/jobs/backup-verification/verify/fetch-data" }}
    {{- with nomadVar "nomad/jobs/backup-verification/verify/fetch-data" }}
      {{- range .Tuples }}
{{ .K | toUpper }}={{ .V | toJSON }}
      {{- end }}
    {{- end }}
  {{- end }}
EOT
      }

and

      template {
        destination = "${NOMAD_SECRETS_DIR}/nomad-variables.env"
        env         = true
        change_mode = "noop"
        data        = <<EOT
{{- range nomadVarListSafe "nomad/jobs" }}
  {{- if nomadVarExists .Path }}
    {{- with nomadVar .Path }}
      {{- range .Tuples }}
{{ .K | toUpper }}={{ .V | toJSON }}
      {{- end }}
    {{- end }}
  {{- end }}
{{- end }}
EOT
      } 

Nomad Client logs (if appropriate)

Nov 29 07:05:07 sn04 nomad[1001]:     2022-11-29T07:05:07.887Z [INFO]  agent: (runner) stopping
Nov 29 07:05:07 sn04 nomad[1001]:     2022-11-29T07:05:07.887Z [INFO]  client.alloc_runner: waiting for task to exit: alloc_id=be66454a-7ee5-65b3-e34b-f72e0ebaab87 task=main
Nov 29 07:05:07 sn04 nomad[1001]:     2022-11-29T07:05:07.887Z [INFO]  client.gc: marking allocation for GC: alloc_id=be66454a-7ee5-65b3-e34b-f72e0ebaab87
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.408Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 task=fetch-data @module=logmon path=/var/nomad/alloc/953e524a-5122-69bb-3155-63f8cef1a878/alloc/logs/.fetch-data.stdout.fifo timestamp=2022-11-29T07:48:00.408Z
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.409Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 task=fetch-data path=/var/nomad/alloc/953e524a-5122-69bb-3155-63f8cef1a878/alloc/logs/.fetch-data.stderr.fifo @module=logmon timestamp=2022-11-29T07:48:00.408Z
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.421Z [INFO]  agent: (runner) creating new runner (dry: false, once: false)
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.421Z [INFO]  agent: (runner) creating watcher
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.421Z [INFO]  agent: (runner) starting
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.426Z [INFO]  agent: (runner) rendered "(dynamic)" => "/var/nomad/alloc/953e524a-5122-69bb-3155-63f8cef1a878/fetch-data/local/run.sh"
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.427Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 1 after "250ms")
Nov 29 07:48:00 sn04 nomad[1001]:     2022-11-29T07:48:00.682Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 2 after "500ms")
Nov 29 07:48:01 sn04 nomad[1001]:     2022-11-29T07:48:01.183Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 3 after "1s")
Nov 29 07:48:02 sn04 nomad[1001]:     2022-11-29T07:48:02.186Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 4 after "2s")
Nov 29 07:48:04 sn04 nomad[1001]:     2022-11-29T07:48:04.189Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 5 after "4s")
Nov 29 07:48:08 sn04 nomad[1001]:     2022-11-29T07:48:08.190Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 6 after "8s")
Nov 29 07:48:16 sn04 nomad[1001]:     2022-11-29T07:48:16.191Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 7 after "16s")
Nov 29 07:48:32 sn04 nomad[1001]:     2022-11-29T07:48:32.193Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 8 after "32s")
Nov 29 07:49:04 sn04 nomad[1001]:     2022-11-29T07:49:04.195Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 9 after "1m0s")
Nov 29 07:50:04 sn04 nomad[1001]:     2022-11-29T07:50:04.196Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 10 after "1m0s")
Nov 29 07:51:04 sn04 nomad[1001]:     2022-11-29T07:51:04.199Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 11 after "1m0s")
Nov 29 07:52:04 sn04 nomad[1001]:     2022-11-29T07:52:04.202Z [WARN]  agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (retry attempt 12 after "1m0s")
Nov 29 07:53:04 sn04 nomad[1001]:     2022-11-29T07:53:04.207Z [ERROR] agent: (view) nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed (exceeded maximum retries)
Nov 29 07:53:04 sn04 nomad[1001]:     2022-11-29T07:53:04.208Z [ERROR] agent: (runner) watcher reported error: nomad.var.get(nomad/jobs/backup-verification/verify/[email protected]): Get "http://127.0.0.1:4646/v1/var/nomad/jobs/backup-verification/verify/fetch-data?namespace=default&stale=&wait=60000ms": closed
Nov 29 07:53:08 sn04 nomad[1001]:     2022-11-29T07:53:08.219Z [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 task=fetch-data @module=logmon timestamp=2022-11-29T07:53:08.219Z
Nov 29 07:53:08 sn04 nomad[1001]:     2022-11-29T07:53:08.220Z [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 task=fetch-data @module=logmon timestamp=2022-11-29T07:53:08.219Z
Nov 29 07:53:08 sn04 nomad[1001]:     2022-11-29T07:53:08.223Z [INFO]  agent: (runner) stopping
Nov 29 07:53:08 sn04 nomad[1001]:     2022-11-29T07:53:08.223Z [INFO]  client.alloc_runner: waiting for task to exit: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 task=main
Nov 29 07:53:08 sn04 nomad[1001]:     2022-11-29T07:53:08.223Z [INFO]  client.gc: marking allocation for GC: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878
Nov 29 11:43:58 sn04 nomad[1001]:     2022-11-29T11:43:58.104Z [INFO]  client.gc: garbage collecting allocation: alloc_id=be66454a-7ee5-65b3-e34b-f72e0ebaab87 reason="forced collection"
Nov 29 12:23:58 sn04 nomad[1001]:     2022-11-29T12:23:58.102Z [INFO]  client.gc: garbage collecting allocation: alloc_id=953e524a-5122-69bb-3155-63f8cef1a878 reason="forced collection"
Nov 30 16:46:44 sn04 nomad[1001]:     2022-11-30T16:46:44.210Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19995cc7-b99a-9b3a-c768-7134dd6969a3 task=connect-proxy-payins @module=logmon path=/var/nomad/alloc/19995cc7-b99a-9b3a-c768-7134dd6969a3/alloc/logs/.connect-proxy-payins.stdout.fifo timestamp=2022-11-30T16:46:44.210Z
Nov 30 16:46:44 sn04 nomad[1001]:     2022-11-30T16:46:44.211Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19995cc7-b99a-9b3a-c768-7134dd6969a3 task=connect-proxy-payins path=/var/nomad/alloc/19995cc7-b99a-9b3a-c768-7134dd6969a3/alloc/logs/.connect-proxy-payins.stderr.fifo @module=logmon timestamp=2022-11-30T16:46:44.210Z
Nov 30 16:46:44 sn04 nomad[1001]:     2022-11-30T16:46:44.226Z [INFO]  client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping envoy: alloc_id=19995cc7-b99a-9b3a-c768-7134dd6969a3 task=connect-proxy-payins namespace="" proxy_id=_nomad-task-19995cc7-b99a-9b3a-c768-7134dd6969a3-group-payins-payins-8080-sidecar-proxy service=payins gateway="" bootstrap_file=/var/nomad/alloc/19995cc7-b99a-9b3a-c768-7134dd6969a3/connect-proxy-payins/secrets/envoy_bootstrap.json grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=127.0.0.2:19001 ready_bind=127.0.0.1:19101
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.540Z [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=e8cf13eea94a6c17796c0ddffc92377b91cd421df3c117c3be74829870114e97
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.672Z [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=e8cf13eea94a6c17796c0ddffc92377b91cd421df3c117c3be74829870114e97
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.724Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19995cc7-b99a-9b3a-c768-7134dd6969a3 task=payins @module=logmon path=/var/nomad/alloc/19995cc7-b99a-9b3a-c768-7134dd6969a3/alloc/logs/.payins.stdout.fifo timestamp=2022-11-30T16:46:52.724Z
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.724Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=19995cc7-b99a-9b3a-c768-7134dd6969a3 task=payins @module=logmon path=/var/nomad/alloc/19995cc7-b99a-9b3a-c768-7134dd6969a3/alloc/logs/.payins.stderr.fifo timestamp=2022-11-30T16:46:52.724Z
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.733Z [INFO]  agent: (runner) creating new runner (dry: false, once: false)
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.734Z [INFO]  agent: (runner) creating watcher
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.734Z [INFO]  agent: (runner) starting
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.741Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 1 after "250ms")
Nov 30 16:46:52 sn04 nomad[1001]:     2022-11-30T16:46:52.995Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 2 after "500ms")
Nov 30 16:46:53 sn04 nomad[1001]:     2022-11-30T16:46:53.497Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 3 after "1s")
Nov 30 16:46:54 sn04 nomad[1001]:     2022-11-30T16:46:54.500Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 4 after "2s")
Nov 30 16:46:56 sn04 nomad[1001]:     2022-11-30T16:46:56.504Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 5 after "4s")
Nov 30 16:47:00 sn04 nomad[1001]:     2022-11-30T16:47:00.505Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 6 after "8s")
Nov 30 16:47:08 sn04 nomad[1001]:     2022-11-30T16:47:08.506Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 7 after "16s")
Nov 30 16:47:24 sn04 nomad[1001]:     2022-11-30T16:47:24.507Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 8 after "32s")
Nov 30 16:47:56 sn04 nomad[1001]:     2022-11-30T16:47:56.509Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 9 after "1m0s")
Nov 30 16:48:56 sn04 nomad[1001]:     2022-11-30T16:48:56.514Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 10 after "1m0s")
Nov 30 16:49:56 sn04 nomad[1001]:     2022-11-30T16:49:56.515Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 11 after "1m0s")
Nov 30 16:50:56 sn04 nomad[1001]:     2022-11-30T16:50:56.516Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 12 after "1m0s")
Nov 30 16:51:56 sn04 nomad[1001]:     2022-11-30T16:51:56.518Z [ERROR] agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (exceeded maximum retries)
Nov 30 16:51:56 sn04 nomad[1001]:     2022-11-30T16:51:56.518Z [ERROR] agent: (runner) watcher reported error: nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed
@lgfa29
Contributor

lgfa29 commented Nov 30, 2022

Hi @ahjohannessen 👋

Thanks for the report, and sorry to hear you are having trouble with Nomad.

Unfortunately there is not much in the logs to help us debug the problem. Would it be possible to provide lower-level logs? You can use the nomad monitor -log-level=TRACE command, pointing it at the client that is failing to render the secrets, and also at the leader if possible. Make sure the output doesn't contain any secrets before sharing it with us.
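
For example, something along these lines should work (the node ID below is a placeholder, and -server-id can also take a specific server name instead of leader):

# Stream TRACE-level logs from the client that fails to render the template
nomad monitor -log-level=TRACE -node-id=<client-node-id>

# Stream TRACE-level logs from the current leader server
nomad monitor -log-level=TRACE -server-id=leader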

And what does your client configuration look like? Do you have ACL and mTLS enabled?

@ahjohannessen
Author

ahjohannessen commented Nov 30, 2022

Hi @lgfa29 - Yeah, I reckoned that I needed more info. I have ACL and mTLS enabled.

I have a base.hcl that looks like this:

name = "sn04"
region = "global"
datacenter = "storage"

enable_debug = false
disable_update_check = false


bind_addr = "0.0.0.0"
advertise {
    http = "192.168.20.114:4646"
    rpc = "192.168.20.114:4647"
    serf = "192.168.20.114:4648"
}
ports {
    http = 4646
    rpc = 4647
    serf = 4648
}

consul {
    # The address to the Consul agent.
    address = "127.0.0.1:8500"
    ssl = false
    ca_file = ""
    cert_file = ""
    key_file = ""
    token = ""
    # The service name to register the server and client with Consul.
    server_service_name = "nomad"
    client_service_name = "nomad-client"
    tags = {}

    # Enables automatically registering the services.
    auto_advertise = true

    # Enabling the server and client to bootstrap using Consul.
    server_auto_join = true
    client_auto_join = true
}

data_dir = "/var/nomad"

log_level = "INFO"
enable_syslog = true

leave_on_terminate = true
leave_on_interrupt = false

tls {
    http = true
    rpc = true
    ca_file = "/etc/nomad.d/tls/ca.crt"
    cert_file = "/etc/nomad.d/tls/agent.crt"
    key_file = "/etc/nomad.d/tls/agent.key"
    rpc_upgrade_mode = false
    verify_server_hostname = "true"
    verify_https_client = "false"
}

acl {
    enabled = true
    token_ttl = "30s"
    policy_ttl = "30s"
    replication_token = ""
}

vault {
    enabled = true
    address = "https://active.vault.service.consul:8200"
    allow_unauthenticated = true
    create_from_role = "nomad-cluster"
    task_token_ttl = ""
    ca_file = "/etc/nomad.d/tls/vault.crt"
    ca_path = ""
    cert_file = ""
    key_file = ""
    tls_server_name = ""
    tls_skip_verify = false
    namespace = ""
}

A client.hcl that looks like this:

client {
    enabled = true

    node_class = ""
    no_host_uuid = false


    max_kill_timeout = "30s"

    network_speed = 0
    cpu_total_compute = 0

    gc_interval = "1m"
    gc_disk_usage_threshold = 80
    gc_inode_usage_threshold = 70
    gc_parallel_destroys = 2

    reserved {
        cpu = 0
        memory = 0
        disk = 0
    }


    host_network "localhost" {
            interface = "lo"
            reserved_ports = ""
    }



    meta = {
        "als.node_kind" = "client"
        "als.node_group" = "worker"
        "connect.log_level" = "info"
    }
    }


plugin "docker" {
    config {
        auth {
            config = "/etc/nomad.d/.docker-creds"

        }

    }

}
plugin "nomad-driver-podman" {

}

and a custom.json that looks like this:

{
    "client": {
        "cni_config_dir": "/etc/cni/config",
        "cni_path": "/usr/local/lib/cni/1.1.1/bin"
    }
}

@ahjohannessen
Author

I only have one ACL policy registered in Nomad, anonymous.hcl:

namespace "*" {
  policy = "read"
}

agent {
  policy = "read"
}

operator {
  policy = "read"
}

quota {
  policy = "read"
}

node {
  policy = "read"
}

host_volume "*" {
  policy = "read"
}
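
For reference, this is the only policy I register, with nothing fancier than something like:

nomad acl policy apply anonymous anonymous.hcl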

I do not use the ACL system as much as I could; I just use the root token when deploying jobs. Am I missing something?

@ahjohannessen
Author

ahjohannessen commented Nov 30, 2022

I will turn trace on and try what you suggested tomorrow.

@lgfa29
Contributor

lgfa29 commented Dec 1, 2022

Thanks for the extra info. I don't see anything particularly wrong with the config, though with mTLS on, this log line is a bit suspicious: Get "http://127.0.0.1:4646". I would expect it to be https 🤔

By any chance, did you have allocations running before you set up mTLS?
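
One quick way to sanity-check which scheme the local agent API is actually serving (CA path taken from your base.hcl above; add -k if the agent certificate has no 127.0.0.1 SAN; just a check, not a fix):

# With tls { http = true } this HTTPS request should succeed
curl --cacert /etc/nomad.d/tls/ca.crt https://127.0.0.1:4646/v1/agent/health

# ...and a plain-HTTP request to the same port should come back with an error
curl http://127.0.0.1:4646/v1/agent/health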

@lgfa29 lgfa29 self-assigned this Dec 1, 2022
@ahjohannessen
Author

All clients are bootstrapped with mTLS. I use consul-template and vault for all Nomad agents.

If I set the Nomad address as an environment variable in systemd I can change that, but shouldn't the Nomad client figure this out itself? Could the defaults of the built-in consul-template runner in Nomad be the issue?

@ahjohannessen
Author

ahjohannessen commented Dec 1, 2022

@lgfa29 I tried a setup that is not in production, but similar: starting nginx and fetching variables.

Client Example Job

job "nginx" {
  
  datacenters = ["als"]

  constraint {
    attribute = "${meta.als.node_group}"
    operator  = "="
    value     = "worker"
  }  

  group "nginx" {
    count = 1

    network {
      mode = "bridge"
      port "http" {
        static = 8111
        to = 80
      }
    }

    service {

      name = "nginx"
      port = "http"

      connect {

        sidecar_service {}
        sidecar_task {
          config {
            security_opt = ["label=disable"]
          }
        }
      }


    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:1.21"
        security_opt = ["label=disable"]
      }

      template {
        destination = "${NOMAD_SECRETS_DIR}/nomad-variables.env"
        env         = true
        change_mode = "noop"
        data        = <<EOT
{{- range nomadVarListSafe "nomad/jobs" }}
  {{- if nomadVarExists .Path }}
    {{- with nomadVar .Path }}
      {{- range .Tuples }}
{{ .K | toUpper }}={{ .V | toJSON }}
      {{- end }}
    {{- end }}
  {{- end }}
{{- end }}
EOT
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}

Logs Client

2022-12-01T08:42:06.579Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 from=init to=init
2022-12-01T08:42:06.581Z [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
2022-12-01T08:42:06.583Z [TRACE] client.cpuset.v2: add allocation: name=nginx.nginx[0] id=0018aefc-861f-ea84-5144-a1c129991692
2022-12-01T08:42:06.605Z [TRACE] client.alloc_runner.runner_hook.alloc_health_watcher: watching: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 deadline="2022-12-01 08:47:06.605345663 +0000 UTC m=+13536.958308299" checks=true min_healthy_time=10s
2022-12-01T08:42:06.606Z [DEBUG] client.driver_mgr.docker: InspectImage failed for infra_image container pull: driver=docker image=gcr.io/google_containers/pause-amd64:3.1 error="no such image"
2022-12-01T08:42:07.129Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="99.934µs"
2022-12-01T08:42:08.682Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=gcr.io/google_containers/pause-amd64:3.1
2022-12-01T08:42:08.683Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=gcr.io/google_containers/pause-amd64:3.1 image_id=sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e references=1
2022-12-01T08:42:09.326Z [TRACE] consul.sync: commit sync operations: ops="<1, 0, 0, 0>"
2022-12-01T08:42:09.327Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:09.327Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 from=init to=prestart
2022-12-01T08:42:09.327Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=pending desired_status=""
2022-12-01T08:42:09.327Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:09.327Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=pending desired_status=""
2022-12-01T08:42:09.327Z [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:09.327Z [TRACE] consul.sync: execute sync: reason=operations
2022-12-01T08:42:09.329Z [TRACE] consul.sync: must register service: id=_nomad-task-0018aefc-861f-ea84-5144-a1c129991692-group-nginx-nginx-8111 exists=false reason=operations
2022-12-01T08:42:09.332Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:09.333Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=pending desired_status=""
2022-12-01T08:42:09.334Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "logmon"]
2022-12-01T08:42:09.334Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad pid=13086
2022-12-01T08:42:09.334Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad
2022-12-01T08:42:09.346Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
2022-12-01T08:42:09.347Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx version=2
2022-12-01T08:42:09.347Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx @module=logmon address=/tmp/plugin3855178925 network=unix timestamp=2022-12-01T08:42:09.347Z
2022-12-01T08:42:09.348Z [TRACE] client.alloc_runner.task_runner.task_hook.logmon.stdio: waiting for stdio data: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:09.349Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx @module=logmon path=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/logs/.connect-proxy-nginx.stdout.fifo timestamp=2022-12-01T08:42:09.348Z
2022-12-01T08:42:09.349Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx @module=logmon path=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/logs/.connect-proxy-nginx.stderr.fifo timestamp=2022-12-01T08:42:09.349Z
2022-12-01T08:42:09.354Z [TRACE] client.alloc_runner.task_runner.task_hook.envoy_version: setting task envoy image: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx image=envoyproxy/envoy:v1.24.0
2022-12-01T08:42:09.358Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping Consul connect-proxy: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx task=connect-proxy-nginx service=nginx
2022-12-01T08:42:09.358Z [TRACE] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: no SI token to load: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx task=connect-proxy-nginx
2022-12-01T08:42:09.358Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: check for SI token for task: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx task=connect-proxy-nginx exists=false
2022-12-01T08:42:09.358Z [INFO]  client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping envoy: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx namespace="" proxy_id=_nomad-task-0018aefc-861f-ea84-5144-a1c129991692-group-nginx-nginx-8111-sidecar-proxy service=nginx gateway="" bootstrap_file=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/secrets/envoy_bootstrap.json grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=127.0.0.2:19001 ready_bind=127.0.0.1:19101
2022-12-01T08:42:09.447Z [TRACE] client.alloc_runner.task_runner: waiting for cgroup to exist for: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx allocID=0018aefc-861f-ea84-5144-a1c129991692 task="&{connect-proxy-nginx docker  map[args:[-c ${NOMAD_SECRETS_DIR}/envoy_bootstrap.json -l ${meta.connect.log_level} --concurrency ${meta.connect.proxy_concurrency} --disable-hot-restart] image:envoyproxy/envoy:v1.24.0 security_opt:[label=disable]] map[] [] <nil> [] [${attr.consul.version} semver >= 1.8.0 ${attr.consul.grpc} > 0] [] 0xc000cbf320 0xc000ebb500 <nil> 0xc0008e84f8 map[] 5s 0xc001081210 [] false 0s [] []  connect-proxy:nginx <nil>}"
2022-12-01T08:42:09.449Z [TRACE] client.driver_mgr: task event received: driver=docker event="&{0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/2e455a02 connect-proxy-nginx 0018aefc-861f-ea84-5144-a1c129991692 2022-12-01 08:42:09.449448878 +0000 UTC m=+13239.802411484 Downloading image map[image:envoyproxy/envoy:v1.24.0] <nil>}"
2022-12-01T08:42:09.449Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:09.449Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=pending desired_status=""
2022-12-01T08:42:09.544Z [TRACE] client: next heartbeat: period=19.421300393s
2022-12-01T08:42:09.614Z [DEBUG] client: updated allocations: index=782262 total=1 pulled=0 filtered=1
2022-12-01T08:42:09.614Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:09.614Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:09.616Z [TRACE] client: next heartbeat: period=14.990243867s
2022-12-01T08:42:15.540Z [DEBUG] client.driver_mgr.docker: docker pull succeeded: driver=docker image_ref=envoyproxy/envoy:v1.24.0
2022-12-01T08:42:15.541Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=envoyproxy/envoy:v1.24.0 image_id=sha256:fecb2aa186ed7340309c1ace74f6f5d9b83b8108c5a15db10cf6ead2f853711b references=1
2022-12-01T08:42:15.542Z [TRACE] client.driver_mgr.docker: binding volumes: driver=docker task_name=connect-proxy-nginx volumes=["/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc:/alloc", "/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/local:/local", "/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/secrets:/secrets"]
2022-12-01T08:42:15.542Z [TRACE] client.driver_mgr.docker: no docker log driver provided, defaulting to plugin config: driver=docker task_name=connect-proxy-nginx
2022-12-01T08:42:15.542Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=connect-proxy-nginx memory=314572800 memory_reservation=0 cpu_shares=100 cpu_quota=0 cpu_period=0
2022-12-01T08:42:15.542Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=connect-proxy-nginx binds="[]string{\"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc:/alloc\", \"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/local:/local\", \"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/connect-proxy-nginx/secrets:/secrets\"}"
2022-12-01T08:42:15.542Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=connect-proxy-nginx network_mode=container:498b87ed87db6c2d40d312ca19585cb8d113f9850989e06cd91f85397aa67210
2022-12-01T08:42:15.542Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=connect-proxy-nginx labels=map[com.hashicorp.nomad.alloc_id:0018aefc-861f-ea84-5144-a1c129991692]
2022-12-01T08:42:15.542Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=connect-proxy-nginx container_name=connect-proxy-nginx-0018aefc-861f-ea84-5144-a1c129991692
2022-12-01T08:42:15.706Z [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=e1311c6599cc5532ff25f256f64e7c7decc417e9dc286d44eb54404725f03520
2022-12-01T08:42:15.795Z [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=e1311c6599cc5532ff25f256f64e7c7decc417e9dc286d44eb54404725f03520
2022-12-01T08:42:15.795Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "docker_logger"]
2022-12-01T08:42:15.795Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/var/usrlocal/bin/nomad pid=13260
2022-12-01T08:42:15.795Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/var/usrlocal/bin/nomad
2022-12-01T08:42:15.803Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker @module=docker_logger address=/tmp/plugin2108064639 network=unix timestamp=2022-12-01T08:42:15.803Z
2022-12-01T08:42:15.803Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2022-12-01T08:42:15.804Z [TRACE] client.driver_mgr.docker.docker_logger.stdio: waiting for stdio data: driver=docker
2022-12-01T08:42:15.805Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2022-12-01T08:42:15.805Z
2022-12-01T08:42:15.808Z [TRACE] client.alloc_runner.task_runner: setting task state: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx state=running
2022-12-01T08:42:15.808Z [TRACE] client.alloc_runner.task_runner: appending task event: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx state=running event=Started
2022-12-01T08:42:15.808Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:15.808Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 from=prestart to=main
2022-12-01T08:42:15.808Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=running desired_status=""
2022-12-01T08:42:15.808Z [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:15.810Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:15.810Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=running desired_status=""
2022-12-01T08:42:15.812Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "logmon"]
2022-12-01T08:42:15.813Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx path=/var/usrlocal/bin/nomad pid=13270
2022-12-01T08:42:15.813Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx path=/var/usrlocal/bin/nomad
2022-12-01T08:42:15.821Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx network=unix @module=logmon address=/tmp/plugin1483103082 timestamp=2022-12-01T08:42:15.820Z
2022-12-01T08:42:15.821Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx version=2
2022-12-01T08:42:15.821Z [TRACE] client.alloc_runner.task_runner.task_hook.logmon.stdio: waiting for stdio data: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:15.822Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx path=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/logs/.nginx.stdout.fifo @module=logmon timestamp=2022-12-01T08:42:15.822Z
2022-12-01T08:42:15.823Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx @module=logmon path=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/logs/.nginx.stderr.fifo timestamp=2022-12-01T08:42:15.823Z
2022-12-01T08:42:15.846Z [INFO]  agent: (runner) creating new runner (dry: false, once: false)
2022-12-01T08:42:15.847Z [DEBUG] agent: (runner) final config: {"Consul":{"Address":"127.0.0.1:8500","Namespace":"","Auth":{"Enabled":false,"Username":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","TokenFile":"","Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","FileLog":{"LogFilePath":"","LogRotateBytes":0,"LogRotateDuration":86400000000000,"LogRotateMaxFiles":0},"MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":"consul-template"},"Templates":[{"Backup":false,"Command":[],"CommandTimeout":30000000000,"Contents":"{{- range nomadVarListSafe \"nomad/jobs\" }}\n  {{- if nomadVarExists .Path }}\n    {{- with nomadVar .Path }}\n      {{- range .Tuples }}\n{{ .K | toUpper }}={{ .V | toJSON }}\n      {{- end }}\n    {{- end }}\n  {{- end }}\n{{- end }}\n","CreateDestDirs":true,"Destination":"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx/secrets/nomad-variables.env","ErrMissingKey":false,"ErrFatal":true,"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"User":null,"Uid":null,"Group":null,"Gid":null,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin","writeToFile"],"SandboxPath":"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx"},{"Backup":false,"Command":[],"CommandTimeout":30000000000,"Contents":"upstream backend {\n  server {{ env \"NOMAD_HOST_IP_nginx\" }}:1234;\n}\n\nserver {\n   listen 8111;\n\n   location /als {\n      proxy_pass http://backend;\n   
}\n}\n\n","CreateDestDirs":true,"Destination":"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx/local/default.conf","ErrMissingKey":false,"ErrFatal":true,"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":493,"User":null,"Uid":null,"Group":null,"Gid":null,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin","writeToFile"],"SandboxPath":"/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx"}],"TemplateErrFatal":null,"Vault":{"Address":"https://active.vault.service.consul:8200","Enabled":true,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"/etc/nomad.d/tls/vault.crt","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false,"DefaultLeaseDuration":300000000000,"LeaseRenewalThreshold":0.9,"K8SAuthRoleName":"","K8SServiceAccountTokenPath":"/run/secrets/kubernetes.io/serviceaccount/token","K8SServiceAccountToken":"","K8SServiceMountPath":"kubernetes"},"Nomad":{"Address":"http://127.0.0.1:4646","Enabled":true,"Namespace":"default","SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"AuthUsername":"","AuthPassword":"","Transport":{"CustomDialer":{},"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true}},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"ParseOnly":false,"BlockQueryWaitTime":60000000000}
2022-12-01T08:42:15.847Z [INFO]  agent: (runner) creating watcher
2022-12-01T08:42:15.848Z [INFO]  agent: (runner) starting
2022-12-01T08:42:15.848Z [DEBUG] agent: (runner) running initial templates
2022-12-01T08:42:15.848Z [DEBUG] agent: (runner) initiating run
2022-12-01T08:42:15.848Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) missing data for 1 dependencies
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) missing dependency: nomad.var.list(nomad/[email protected])
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) add used dependency nomad.var.list(nomad/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) was not watching 1 dependencies
2022-12-01T08:42:15.849Z [DEBUG] agent: (watcher) adding nomad.var.list(nomad/[email protected])
2022-12-01T08:42:15.849Z [TRACE] agent: (watcher) nomad.var.list(nomad/[email protected]) starting
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) checking template dd990e0b35866c60d9580131f4f6a214
2022-12-01T08:42:15.849Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:15.849Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:15.849Z [DEBUG] agent: (runner) rendering "(dynamic)" => "/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx/local/default.conf"
2022-12-01T08:42:15.850Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 1 after "250ms")
2022-12-01T08:42:15.850Z [INFO]  agent: (runner) rendered "(dynamic)" => "/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/nginx/local/default.conf"
2022-12-01T08:42:15.850Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T08:42:15.850Z [DEBUG] agent: (runner) watching 1 dependencies
2022-12-01T08:42:16.001Z [TRACE] client: next heartbeat: period=18.953933284s
2022-12-01T08:42:16.066Z [DEBUG] client: updated allocations: index=782263 total=1 pulled=0 filtered=1
2022-12-01T08:42:16.066Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:16.066Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:16.067Z [TRACE] client: next heartbeat: period=11.435082142s
2022-12-01T08:42:16.100Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:16.100Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:16.102Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 2 after "500ms")
2022-12-01T08:42:16.443Z [TRACE] client: next heartbeat: period=12.6310687s
2022-12-01T08:42:16.603Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:16.604Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:16.604Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 3 after "1s")
2022-12-01T08:42:17.133Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="118.057µs"
2022-12-01T08:42:17.605Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:17.605Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:17.605Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 4 after "2s")
2022-12-01T08:42:18.851Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:18.851Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=running desired_status=""
2022-12-01T08:42:19.057Z [TRACE] client: next heartbeat: period=14.634366475s
2022-12-01T08:42:19.151Z [DEBUG] client: updated allocations: index=782264 total=1 pulled=0 filtered=1
2022-12-01T08:42:19.152Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:19.152Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:19.153Z [TRACE] client: next heartbeat: period=16.792617425s
2022-12-01T08:42:19.606Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:19.606Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:19.607Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 5 after "4s")
2022-12-01T08:42:23.607Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:23.607Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:23.608Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 6 after "8s")
2022-12-01T08:42:27.138Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="99.723µs"
2022-12-01T08:42:31.608Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T08:42:31.608Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T08:42:31.609Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "http://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": closed (retry attempt 7 after "16s")
2022-12-01T08:42:35.947Z [TRACE] client: next heartbeat: period=17.423542593s
2022-12-01T08:42:37.141Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="185.207µs"
2022-12-01T08:42:39.347Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:42:43.273Z [DEBUG] client: updated allocations: index=782267 total=1 pulled=1 filtered=0
2022-12-01T08:42:43.273Z [DEBUG] client: allocation updates: added=0 removed=0 updated=1 ignored=0
2022-12-01T08:42:43.273Z [TRACE] client: next heartbeat: period=10.758514714s
2022-12-01T08:42:43.274Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=1 ignored=0 errors=0
2022-12-01T08:42:43.274Z [TRACE] consul.sync: commit sync operations: ops="<0, 0, 1, 0>"
2022-12-01T08:42:43.274Z [TRACE] consul.sync: execute sync: reason=operations
2022-12-01T08:42:43.275Z [TRACE] client.alloc_runner.task_runner: Kill requested: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:43.275Z [TRACE] client.alloc_runner.task_runner: Kill event: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx event_type=Killing event_reason=""
2022-12-01T08:42:43.277Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:43.277Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=running desired_status=""
2022-12-01T08:42:43.277Z [TRACE] client.alloc_runner.task_runner: setting task state: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx state=dead
2022-12-01T08:42:43.278Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:43.278Z [DEBUG] client.alloc_runner: main tasks dead, destroying all sidecar tasks: alloc_id=0018aefc-861f-ea84-5144-a1c129991692
2022-12-01T08:42:43.279Z [TRACE] consul.sync: commit sync operations: ops="<0, 0, 1, 0>"
2022-12-01T08:42:43.280Z [TRACE] client.alloc_runner.task_runner: Kill requested: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:43.280Z [TRACE] client.alloc_runner.task_runner: Kill event: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx event_type=Killing event_reason=""
2022-12-01T08:42:43.294Z [DEBUG] consul.sync: sync complete: registered_services=0 deregistered_services=1 registered_checks=0 deregistered_checks=0
2022-12-01T08:42:43.295Z [TRACE] consul.sync: execute sync: reason=operations
2022-12-01T08:42:43.514Z [TRACE] client: next heartbeat: period=19.280452396s
2022-12-01T08:42:43.576Z [DEBUG] client: updated allocations: index=782269 total=1 pulled=0 filtered=1
2022-12-01T08:42:43.576Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:43.576Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:43.577Z [TRACE] client: next heartbeat: period=12.619074752s
2022-12-01T08:42:47.146Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="106.217µs"
2022-12-01T08:42:47.279Z [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx @module=logmon timestamp=2022-12-01T08:42:47.279Z
2022-12-01T08:42:47.279Z [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx @module=logmon timestamp=2022-12-01T08:42:47.279Z
2022-12-01T08:42:47.280Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx err="rpc error: code = Unavailable desc = error reading from server: EOF"
2022-12-01T08:42:47.281Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx path=/var/usrlocal/bin/nomad pid=13270
2022-12-01T08:42:47.281Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:47.281Z [INFO]  agent: (runner) stopping
2022-12-01T08:42:47.281Z [DEBUG] agent: (runner) stopping watcher
2022-12-01T08:42:47.281Z [DEBUG] agent: (watcher) stopping all views
2022-12-01T08:42:47.281Z [TRACE] agent: (watcher) stopping nomad.var.list(nomad/[email protected])
2022-12-01T08:42:47.281Z [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=nginx
2022-12-01T08:42:47.282Z [INFO]  agent: (runner) received finish
2022-12-01T08:42:47.282Z [TRACE] client.alloc_runner.task_runner: Kill requested: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:47.282Z [TRACE] client.alloc_runner.task_runner: Kill event: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx event_type=Killing event_reason=""
2022-12-01T08:42:47.282Z [TRACE] client.alloc_runner.task_runner: Kill requested: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:47.282Z [TRACE] client.alloc_runner.task_runner: Kill event: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx event_type=Killing event_reason=""
2022-12-01T08:42:47.292Z [TRACE] client.alloc_runner.runner_hook: proxy to Consul complete: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 src_local=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/tmp/consul_grpc.sock src_remote=@ bytes=114886
2022-12-01T08:42:47.292Z [TRACE] client.alloc_runner.runner_hook: proxy from Consul complete: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 src_local=/var/nomad/alloc/0018aefc-861f-ea84-5144-a1c129991692/alloc/tmp/consul_grpc.sock src_remote=@ bytes=3034
2022-12-01T08:42:47.371Z [INFO]  client.driver_mgr.docker: stopped container: container_id=e1311c6599cc5532ff25f256f64e7c7decc417e9dc286d44eb54404725f03520 driver=docker
2022-12-01T08:42:47.373Z [DEBUG] client.driver_mgr.docker.docker_logger.stdio: received EOF, stopping recv loop: driver=docker err="rpc error: code = Unavailable desc = error reading from server: EOF"
2022-12-01T08:42:47.375Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin process exited: driver=docker path=/var/usrlocal/bin/nomad pid=13260
2022-12-01T08:42:47.375Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin exited: driver=docker
2022-12-01T08:42:47.377Z [DEBUG] client.driver_mgr.docker: image id reference count decremented: driver=docker image_id=sha256:fecb2aa186ed7340309c1ace74f6f5d9b83b8108c5a15db10cf6ead2f853711b references=0
2022-12-01T08:42:47.378Z [TRACE] client.alloc_runner.task_runner: setting task state: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx state=dead
2022-12-01T08:42:47.380Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.stdio: received EOF, stopping recv loop: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx err="rpc error: code = Unavailable desc = error reading from server: EOF"
2022-12-01T08:42:47.381Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin process exited: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad pid=13086
2022-12-01T08:42:47.381Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin exited: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:47.381Z [DEBUG] client.alloc_runner.task_runner: task run loop exiting: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:47.381Z [TRACE] client.cpuset.v2: remove allocation: id=0018aefc-861f-ea84-5144-a1c129991692
2022-12-01T08:42:47.381Z [INFO]  client.alloc_runner: waiting for task to exit: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 task=connect-proxy-nginx
2022-12-01T08:42:47.381Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 from=main to=poststart
2022-12-01T08:42:47.382Z [INFO]  client.gc: marking allocation for GC: alloc_id=0018aefc-861f-ea84-5144-a1c129991692
2022-12-01T08:42:47.382Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=complete desired_status=""
2022-12-01T08:42:47.382Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=false
2022-12-01T08:42:47.382Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=complete desired_status=""
2022-12-01T08:42:47.530Z [DEBUG] client.driver_mgr.docker: image id reference count decremented: driver=docker image_id=sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e references=0
2022-12-01T08:42:47.530Z [TRACE] client.alloc_runner: handling task state update: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 done=true
2022-12-01T08:42:47.530Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=0018aefc-861f-ea84-5144-a1c129991692 client_status=complete desired_status=""
2022-12-01T08:42:47.573Z [TRACE] client: next heartbeat: period=11.767244756s
2022-12-01T08:42:47.655Z [DEBUG] client: updated allocations: index=782270 total=1 pulled=0 filtered=1
2022-12-01T08:42:47.655Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:47.655Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:47.656Z [TRACE] client: next heartbeat: period=16.724303674s
2022-12-01T08:42:47.831Z [TRACE] client: next heartbeat: period=10.562184363s
2022-12-01T08:42:47.889Z [DEBUG] client: updated allocations: index=782271 total=1 pulled=0 filtered=1
2022-12-01T08:42:47.889Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T08:42:47.889Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T08:42:47.890Z [TRACE] client: next heartbeat: period=16.246421255s

Logs Server (Leader)

2022-12-01T08:40:19.831Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:47424
2022-12-01T08:40:20.337Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=4.018056ms
2022-12-01T08:40:22.436Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.986256ms
2022-12-01T08:40:22.930Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="120.115µs"
2022-12-01T08:40:24.531Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.455019ms
2022-12-01T08:40:26.623Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.373577ms
2022-12-01T08:40:27.558Z [DEBUG] worker: dequeued evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=c145ba83-6a9d-9d81-a26b-32a3200e27d9 type=_core namespace=- job_id=csi-volume-claim-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingToDequeue to=WaitingForRaft
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingForRaft to=Scheduling
2022-12-01T08:40:27.558Z [TRACE] core.sched: garbage collecting unclaimed CSI volume claims: eval.JobID=csi-volume-claim-gc
2022-12-01T08:40:27.558Z [DEBUG] core.sched: CSI volume claim GC scanning before cutoff index: index=782245 csi_volume_claim_gc_threshold=5m0s
2022-12-01T08:40:27.558Z [DEBUG] worker: ack evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=c145ba83-6a9d-9d81-a26b-32a3200e27d9 type=_core namespace=- job_id=csi-volume-claim-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=Scheduling to=WaitingToDequeue
2022-12-01T08:40:27.558Z [DEBUG] worker: dequeued evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=d2534162-9074-3803-37f3-4e0c04dcd1f2 type=_core namespace=- job_id=root-key-rotate-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingToDequeue to=WaitingForRaft
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingForRaft to=Scheduling
2022-12-01T08:40:27.558Z [DEBUG] core.sched: root key GC scanning before cutoff index: index=0 root_key_rotation_threshold=720h0m0s
2022-12-01T08:40:27.558Z [DEBUG] core.sched: root key GC scanning before cutoff index: index=782179 root_key_gc_threshold=1h0m0s
2022-12-01T08:40:27.558Z [DEBUG] worker: ack evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=d2534162-9074-3803-37f3-4e0c04dcd1f2 type=_core namespace=- job_id=root-key-rotate-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=Scheduling to=WaitingToDequeue
2022-12-01T08:40:27.558Z [DEBUG] worker: dequeued evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=11d5656b-a8b9-ece1-4dd7-4c7e9d369a93 type=_core namespace=- job_id=node-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingToDequeue to=WaitingForRaft
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingForRaft to=Scheduling
2022-12-01T08:40:27.558Z [DEBUG] core.sched: node GC scanning before cutoff index: index=780283 node_gc_threshold=24h0m0s
2022-12-01T08:40:27.558Z [DEBUG] worker: ack evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=11d5656b-a8b9-ece1-4dd7-4c7e9d369a93 type=_core namespace=- job_id=node-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.558Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=Scheduling to=WaitingToDequeue
2022-12-01T08:40:27.577Z [DEBUG] worker: dequeued evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=a19a3875-285a-1a50-7434-1ed3ccebedbe type=_core namespace=- job_id=local-token-expired-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.577Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingToDequeue to=WaitingForRaft
2022-12-01T08:40:27.577Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=WaitingForRaft to=Scheduling
2022-12-01T08:40:27.577Z [DEBUG] core.sched: local expired ACL tokens GC scanning before cutoff index: index=782190 acl_token_expiration_gc_threshold=1h0m0s
2022-12-01T08:40:27.577Z [DEBUG] worker: ack evaluation: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 eval_id=a19a3875-285a-1a50-7434-1ed3ccebedbe type=_core namespace=- job_id=local-token-expired-gc node_id="" triggered_by=scheduled
2022-12-01T08:40:27.577Z [TRACE] worker: changed workload status: worker_id=95e20ce3-2e90-976a-8996-29cf3cb09084 from=Scheduling to=WaitingToDequeue
2022-12-01T08:40:28.710Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=2.942553ms
2022-12-01T08:40:29.832Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:35740
2022-12-01T08:40:30.807Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.5404ms
2022-12-01T08:40:32.908Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.698972ms
2022-12-01T08:40:32.933Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="97.95µs"
2022-12-01T08:40:33.008Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:40:34.901Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=4.278621ms
2022-12-01T08:40:36.998Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=4.093724ms
2022-12-01T08:40:39.093Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.21482ms
2022-12-01T08:40:39.833Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:42598
2022-12-01T08:40:41.185Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.881236ms
2022-12-01T08:40:41.849Z [DEBUG] http: request complete: method=GET path=/v1/node/58ab83fc-de31-fbf8-63e3-af28e5e68323/allocations?index=782129 duration=5m3.010872663s
2022-12-01T08:40:42.935Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="106.415µs"
2022-12-01T08:40:43.280Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.260786ms
2022-12-01T08:40:43.894Z [DEBUG] http: request complete: method=GET path=/v1/namespaces?index=1 duration=5m0.691330076s
2022-12-01T08:40:45.378Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=8.258002ms
2022-12-01T08:40:46.815Z [DEBUG] http: request complete: method=GET path=/v1/node/b4d5b282-405c-5a82-b7e7-66596f8dc7a7/allocations?index=781976 duration=5m7.981058641s
2022-12-01T08:40:47.110Z [DEBUG] http: request complete: method=GET path=/v1/node/5416d319-8c4a-cbca-1c85-065af402be74/allocations?index=780739 duration=5m8.280631787s
2022-12-01T08:40:47.464Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.491097ms
2022-12-01T08:40:47.603Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx_xroad/evaluations?index=782128 duration=5m3.160229007s
2022-12-01T08:40:48.307Z [DEBUG] http: request complete: method=GET path=/v1/node/fcfbf457-4c88-50c7-889e-bc246b7b9ac9/allocations?index=782037 duration=5m2.303125313s
2022-12-01T08:40:49.562Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=4.144516ms
2022-12-01T08:40:49.834Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:41802
2022-12-01T08:40:50.623Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx_xroad/allocations?index=782129 duration=5m6.181060984s
2022-12-01T08:40:51.651Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.485067ms
2022-12-01T08:40:52.684Z [DEBUG] http: request complete: method=GET path=/v1/node/fcfbf457-4c88-50c7-889e-bc246b7b9ac9/allocations?index=782037 duration=5m13.86880832s
2022-12-01T08:40:52.796Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=782034" duration=5m9.596500681s
2022-12-01T08:40:52.938Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="94.581µs"
2022-12-01T08:40:53.744Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.844285ms
2022-12-01T08:40:54.460Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.102:41658
2022-12-01T08:40:55.015Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx_xroad?index=782028 duration=5m10.565151972s
2022-12-01T08:40:55.835Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.734112ms
2022-12-01T08:40:56.013Z [DEBUG] http: request complete: method=GET path=/v1/node/37e97055-73d4-2976-3c02-bb70d95f9d04/allocations?index=781952 duration=5m17.188115235s
2022-12-01T08:40:56.620Z [DEBUG] http: request complete: method=GET path=/v1/node/0d6f9e61-d3c6-bfa0-0acc-595b297a3ff5/allocations?index=782001 duration=5m17.800271677s
2022-12-01T08:40:56.658Z [DEBUG] http: request complete: method=GET path=/v1/node/fcfbf457-4c88-50c7-889e-bc246b7b9ac9?index=782036 duration=5m10.645811366s
2022-12-01T08:40:57.373Z [DEBUG] http: request complete: method=GET path=/v1/nodes?index=782036 duration=5m18.639499461s
2022-12-01T08:40:57.932Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.586568ms
2022-12-01T08:40:59.834Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:44948
2022-12-01T08:41:00.029Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.040272ms
2022-12-01T08:41:02.126Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.22544ms
2022-12-01T08:41:02.941Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="115.699µs"
2022-12-01T08:41:03.011Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:41:04.203Z [DEBUG] http: request complete: method=GET path=/v1/client/stats?node_id=58ab83fc-de31-fbf8-63e3-af28e5e68323 duration=3.73589ms
2022-12-01T08:41:04.284Z [DEBUG] http: request complete: method=GET path=/v1/jobs?meta=true duration="258.071µs"
2022-12-01T08:41:04.285Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="184.349µs"
2022-12-01T08:41:05.322Z [DEBUG] nomad: memberlist: Initiating push/pull sync with: f02.global 192.168.81.101:4648
2022-12-01T08:41:05.638Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="227.51µs"
2022-12-01T08:41:05.641Z [DEBUG] http: request complete: method=GET path="/v1/vars?namespace=*" duration="224.393µs"
2022-12-01T08:41:09.836Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:43276
2022-12-01T08:41:09.971Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="203.121µs"
2022-12-01T08:41:12.944Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="243.185µs"
2022-12-01T08:41:14.050Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.101:46244
2022-12-01T08:41:16.633Z [DEBUG] http: request complete: method=GET path=/v1/node/58ab83fc-de31-fbf8-63e3-af28e5e68323/allocations?index=782129 duration=5m1.77123319s
2022-12-01T08:41:17.956Z [DEBUG] http: request complete: method=GET path=/v1/node/b4d5b282-405c-5a82-b7e7-66596f8dc7a7/allocations?index=781976 duration=5m3.097820233s
2022-12-01T08:41:18.717Z [DEBUG] http: request complete: method=GET path=/v1/namespaces?index=1 duration=5m5.673888064s
2022-12-01T08:41:18.801Z [DEBUG] http: request complete: method=GET path=/v1/node/fcfbf457-4c88-50c7-889e-bc246b7b9ac9/allocations?index=782037 duration=5m3.949595302s
2022-12-01T08:41:19.836Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:45384
2022-12-01T08:41:19.910Z [DEBUG] http: request complete: method=GET path=/v1/nodes?index=782036 duration=5m5.12797142s
2022-12-01T08:41:22.948Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="97.7µs"
2022-12-01T08:41:23.039Z [DEBUG] http: request complete: method=GET path=/v1/node/0d6f9e61-d3c6-bfa0-0acc-595b297a3ff5/allocations?index=782001 duration=5m8.183838329s
2022-12-01T08:41:26.821Z [DEBUG] http: request complete: method=PUT path="/v1/var/nomad%2Fjobs%2Fnginx%2Fnginx?cas=0" duration=6.938788ms
2022-12-01T08:41:26.834Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="201.954µs"
2022-12-01T08:41:26.834Z [DEBUG] http: request complete: method=GET path="/v1/vars?namespace=*" duration="262.667µs"
2022-12-01T08:41:26.859Z [DEBUG] http: request complete: method=GET path="/v1/var/nomad%2Fjobs%2Fnginx%2Fnginx?namespace=default" duration="289.846µs"
2022-12-01T08:41:28.260Z [DEBUG] http: request complete: method=GET path=/v1/node/5416d319-8c4a-cbca-1c85-065af402be74/allocations?index=780739 duration=5m13.391350507s
2022-12-01T08:41:29.731Z [DEBUG] http: request complete: method=GET path=/v1/node/58ab83fc-de31-fbf8-63e3-af28e5e68323?index=781921 duration=5m6.338109913s
2022-12-01T08:41:29.837Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:34330
2022-12-01T08:41:29.849Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=782034" duration=5m16.805614603s
2022-12-01T08:41:32.449Z [DEBUG] http: request complete: method=GET path=/v1/node/37e97055-73d4-2976-3c02-bb70d95f9d04/allocations?index=781952 duration=5m17.584162911s
2022-12-01T08:41:32.952Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="126.127µs"
2022-12-01T08:41:33.013Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:41:37.295Z [DEBUG] http: request complete: method=GET path=/v1/node/58ab83fc-de31-fbf8-63e3-af28e5e68323/allocations?index=782129 duration=5m13.904292108s
2022-12-01T08:41:39.838Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:42488
2022-12-01T08:41:42.956Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="131.006µs"
2022-12-01T08:41:49.839Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:43700
2022-12-01T08:41:51.022Z [DEBUG] http: request complete: method=GET path=/v1/jobs?meta=true duration="172.571µs"
2022-12-01T08:41:51.023Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="435.444µs"
2022-12-01T08:41:52.959Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="84.984µs"
2022-12-01T08:41:54.462Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.102:37860
2022-12-01T08:41:59.835Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="115.684µs"
2022-12-01T08:41:59.840Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:57670
2022-12-01T08:42:02.963Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="165.822µs"
2022-12-01T08:42:03.016Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:42:03.970Z [DEBUG] http: request complete: method=POST path="/v1/jobs/parse?namespace=*" duration=4.220236ms
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job mutate results: mutator=canonicalize warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job mutate results: mutator=connect warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job mutate results: mutator=expose-check warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job mutate results: mutator=constraints warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job validate results: validator=connect warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job validate results: validator=expose-check warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job validate results: validator=vault warnings=[] error=<nil>
2022-12-01T08:42:03.975Z [TRACE] nomad.job: job validate results: validator=namespace-constraint-check warnings=[] error=<nil>
2022-12-01T08:42:03.976Z [TRACE] nomad.job: job validate results: validator=validate warnings=[] error=<nil>
2022-12-01T08:42:03.976Z [TRACE] nomad.job: job validate results: validator=memory_oversubscription warnings=[] error=<nil>
2022-12-01T08:42:03.976Z [TRACE] nomad.job.service_sched.binpack: NewBinPackIterator created: eval_id=671af5c3-8a23-0977-df15-be6228e861eb job_id=nginx namespace=default algorithm=binpack
2022-12-01T08:42:03.976Z [DEBUG] nomad.job.service_sched: reconciled current state with desired state: eval_id=671af5c3-8a23-0977-df15-be6228e861eb job_id=nginx namespace=default
  results=
  | Total changes: (place 1) (destructive 0) (inplace 0) (stop 0) (disconnect 0) (reconnect 0)
  | Created Deployment: "7a95794f-4f1e-9afb-c9b4-dd00e1b62f35"
  | Desired Changes for "nginx": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

2022-12-01T08:42:03.976Z [DEBUG] nomad.job.service_sched: setting eval status: eval_id=671af5c3-8a23-0977-df15-be6228e861eb job_id=nginx namespace=default status=complete
2022-12-01T08:42:03.978Z [DEBUG] http: request complete: method=POST path=/v1/job/nginx/plan duration=4.591832ms
2022-12-01T08:42:05.325Z [DEBUG] nomad: memberlist: Initiating push/pull sync with: f02.global 192.168.81.101:4648
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job mutate results: mutator=canonicalize warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job mutate results: mutator=connect warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job mutate results: mutator=expose-check warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job mutate results: mutator=constraints warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=connect warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=expose-check warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=vault warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=namespace-constraint-check warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=validate warnings=[] error=<nil>
2022-12-01T08:42:06.218Z [TRACE] nomad.job: job validate results: validator=memory_oversubscription warnings=[] error=<nil>
2022-12-01T08:42:06.223Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=782034" duration=15.173030795s
2022-12-01T08:42:06.223Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=782034" duration=1m1.908635643s
2022-12-01T08:42:06.223Z [DEBUG] http: request complete: method=POST path=/v1/jobs duration=4.984052ms
2022-12-01T08:42:06.252Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx duration="591.283µs"
2022-12-01T08:42:06.269Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations duration="605.359µs"
2022-12-01T08:42:06.270Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/evaluations duration="399.425µs"
2022-12-01T08:42:06.271Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="393.34µs"
2022-12-01T08:42:06.272Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="305.868µs"
2022-12-01T08:42:06.282Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/deployment?index=1 duration="583.763µs"
2022-12-01T08:42:06.292Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/summary?index=1 duration="387.099µs"
2022-12-01T08:42:06.303Z [DEBUG] http: request complete: method=GET path="/v1/vars?prefix=nomad%2Fjobs%2Fnginx" duration=1.178066ms
2022-12-01T08:42:06.313Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/deployment duration="161.892µs"
2022-12-01T08:42:06.565Z [TRACE] nomad: evaluating plan: plan="(eval 9c169670, job nginx, deploy 2c2b38ba, NodeAllocations: (node[58ab83fc] (0018aefc nginx.nginx[0] run)))"
2022-12-01T08:42:06.570Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx?index=782259 duration=281.308642ms
2022-12-01T08:42:06.570Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/evaluations?index=782259 duration=288.506976ms
2022-12-01T08:42:06.571Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782129 duration=289.729348ms
2022-12-01T08:42:08.287Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/summary?index=782259 duration="187.419µs"
2022-12-01T08:42:08.289Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/evaluations?index=782260 duration="465.618µs"
2022-12-01T08:42:08.289Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/deployment?index=782041 duration="632.286µs"
2022-12-01T08:42:08.314Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/versions?diffs=true duration="269.601µs"
2022-12-01T08:42:09.545Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782260 duration=1.257746538s
2022-12-01T08:42:09.840Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/scale duration="475.171µs"
2022-12-01T08:42:09.842Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:58686
2022-12-01T08:42:12.966Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="152.961µs"
2022-12-01T08:42:14.052Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.101:43868
2022-12-01T08:42:16.002Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/summary?index=782260 duration=6.149995444s
2022-12-01T08:42:16.002Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782262 duration=6.161673932s
2022-12-01T08:42:16.446Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.296027ms
2022-12-01T08:42:16.450Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=2.883055ms
2022-12-01T08:42:18.964Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.332143ms
2022-12-01T08:42:18.967Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.510134ms
2022-12-01T08:42:19.059Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782263 duration=3.042706655s
2022-12-01T08:42:19.843Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:32852
2022-12-01T08:42:21.481Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.148866ms
2022-12-01T08:42:21.481Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.925431ms
2022-12-01T08:42:22.970Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="108.05µs"
2022-12-01T08:42:24.000Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.031202ms
2022-12-01T08:42:24.000Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.428007ms
2022-12-01T08:42:26.449Z [DEBUG] http: request failed: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stdout" error="task \"nginx\" not started yet. No logs available" code=404
2022-12-01T08:42:26.449Z [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stdout" duration=5.642795ms
2022-12-01T08:42:26.512Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.046922ms
2022-12-01T08:42:26.514Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=5.733511ms
2022-12-01T08:42:28.493Z [DEBUG] http: request failed: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stderr" error="task \"nginx\" not started yet. No logs available" code=404
2022-12-01T08:42:28.493Z [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stderr" duration=4.816424ms
2022-12-01T08:42:29.030Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.528053ms
2022-12-01T08:42:29.030Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.99292ms
2022-12-01T08:42:29.843Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:34372
2022-12-01T08:42:31.344Z [DEBUG] http: request failed: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stdout" error="task \"nginx\" not started yet. No logs available" code=404
2022-12-01T08:42:31.344Z [DEBUG] http: request complete: method=GET path="/v1/client/fs/logs/0018aefc-861f-ea84-5144-a1c129991692?follow=true&offset=50000&origin=end&task=nginx&type=stdout" duration=4.403867ms
2022-12-01T08:42:31.546Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.583504ms
2022-12-01T08:42:31.549Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=7.109765ms
2022-12-01T08:42:32.973Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="139.384µs"
2022-12-01T08:42:33.018Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T08:42:34.065Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.151693ms
2022-12-01T08:42:34.066Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.417156ms
2022-12-01T08:42:35.155Z [DEBUG] http: request complete: method=GET path=/v1/allocation/0018aefc-861f-ea84-5144-a1c129991692 duration="300.171µs"
2022-12-01T08:42:35.156Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx duration="173.715µs"
2022-12-01T08:42:35.157Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="249.748µs"
2022-12-01T08:42:35.163Z [DEBUG] http: request complete: method=GET path=/v1/allocation/0018aefc-861f-ea84-5144-a1c129991692?index=1 duration="338.601µs"
2022-12-01T08:42:35.272Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.023401ms
2022-12-01T08:42:36.412Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx duration="219.701µs"
2022-12-01T08:42:39.844Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:34862
2022-12-01T08:42:40.132Z [DEBUG] http: request complete: method=GET path=/v1/jobs?meta=true duration="291.588µs"
2022-12-01T08:42:40.132Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="118.56µs"
2022-12-01T08:42:41.586Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx duration="180.466µs"
2022-12-01T08:42:41.592Z [DEBUG] http: request complete: method=GET path=/v1/namespaces duration="269.294µs"
2022-12-01T08:42:41.593Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&namespace=default" duration="193.783µs"
2022-12-01T08:42:41.660Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.369242ms
2022-12-01T08:42:41.665Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.919604ms
2022-12-01T08:42:42.920Z [DEBUG] http: request complete: method=DELETE path=/v1/job/nginx duration=4.701228ms
2022-12-01T08:42:42.920Z [DEBUG] http: request complete: method=GET path="/v1/jobs?meta=true&index=782263" duration=2.757671308s
2022-12-01T08:42:42.920Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx?index=782260 duration=33.072208489s
2022-12-01T08:42:42.921Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx?index=782260 duration=34.634998529s
2022-12-01T08:42:42.922Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/evaluations?index=782261 duration=1.316856143s
2022-12-01T08:42:42.923Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx?index=782260 duration=1.309869182s
2022-12-01T08:42:42.977Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="115.594µs"
2022-12-01T08:42:43.263Z [TRACE] nomad: evaluating plan: plan="(eval 42681aab, job nginx, NodeUpdates: (node[58ab83fc] (0018aefc stop/evict)), DeploymentUpdates: (2c2b38ba cancelled))"
2022-12-01T08:42:43.269Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782264 duration=24.2051241s
2022-12-01T08:42:43.269Z [DEBUG] http: request complete: method=GET path=/v1/allocation/0018aefc-861f-ea84-5144-a1c129991692?index=782264 duration=6.104763567s
2022-12-01T08:42:43.269Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782264 duration=1.664755679s
2022-12-01T08:42:43.269Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/deployment?index=782260 duration=1.660957815s
2022-12-01T08:42:43.269Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/deployment?index=782260 duration=33.428607682s
2022-12-01T08:42:43.287Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=5.266718ms
2022-12-01T08:42:43.295Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=5.99863ms
2022-12-01T08:42:43.611Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx?index=782266 duration="770.137µs"
2022-12-01T08:42:43.612Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/evaluations?index=782266 duration="822.784µs"
2022-12-01T08:42:43.615Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782267 duration=4.088132ms
2022-12-01T08:42:43.636Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=3.456655ms
2022-12-01T08:42:43.642Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.288214ms
2022-12-01T08:42:46.156Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=4.043936ms
2022-12-01T08:42:46.159Z [DEBUG] http: request complete: method=GET path=/v1/client/allocation/0018aefc-861f-ea84-5144-a1c129991692/stats duration=2.8248ms
2022-12-01T08:42:47.574Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782269 duration=1.959053388s
2022-12-01T08:42:47.575Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/summary?index=782263 duration=31.567927907s
2022-12-01T08:42:47.575Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/summary?index=782263 duration=5.959915791s
2022-12-01T08:42:47.832Z [DEBUG] http: request complete: method=GET path=/v1/job/nginx/allocations?index=782270 duration=212.529906ms
2022-12-01T08:42:49.845Z [DEBUG] nomad: memberlist: Stream connection from=192.168.81.100:54270
2022-12-01T08:42:52.980Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=server duration="124.146µs"

Config Client

/etc/nomad.d/base.hcl:

name = "f04"
region = "global"
datacenter = "als"

enable_debug = false
disable_update_check = false


bind_addr = "0.0.0.0"
advertise {
    http = "192.168.81.103:4646"
    rpc = "192.168.81.103:4647"
    serf = "192.168.81.103:4648"
}
ports {
    http = 4646
    rpc = 4647
    serf = 4648
}

consul {
    # The address to the Consul agent.
    address = "127.0.0.1:8500"
    ssl = false
    ca_file = ""
    cert_file = ""
    key_file = ""
    token = ""
    # The service name to register the server and client with Consul.
    server_service_name = "nomad"
    client_service_name = "nomad-client"
    tags = {}

    # Enables automatically registering the services.
    auto_advertise = true

    # Enabling the server and client to bootstrap using Consul.
    server_auto_join = true
    client_auto_join = true
}

data_dir = "/var/nomad"

log_level = "INFO"
enable_syslog = true

leave_on_terminate = true
leave_on_interrupt = false

tls {
    http = true
    rpc = true
    ca_file = "/etc/nomad.d/tls/ca.crt"
    cert_file = "/etc/nomad.d/tls/agent.crt"
    key_file = "/etc/nomad.d/tls/agent.key"
    rpc_upgrade_mode = false
    verify_server_hostname = "true"
    verify_https_client = "false"
}

acl {
    enabled = true
    token_ttl = "30s"
    policy_ttl = "30s"
    replication_token = ""
}

vault {
    enabled = true
    address = "https://active.vault.service.consul:8200"
    allow_unauthenticated = true
    create_from_role = "nomad-cluster"
    task_token_ttl = ""
    ca_file = "/etc/nomad.d/tls/vault.crt"
    ca_path = ""
    cert_file = ""
    key_file = ""
    tls_server_name = ""
    tls_skip_verify = false
    namespace = ""
}

/etc/nomad.d/client.hcl:

client {
    enabled = true

    node_class = ""
    no_host_uuid = false


    max_kill_timeout = "30s"

    network_speed = 0
    cpu_total_compute = 0

    gc_interval = "1m"
    gc_disk_usage_threshold = 80
    gc_inode_usage_threshold = 70
    gc_parallel_destroys = 2

    reserved {
        cpu = 0
        memory = 0
        disk = 0
    }

    host_volume "esdb" {
        path = "/var/esdb"
        read_only = false
    }
    host_volume "pgdata" {
        path = "/var/pgdata"
        read_only = false
    }

    host_network "localhost" {
            interface = "lo"
            reserved_ports = ""
    }



    meta = {
        "als.node_kind" = "client"
        "als.node_group" = "worker"
        "connect.log_level" = "info"
    }
    }


plugin "docker" {
    config {
        auth {
            config = "/etc/nomad.d/.docker-creds"

        }

    }

}
plugin "nomad-driver-podman" {
    config {
        volumes {
            enabled = true
            selinuxlabel = "z"

        }
        recover_stopped = true

    }

}

/etc/nomad.d/custom.json:

{
    "client": {
        "cni_config_dir": "/etc/cni/config",
        "cni_path": "/usr/local/lib/cni/1.1.1/bin"
    }
}

The variables are there. It is a bit odd that plain http is used for the template fetches, especially when tls.http is true; the runner config above shows the Nomad address as http://127.0.0.1:4646.

[core@f04 ~]$ echo $NOMAD_ADDR
https://127.0.0.1:4646
[core@f04 ~]$ nomad var get "nomad/jobs/nginx/nginx"
Namespace   = default
Path        = nomad/jobs/nginx/nginx
Create Time = 2022-12-01T08:41:26Z
Check Index = 782258

Items
test_key = abc
[core@f04 ~]$ export NOMAD_ADDR=http://127.0.0.1:4646
[core@f04 ~]$ nomad var get "nomad/jobs/nginx/nginx"
Error retrieving variable: Unexpected response code: 400 (Client sent an HTTP request to an HTTPS server.)
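
For what it's worth, the protocol mismatch is easy to reproduce with curl directly against the agent API (a quick sanity check, assuming curl is available on the node; the first call returns the agent health JSON, the second gets the same 400 body the CLI reports above):

curl --cacert /etc/nomad.d/tls/ca.crt https://127.0.0.1:4646/v1/agent/health
curl http://127.0.0.1:4646/v1/agent/health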

@ahjohannessen

ahjohannessen commented Dec 1, 2022

@lgfa29 If I change my nomad.service systemd file to include NOMAD_ADDR and NOMAD_CACERT, things seem to start working.

[Service]
User=root
Group=bin
Environment="NOMAD_ADDR=https://127.0.0.1:4646"
Environment="NOMAD_CACERT=/etc/nomad.d/tls/ca.crt"
ExecStart=/usr/local/bin/nomad agent -config=/etc/nomad.d
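
An equivalent way to carry the same environment without editing the shipped unit file is a systemd drop-in (sketch only, assuming the standard drop-in layout; the override.conf path is what systemctl edit nomad would create):

# /etc/systemd/system/nomad.service.d/override.conf
[Service]
Environment="NOMAD_ADDR=https://127.0.0.1:4646"
Environment="NOMAD_CACERT=/etc/nomad.d/tls/ca.crt"

followed by systemctl daemon-reload and systemctl restart nomad to pick it up.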

Logs Client

2022-12-01T09:22:22.339Z [TRACE] client: next heartbeat: period=16.674916997s
2022-12-01T09:22:22.339Z [DEBUG] client: state updated: node_status=ready
2022-12-01T09:22:23.660Z [DEBUG] client: state changed, updating node and re-registering
2022-12-01T09:22:23.668Z [INFO]  client: node registration complete
2022-12-01T09:22:23.668Z [TRACE] client: next heartbeat: period=11.042731673s
2022-12-01T09:22:28.451Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="303.738µs"
2022-12-01T09:22:31.681Z [TRACE] client: next heartbeat: period=11.332304028s
2022-12-01T09:22:31.682Z [DEBUG] client: updated allocations: index=782389 total=1 pulled=1 filtered=0
2022-12-01T09:22:31.682Z [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
2022-12-01T09:22:31.683Z [TRACE] client: next heartbeat: period=12.268676589s
2022-12-01T09:22:31.683Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 from=init to=init
2022-12-01T09:22:31.685Z [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
2022-12-01T09:22:31.686Z [TRACE] client.cpuset.v2: add allocation: name=nginx.nginx[0] id=fcee5e05-9a9d-7b1e-a411-368402614fd7
2022-12-01T09:22:31.702Z [TRACE] client.alloc_runner.runner_hook.alloc_health_watcher: watching: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 deadline="2022-12-01 09:27:31.702109952 +0000 UTC m=+325.009003477" checks=true min_healthy_time=10s
2022-12-01T09:22:31.703Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=gcr.io/google_containers/pause-amd64:3.1 image_id=sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e references=1
2022-12-01T09:22:32.075Z [TRACE] consul.sync: commit sync operations: ops="<1, 0, 0, 0>"
2022-12-01T09:22:32.075Z [TRACE] consul.sync: execute sync: reason=operations
2022-12-01T09:22:32.076Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.076Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 from=init to=prestart
2022-12-01T09:22:32.076Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=pending desired_status=""
2022-12-01T09:22:32.076Z [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx
2022-12-01T09:22:32.076Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.077Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=pending desired_status=""
2022-12-01T09:22:32.077Z [TRACE] consul.sync: must register service: id=_nomad-task-fcee5e05-9a9d-7b1e-a411-368402614fd7-group-nginx-nginx-http exists=false reason=operations
2022-12-01T09:22:32.078Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.078Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=pending desired_status=""
2022-12-01T09:22:32.080Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "logmon"]
2022-12-01T09:22:32.080Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad pid=17954
2022-12-01T09:22:32.080Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx path=/var/usrlocal/bin/nomad
2022-12-01T09:22:32.088Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx version=2
2022-12-01T09:22:32.088Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx @module=logmon address=/tmp/plugin3696784374 network=unix timestamp=2022-12-01T09:22:32.088Z
2022-12-01T09:22:32.089Z [TRACE] client.alloc_runner.task_runner.task_hook.logmon.stdio: waiting for stdio data: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx
2022-12-01T09:22:32.089Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx @module=logmon path=/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc/logs/.connect-proxy-nginx.stdout.fifo timestamp=2022-12-01T09:22:32.089Z
2022-12-01T09:22:32.089Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx @module=logmon path=/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc/logs/.connect-proxy-nginx.stderr.fifo timestamp=2022-12-01T09:22:32.089Z
2022-12-01T09:22:32.095Z [TRACE] client.alloc_runner.task_runner.task_hook.envoy_version: setting task envoy image: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx image=envoyproxy/envoy:v1.24.0
2022-12-01T09:22:32.096Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping Consul connect-proxy: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx task=connect-proxy-nginx service=nginx
2022-12-01T09:22:32.096Z [TRACE] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: no SI token to load: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx task=connect-proxy-nginx
2022-12-01T09:22:32.096Z [DEBUG] client.alloc_runner.task_runner.task_hook.envoy_bootstrap: check for SI token for task: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx task=connect-proxy-nginx exists=false
2022-12-01T09:22:32.096Z [INFO]  client.alloc_runner.task_runner.task_hook.envoy_bootstrap: bootstrapping envoy: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx namespace="" proxy_id=_nomad-task-fcee5e05-9a9d-7b1e-a411-368402614fd7-group-nginx-nginx-http-sidecar-proxy service=nginx gateway="" bootstrap_file=/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/connect-proxy-nginx/secrets/envoy_bootstrap.json grpc_addr=unix://alloc/tmp/consul_grpc.sock admin_bind=127.0.0.2:19001 ready_bind=127.0.0.1:19101
2022-12-01T09:22:32.103Z [DEBUG] consul.sync: sync complete: registered_services=1 deregistered_services=0 registered_checks=0 deregistered_checks=0
2022-12-01T09:22:32.160Z [TRACE] client.alloc_runner.task_runner: waiting for cgroup to exist for: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx allocID=fcee5e05-9a9d-7b1e-a411-368402614fd7 task="&{connect-proxy-nginx docker  map[args:[-c ${NOMAD_SECRETS_DIR}/envoy_bootstrap.json -l ${meta.connect.log_level} --concurrency ${meta.connect.proxy_concurrency} --disable-hot-restart] image:envoyproxy/envoy:v1.24.0 security_opt:[label=disable]] map[] [] <nil> [] [${attr.consul.version} semver >= 1.8.0 ${attr.consul.grpc} > 0] [] 0xc00012b1a0 0xc000d1c4b0 <nil> 0xc0009761c8 map[] 5s 0xc000e54cf0 [] false 0s [] []  connect-proxy:nginx <nil>}"
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=envoyproxy/envoy:v1.24.0 image_id=sha256:fecb2aa186ed7340309c1ace74f6f5d9b83b8108c5a15db10cf6ead2f853711b references=1
2022-12-01T09:22:32.161Z [TRACE] client.driver_mgr.docker: binding volumes: driver=docker task_name=connect-proxy-nginx volumes=["/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc:/alloc", "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/connect-proxy-nginx/local:/local", "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/connect-proxy-nginx/secrets:/secrets"]
2022-12-01T09:22:32.161Z [TRACE] client.driver_mgr.docker: no docker log driver provided, defaulting to plugin config: driver=docker task_name=connect-proxy-nginx
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=connect-proxy-nginx memory=314572800 memory_reservation=0 cpu_shares=100 cpu_quota=0 cpu_period=0
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=connect-proxy-nginx binds="[]string{\"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc:/alloc\", \"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/connect-proxy-nginx/local:/local\", \"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/connect-proxy-nginx/secrets:/secrets\"}"
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=connect-proxy-nginx network_mode=container:e4ce0ce4fb723122883efc23ac3087cd5d8696281385edf9405ecda7ae0b64f1
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=connect-proxy-nginx labels=map[com.hashicorp.nomad.alloc_id:fcee5e05-9a9d-7b1e-a411-368402614fd7]
2022-12-01T09:22:32.161Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=connect-proxy-nginx container_name=connect-proxy-nginx-fcee5e05-9a9d-7b1e-a411-368402614fd7
2022-12-01T09:22:32.190Z [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=5dfe03f625de57a963b254375cfd6ba1312a146b204299d190aa6d9e6bedfaef
2022-12-01T09:22:32.197Z [DEBUG] client: updated allocations: index=782391 total=1 pulled=0 filtered=1
2022-12-01T09:22:32.197Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T09:22:32.197Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T09:22:32.197Z [TRACE] client: next heartbeat: period=16.897047239s
2022-12-01T09:22:32.294Z [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=5dfe03f625de57a963b254375cfd6ba1312a146b204299d190aa6d9e6bedfaef
2022-12-01T09:22:32.294Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "docker_logger"]
2022-12-01T09:22:32.295Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/var/usrlocal/bin/nomad pid=18017
2022-12-01T09:22:32.295Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/var/usrlocal/bin/nomad
2022-12-01T09:22:32.302Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker @module=docker_logger address=/tmp/plugin2821505708 network=unix timestamp=2022-12-01T09:22:32.302Z
2022-12-01T09:22:32.302Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2022-12-01T09:22:32.303Z [TRACE] client.driver_mgr.docker.docker_logger.stdio: waiting for stdio data: driver=docker
2022-12-01T09:22:32.304Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2022-12-01T09:22:32.304Z
2022-12-01T09:22:32.307Z [TRACE] client.alloc_runner.task_runner: setting task state: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx state=running
2022-12-01T09:22:32.307Z [TRACE] client.alloc_runner.task_runner: appending task event: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=connect-proxy-nginx state=running event=Started
2022-12-01T09:22:32.308Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.308Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 from=prestart to=main
2022-12-01T09:22:32.308Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=running desired_status=""
2022-12-01T09:22:32.308Z [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx
2022-12-01T09:22:32.310Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.310Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=running desired_status=""
2022-12-01T09:22:32.312Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "logmon"]
2022-12-01T09:22:32.313Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx path=/var/usrlocal/bin/nomad pid=18027
2022-12-01T09:22:32.313Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx path=/var/usrlocal/bin/nomad
2022-12-01T09:22:32.322Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx version=2
2022-12-01T09:22:32.322Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx @module=logmon address=/tmp/plugin3109226874 network=unix timestamp=2022-12-01T09:22:32.322Z
2022-12-01T09:22:32.322Z [TRACE] client.alloc_runner.task_runner.task_hook.logmon.stdio: waiting for stdio data: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx
2022-12-01T09:22:32.323Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx @module=logmon path=/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc/logs/.nginx.stdout.fifo timestamp=2022-12-01T09:22:32.323Z
2022-12-01T09:22:32.323Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx path=/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc/logs/.nginx.stderr.fifo @module=logmon timestamp=2022-12-01T09:22:32.323Z
2022-12-01T09:22:32.327Z [INFO]  agent: (runner) creating new runner (dry: false, once: false)
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) final config: {"Consul":{"Address":"127.0.0.1:8500","Namespace":"","Auth":{"Enabled":false,"Username":""},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"Token":"","TokenFile":"","Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000}},"Dedup":{"Enabled":false,"MaxStale":2000000000,"Prefix":"consul-template/dedup/","TTL":15000000000,"BlockQueryWaitTime":60000000000},"DefaultDelims":{"Left":null,"Right":null},"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":0},"KillSignal":2,"LogLevel":"WARN","FileLog":{"LogFilePath":"","LogRotateBytes":0,"LogRotateDuration":86400000000000,"LogRotateMaxFiles":0},"MaxStale":2000000000,"PidFile":"","ReloadSignal":1,"Syslog":{"Enabled":false,"Facility":"LOCAL0","Name":"consul-template"},"Templates":[{"Backup":false,"Command":[],"CommandTimeout":30000000000,"Contents":"{{- range nomadVarListSafe \"nomad/jobs\" }}\n  {{- if nomadVarExists .Path }}\n    {{- with nomadVar .Path }}\n      {{- range .Tuples }}\n{{ .K | toUpper }}={{ .V | toJSON }}\n      {{- end }}\n    {{- end }}\n  {{- end }}\n{{- end }}\n","CreateDestDirs":true,"Destination":"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/secrets/nomad-variables.env","ErrMissingKey":false,"ErrFatal":true,"Exec":{"Command":[],"Enabled":false,"Env":{"Denylist":[],"Custom":[],"Pristine":false,"Allowlist":[]},"KillSignal":2,"KillTimeout":30000000000,"ReloadSignal":null,"Splay":0,"Timeout":30000000000},"Perms":420,"User":null,"Uid":null,"Group":null,"Gid":null,"Source":"","Wait":{"Enabled":false,"Min":0,"Max":0},"LeftDelim":"{{","RightDelim":"}}","FunctionDenylist":["plugin","writeToFile"],"SandboxPath":"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx"}],"TemplateErrFatal":null,"Vault":{"Address":"https://active.vault.service.consul:8200","Enabled":true,"Namespace":"","RenewToken":false,"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true},"SSL":{"CaCert":"/etc/nomad.d/tls/vault.crt","CaPath":"","Cert":"","Enabled":true,"Key":"","ServerName":"","Verify":true},"Transport":{"CustomDialer":null,"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"UnwrapToken":false,"DefaultLeaseDuration":300000000000,"LeaseRenewalThreshold":0.9,"K8SAuthRoleName":"","K8SServiceAccountTokenPath":"/run/secrets/kubernetes.io/serviceaccount/token","K8SServiceAccountToken":"","K8SServiceMountPath":"kubernetes"},"Nomad":{"Address":"http://127.0.0.1:4646","Enabled":true,"Namespace":"default","SSL":{"CaCert":"","CaPath":"","Cert":"","Enabled":false,"Key":"","ServerName":"","Verify":true},"AuthUsername":"","AuthPassword":"","Transport":{"CustomDialer":{},"DialKeepAlive":30000000000,"DialTimeout":30000000000,"DisableKeepAlives":false,"IdleConnTimeout":90000000000,"MaxIdleConns":100,"MaxIdleConnsPerHost":5,"TLSHandshakeTimeout":10000000000},"Retry":{"Attempts":12,"Backoff":250000000,"MaxBackoff":60000000000,"Enabled":true}},"Wait":{"Enabled":false,"Min":0,"Max":0},"Once":false,"ParseOnly":
false,"BlockQueryWaitTime":60000000000}
2022-12-01T09:22:32.327Z [INFO]  agent: (runner) creating watcher
2022-12-01T09:22:32.327Z [INFO]  agent: (runner) starting
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) running initial templates
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) missing data for 1 dependencies
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) missing dependency: nomad.var.list(nomad/[email protected])
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) add used dependency nomad.var.list(nomad/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) was not watching 1 dependencies
2022-12-01T09:22:32.327Z [DEBUG] agent: (watcher) adding nomad.var.list(nomad/[email protected])
2022-12-01T09:22:32.327Z [TRACE] agent: (watcher) nomad.var.list(nomad/[email protected]) starting
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.327Z [DEBUG] agent: (runner) watching 1 dependencies
2022-12-01T09:22:32.327Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.327Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T09:22:32.329Z [DEBUG] http: request complete: method=GET path="/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms" duration=1.358722ms
2022-12-01T09:22:32.329Z [TRACE] agent: nomad.var.list(nomad/[email protected]): returned 2 paths
2022-12-01T09:22:32.329Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) marking successful data response
2022-12-01T09:22:32.329Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) successful contact, resetting retries
2022-12-01T09:22:32.329Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) received data
2022-12-01T09:22:32.329Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.329Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?index=782258&stale=true&wait=1m0s
2022-12-01T09:22:32.329Z [DEBUG] agent: (runner) receiving dependency nomad.var.list(nomad/[email protected])
2022-12-01T09:22:32.329Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.329Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) missing data for 2 dependencies
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) missing dependency: nomad.var.get(nomad/[email protected])
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) missing dependency: nomad.var.get(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) add used dependency nomad.var.get(nomad/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) add used dependency nomad.var.get(nomad/jobs/nginx/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T09:22:32.330Z [TRACE] client: next heartbeat: period=17.130014181s
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) was not watching 2 dependencies
2022-12-01T09:22:32.330Z [DEBUG] agent: (watcher) adding nomad.var.get(nomad/[email protected])
2022-12-01T09:22:32.330Z [TRACE] agent: (watcher) nomad.var.get(nomad/[email protected]) starting
2022-12-01T09:22:32.330Z [DEBUG] agent: (watcher) adding nomad.var.get(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.330Z [TRACE] agent: (watcher) nomad.var.get(nomad/jobs/nginx/[email protected]) starting
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) nomad.var.list(nomad/[email protected]) is still needed
2022-12-01T09:22:32.330Z [DEBUG] agent: (runner) watching 3 dependencies
2022-12-01T09:22:32.331Z [TRACE] agent: (view) nomad.var.get(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.331Z [TRACE] agent: nomad.var.get(nomad/[email protected]): GET /v1/var/nomad/jobs?stale=true&wait=1m0s
2022-12-01T09:22:32.331Z [TRACE] agent: (view) nomad.var.get(nomad/jobs/nginx/[email protected]) starting fetch
2022-12-01T09:22:32.331Z [TRACE] agent: nomad.var.get(nomad/jobs/nginx/[email protected]): GET /v1/var/nomad/jobs/nginx/nginx?stale=true&wait=1m0s
2022-12-01T09:22:32.333Z [DEBUG] http: request complete: method=GET path="/v1/var/nomad/jobs?namespace=default&stale=&wait=60000ms" duration=1.17267ms
2022-12-01T09:22:32.333Z [TRACE] agent: nomad.var.get(nomad/[email protected]): returned "nomad/jobs"
2022-12-01T09:22:32.333Z [TRACE] agent: (view) nomad.var.get(nomad/[email protected]) marking successful data response
2022-12-01T09:22:32.333Z [TRACE] agent: (view) nomad.var.get(nomad/[email protected]) successful contact, resetting retries
2022-12-01T09:22:32.333Z [TRACE] agent: (view) nomad.var.get(nomad/[email protected]) received data
2022-12-01T09:22:32.333Z [TRACE] agent: (view) nomad.var.get(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.333Z [TRACE] agent: nomad.var.get(nomad/[email protected]): GET /v1/var/nomad/jobs?index=749651&stale=true&wait=1m0s
2022-12-01T09:22:32.333Z [TRACE] client: next heartbeat: period=17.714416532s
2022-12-01T09:22:32.333Z [DEBUG] agent: (runner) receiving dependency nomad.var.get(nomad/[email protected])
2022-12-01T09:22:32.333Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.333Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.335Z [DEBUG] http: request complete: method=GET path="/v1/var/nomad/jobs/nginx/nginx?namespace=default&stale=&wait=60000ms" duration=1.444936ms
2022-12-01T09:22:32.336Z [TRACE] agent: nomad.var.get(nomad/jobs/nginx/[email protected]): returned "nomad/jobs/nginx/nginx"
2022-12-01T09:22:32.336Z [TRACE] agent: (view) nomad.var.get(nomad/jobs/nginx/[email protected]) marking successful data response
2022-12-01T09:22:32.336Z [TRACE] agent: (view) nomad.var.get(nomad/jobs/nginx/[email protected]) successful contact, resetting retries
2022-12-01T09:22:32.336Z [TRACE] agent: (view) nomad.var.get(nomad/jobs/nginx/[email protected]) received data
2022-12-01T09:22:32.336Z [TRACE] agent: (view) nomad.var.get(nomad/jobs/nginx/[email protected]) starting fetch
2022-12-01T09:22:32.336Z [TRACE] agent: nomad.var.get(nomad/jobs/nginx/[email protected]): GET /v1/var/nomad/jobs/nginx/nginx?index=782258&stale=true&wait=1m0s
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing data for 2 dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing dependency: nomad.var.block(nomad/[email protected])
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing dependency: nomad.var.get(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) add used dependency nomad.var.block(nomad/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) was not watching 1 dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (watcher) adding nomad.var.block(nomad/[email protected])
2022-12-01T09:22:32.336Z [TRACE] agent: (watcher) nomad.var.block(nomad/[email protected]) starting
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.list(nomad/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.get(nomad/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.get(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) watching 4 dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) receiving dependency nomad.var.get(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.336Z [TRACE] client: next heartbeat: period=17.749626528s
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing data for 2 dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing dependency: nomad.var.block(nomad/[email protected])
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) missing dependency: nomad.var.block(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) add used dependency nomad.var.block(nomad/jobs/nginx/[email protected]) to missing since isLeader but do not have a watcher
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) was not watching 1 dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (watcher) adding nomad.var.block(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.336Z [TRACE] agent: (watcher) nomad.var.block(nomad/jobs/nginx/[email protected]) starting
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.block(nomad/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.get(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.list(nomad/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) nomad.var.get(nomad/[email protected]) is still needed
2022-12-01T09:22:32.336Z [DEBUG] agent: (runner) watching 5 dependencies
2022-12-01T09:22:32.336Z [TRACE] agent: (view) nomad.var.block(nomad/jobs/nginx/[email protected]) starting fetch
2022-12-01T09:22:32.337Z [TRACE] agent: nomad.var.block(nomad/jobs/nginx/[email protected]): GET /v1/var/nomad/jobs/nginx/nginx?stale=true&wait=1m0s
2022-12-01T09:22:32.337Z [TRACE] agent: (view) nomad.var.block(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.337Z [TRACE] agent: nomad.var.block(nomad/[email protected]): GET /v1/var/nomad/jobs?stale=true&wait=1m0s
2022-12-01T09:22:32.338Z [DEBUG] http: request complete: method=GET path="/v1/var/nomad/jobs/nginx/nginx?namespace=default&stale=&wait=60000ms" duration=1.428294ms
2022-12-01T09:22:32.338Z [TRACE] agent: nomad.var.block(nomad/jobs/nginx/[email protected]): returned "nomad/jobs/nginx/nginx"
2022-12-01T09:22:32.338Z [TRACE] agent: (view) nomad.var.block(nomad/jobs/nginx/[email protected]) marking successful data response
2022-12-01T09:22:32.338Z [TRACE] agent: (view) nomad.var.block(nomad/jobs/nginx/[email protected]) successful contact, resetting retries
2022-12-01T09:22:32.338Z [TRACE] agent: (view) nomad.var.block(nomad/jobs/nginx/[email protected]) received data
2022-12-01T09:22:32.338Z [TRACE] agent: (view) nomad.var.block(nomad/jobs/nginx/[email protected]) starting fetch
2022-12-01T09:22:32.338Z [TRACE] agent: nomad.var.block(nomad/jobs/nginx/[email protected]): GET /v1/var/nomad/jobs/nginx/nginx?index=782258&stale=true&wait=1m0s
2022-12-01T09:22:32.339Z [TRACE] client: next heartbeat: period=18.576400531s
2022-12-01T09:22:32.339Z [DEBUG] http: request complete: method=GET path="/v1/var/nomad/jobs?namespace=default&stale=&wait=60000ms" duration=1.974996ms
2022-12-01T09:22:32.339Z [DEBUG] agent: (runner) receiving dependency nomad.var.block(nomad/jobs/nginx/[email protected])
2022-12-01T09:22:32.339Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.339Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.339Z [TRACE] agent: nomad.var.block(nomad/[email protected]): returned "nomad/jobs"
2022-12-01T09:22:32.339Z [TRACE] agent: (view) nomad.var.block(nomad/[email protected]) marking successful data response
2022-12-01T09:22:32.339Z [TRACE] agent: (view) nomad.var.block(nomad/[email protected]) successful contact, resetting retries
2022-12-01T09:22:32.339Z [TRACE] agent: (view) nomad.var.block(nomad/[email protected]) received data
2022-12-01T09:22:32.339Z [TRACE] agent: (view) nomad.var.block(nomad/[email protected]) starting fetch
2022-12-01T09:22:32.339Z [TRACE] agent: nomad.var.block(nomad/[email protected]): GET /v1/var/nomad/jobs?index=749651&stale=true&wait=1m0s
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) missing data for 1 dependencies
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) missing dependency: nomad.var.block(nomad/[email protected])
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) missing data for 1 dependencies
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.list(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.get(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.block(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.get(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.block(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) watching 5 dependencies
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) receiving dependency nomad.var.block(nomad/[email protected])
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) initiating run
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) checking template 1548d3a7b1e21a57b8e9bba313608868
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) rendering "(dynamic)" => "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/secrets/nomad-variables.env"
2022-12-01T09:22:32.340Z [INFO]  agent: (runner) rendered "(dynamic)" => "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/secrets/nomad-variables.env"
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) diffing and updating dependencies
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.list(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.get(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.block(nomad/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.get(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.340Z [DEBUG] agent: (runner) nomad.var.block(nomad/jobs/nginx/[email protected]) is still needed
2022-12-01T09:22:32.341Z [DEBUG] agent: (runner) watching 5 dependencies
2022-12-01T09:22:32.341Z [DEBUG] agent: (runner) all templates rendered
2022-12-01T09:22:32.342Z [TRACE] client.alloc_runner.task_runner: waiting for cgroup to exist for: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx allocID=fcee5e05-9a9d-7b1e-a411-368402614fd7 task="&{nginx docker  map[image:nginx:1.21 security_opt:[label=disable]] map[] [] <nil> [0xc000054a80] [] [] 0xc00012b140 0xc000d1c3f0 <nil> <nil> map[] 5s 0xc000e54c70 [] false 0s [] []   <nil>}"
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: image reference count incremented: driver=docker image_name=nginx:1.21 image_id=sha256:0e901e68141fd02f237cf63eb842529f8a9500636a9419e3cf4fb986b8fe3d5d references=1
2022-12-01T09:22:32.344Z [TRACE] client.driver_mgr.docker: binding volumes: driver=docker task_name=nginx volumes=["/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc:/alloc", "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/local:/local", "/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/secrets:/secrets"]
2022-12-01T09:22:32.344Z [TRACE] client.driver_mgr.docker: no docker log driver provided, defaulting to plugin config: driver=docker task_name=nginx
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: configured resources: driver=docker task_name=nginx memory=268435456 memory_reservation=0 cpu_shares=200 cpu_quota=0 cpu_period=0
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: binding directories: driver=docker task_name=nginx binds="[]string{\"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/alloc:/alloc\", \"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/local:/local\", \"/var/nomad/alloc/fcee5e05-9a9d-7b1e-a411-368402614fd7/nginx/secrets:/secrets\"}"
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: configuring network mode for task group: driver=docker task_name=nginx network_mode=container:e4ce0ce4fb723122883efc23ac3087cd5d8696281385edf9405ecda7ae0b64f1
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: applied labels on the container: driver=docker task_name=nginx labels=map[com.hashicorp.nomad.alloc_id:fcee5e05-9a9d-7b1e-a411-368402614fd7]
2022-12-01T09:22:32.344Z [DEBUG] client.driver_mgr.docker: setting container name: driver=docker task_name=nginx container_name=nginx-fcee5e05-9a9d-7b1e-a411-368402614fd7
2022-12-01T09:22:32.371Z [INFO]  client.driver_mgr.docker: created container: driver=docker container_id=1ef03c8246da43f45a4310f3f348898b41c51be431de1c6d8b42012e64a79c63
2022-12-01T09:22:32.457Z [DEBUG] client: updated allocations: index=782392 total=1 pulled=0 filtered=1
2022-12-01T09:22:32.457Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T09:22:32.457Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T09:22:32.457Z [TRACE] client: next heartbeat: period=13.499999202s
2022-12-01T09:22:32.460Z [INFO]  client.driver_mgr.docker: started container: driver=docker container_id=1ef03c8246da43f45a4310f3f348898b41c51be431de1c6d8b42012e64a79c63
2022-12-01T09:22:32.460Z [DEBUG] client.driver_mgr.docker.docker_logger: starting plugin: driver=docker path=/var/usrlocal/bin/nomad args=["/var/usrlocal/bin/nomad", "docker_logger"]
2022-12-01T09:22:32.460Z [DEBUG] client.driver_mgr.docker.docker_logger: plugin started: driver=docker path=/var/usrlocal/bin/nomad pid=18090
2022-12-01T09:22:32.460Z [DEBUG] client.driver_mgr.docker.docker_logger: waiting for RPC address: driver=docker path=/var/usrlocal/bin/nomad
2022-12-01T09:22:32.468Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: plugin address: driver=docker @module=docker_logger address=/tmp/plugin2389038006 network=unix timestamp=2022-12-01T09:22:32.468Z
2022-12-01T09:22:32.469Z [DEBUG] client.driver_mgr.docker.docker_logger: using plugin: driver=docker version=2
2022-12-01T09:22:32.470Z [TRACE] client.driver_mgr.docker.docker_logger.stdio: waiting for stdio data: driver=docker
2022-12-01T09:22:32.470Z [DEBUG] client.driver_mgr.docker.docker_logger.nomad: using client connection initialized from environment: driver=docker @module=docker_logger timestamp=2022-12-01T09:22:32.470Z
2022-12-01T09:22:32.472Z [TRACE] client.alloc_runner.task_runner: setting task state: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx state=running
2022-12-01T09:22:32.472Z [TRACE] client.alloc_runner.task_runner: appending task event: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 task=nginx state=running event=Started
2022-12-01T09:22:32.473Z [TRACE] client.alloc_runner: handling task state update: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 done=false
2022-12-01T09:22:32.473Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 from=main to=poststart
2022-12-01T09:22:32.473Z [TRACE] client.alloc_runner.task_coordinator: state transition: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 from=poststart to=wait_alloc
2022-12-01T09:22:32.473Z [TRACE] client.alloc_runner: sending updated alloc: alloc_id=fcee5e05-9a9d-7b1e-a411-368402614fd7 client_status=running desired_status=""
2022-12-01T09:22:32.713Z [DEBUG] client: updated allocations: index=782393 total=1 pulled=0 filtered=1
2022-12-01T09:22:32.713Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
2022-12-01T09:22:32.713Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
2022-12-01T09:22:32.714Z [TRACE] client: next heartbeat: period=13.217798971s

@ahjohannessen

@lgfa29 So my question is, why is it necessary for me to do this:

Environment="NOMAD_ADDR=https://127.0.0.1:4646"
Environment="NOMAD_CACERT=/etc/nomad.d/tls/ca.crt"

when the base.hcl has tls.http set to true. My guess is that the integrated consul-template is not being fed the correct configuration by Nomad.
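
For reference, the tls stanza that ends up in base.hcl looks roughly like this (a sketch from memory, not the literal file; the rpc line and the exact file paths are approximations):

tls {
  http    = true
  rpc     = true
  ca_file = "/etc/nomad.d/tls/ca.crt"
  # cert_file and key_file omitted here
  verify_server_hostname = "true"
  verify_https_client    = "false"
}

So my expectation was that the built-in consul-template would talk to the local agent over https using that same CA, without me having to set NOMAD_ADDR or NOMAD_CACERT in the systemd unit.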

@ahjohannessen commented Dec 1, 2022

Perhaps I was too quick to call it a success. After restarting nomad, I get:

2022-12-01T09:35:07.701Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="127.907µs"
2022-12-01T09:35:12.343Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T09:35:12.343Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T09:35:12.345Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "https://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": http: server gave HTTP response to HTTPS client (retry attempt 7 after "16s")
2022-12-01T09:35:17.704Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="513.854µs"
2022-12-01T09:35:17.773Z [TRACE] client: next heartbeat: period=17.388133089s
2022-12-01T09:35:26.587Z [TRACE] consul.sync: execute sync: reason=periodic
2022-12-01T09:35:27.708Z [DEBUG] http: request complete: method=GET path=/v1/agent/health?type=client duration="192.298µs"
2022-12-01T09:35:28.345Z [TRACE] agent: (view) nomad.var.list(nomad/[email protected]) starting fetch
2022-12-01T09:35:28.345Z [TRACE] agent: nomad.var.list(nomad/[email protected]): GET /v1/vars/?stale=true&wait=1m0s
2022-12-01T09:35:28.346Z [WARN]  agent: (view) nomad.var.list(nomad/[email protected]): Get "https://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": http: server gave HTTP response to HTTPS client (retry attempt 8 after "32s")
2022-12-01T09:35:29.291Z [TRACE] client: next heartbeat: period=12.273146649s

@ahjohannessen

If I remove Environment="NOMAD_ADDR=https://127.0.0.1:4646" and Environment="NOMAD_CACERT=/etc/nomad.d/tls/ca.crt" from the systemd unit, things start to work again. This is odd.

I wonder what is required to get this to work? Is it only the client agent part that is the issue, or does the server setup also need to be looked at?

There has to be something wrong with how I set up Nomad clients :/

@ahjohannessen

@lgfa29 Can it be that this is the bug?

@MikeN123 commented Dec 1, 2022

What I'm seeing in my case is that:

  • Nomad correctly starts the job with right templating
  • After a minute or so it detects a 'change' and misses all the values
  • Then it starts working correctly again

With a nomadVarList this results in a broken job on the auto-restart. With a nomadVarListSafe it does not restart and it stays working, but logs the following:

[screenshot of the logged warnings attached]

Note that the job is working fine after 16:23:50 already. This is with a fairly simple template block:

      template {
       data = <<EOF
{{- range nomadVarListSafe (print "/nomad/jobs/" (env "NOMAD_JOB_NAME")) }}
  {{- with nomadVar .Path }}
    {{- range .Tuples }}
{{ .K }}={{ .V | toJSON }}
    {{- end }}
  {{- end }}
{{- end }}
       EOF
       destination = "${NOMAD_SECRETS_DIR}/env.vars"
       env = true
      }

I have the feeling this happens when the old task is removed (it's on an update policy with canary = 1 and max_parallel = 1, with a count = 1). I think the removal of the old task may be doing something to the service identities that temporarily causes the new job to lose access to the variables?

@ahjohannessen

I managed to get it to work, and then fail, with http. With https I get:

Get "https://127.0.0.1:4646/v1/vars?namespace=default&prefix=nomad%2Fjobs&stale=&wait=60000ms": http: server gave HTTP response to HTTPS client

The odd thing is that with the http protocol set in NOMAD_ADDR for the systemd unit, I can get variables to work (until it starts failing when the job is stopped or restarted, or the client is restarted). But if I, as a user, do export NOMAD_ADDR=http://127.0.0.1:4646 and nomad monitor -log-level=TRACE, I get this:

==> Error starting monitor: Unexpected response code: 400 (Client sent an HTTP request to an HTTPS server.)

Seems like the Nomad client cannot make up its mind.

@lgfa29 commented Dec 1, 2022

Ok, there's a lot going on here so let's try to take a step back 😅

You shouldn't have to set any environment variables. In fact, I suspect we should not load them at all in consul-template but that's something I need to investigate more.

The "server gave HTTP response to HTTPS client" logs are interesting because they indicate that your Nomad agent is running without mTLS (the server here is the Nomad client, and the client is consul-template). The error

==> Error starting monitor: Unexpected response code: 400 (Client sent an HTTP request to an HTTPS server.)

is expected, since your Nomad servers are running with mTLS and you sent a plain HTTP request.

Could you try modifying your systemd file to:

  1. remove any of the environment variables.
  2. load only the config files you expect to load instead of the entire directory. You can set the -config flag multiple times, once for each file. And make sure you only have one acl block across all of them.

Another thing I noticed now is that you have

tls {
  # ...
    verify_server_hostname = "true"
    verify_https_client = "false"
}

set as strings. I don't think it matters, but it would be worth correcting them to actual booleans just in case.
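
For example, the corrected block would be something like this (same values as the snippet above, just as real booleans):

tls {
  # ...
    verify_server_hostname = true
    verify_https_client = false
}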

@lgfa29 commented Dec 1, 2022

@MikeN123 do you get any error messages in your logs?

@MikeN123 commented Dec 1, 2022

@lgfa29 nothing unexpected at the default log levels, but I still need to dig into this a bit more when I find some time.

@ahjohannessen

@lgfa29 Those configs are generated by ansible-nomad. I will try your suggestions tomorrow :)

@ahjohannessen

@lgfa29 I think I found an issue in my setup with respect to https vs. http.

  1. remove any of the environment variables.

When I initially set up Ansible for this, I made it place a file, /etc/profile.d/nomad.sh, with export NOMAD_ADDR=... and export NOMAD_CACERT=... on the clients. This seemed to cause issues.

Now I have removed environment variables related to NOMAD_ADDR from systemd and /etc/profile.d/, and things "seem" to be working better now. I guess I need to wait and see how it goes.

  2. load only the config files you expect to load instead of the entire directory. You can set the -config flag multiple times, once for each file. And make sure you only have one acl block across all of them.

I tried this as well. The ACL info from nomad agent-info -json is the same whether I load the config files explicitly or use directory mode.

...set as strings. I don't think it matters, but it would be worth correcting them to actual booleans just in case.

It seems it does not matter. I ran nomad agent-info -json and the TLSConfig values were the same with strings and with actual booleans. However, I will make a PR to ansible-nomad to make it use booleans instead of strings.

@MikeN123 commented Dec 2, 2022

In my case I don't have any 'weird' configs. mTLS is enabled, as are ACLs, but there are no special ACL policies.

tls {
  http = true
  rpc = true
  ca_file = "/etc/nomad.d/nomad-ca.pem"
  cert_file = "/etc/nomad.d/nomad.pem"
  key_file = "/etc/nomad.d/nomad-key.pem"
  verify_server_hostname = true
  verify_https_client = false
}
consul {
  token = "xxx"
}
acl {
  enabled = true
}

I just deployed a job that does not use a canary/rolling update, and did not see the issue there. Will do some more experimentation to see if it's indeed related to that.

@MikeN123 commented Dec 2, 2022

The rolling update seems to be the problem. The deployment of the new task finishes successfully, the old task is stopped, and then no variables are returned anymore.

My job has the following relevant config parts:

    count = 1
 
    update {
      max_parallel = 1
      canary = 1
      min_healthy_time = "30s"
      health_check = "checks"
      auto_revert = true
      auto_promote = true
    }

[...]

      template {
       data = <<EOF
{{- range nomadVarListSafe (print "/nomad/jobs/" (env "NOMAD_JOB_NAME")) }}
  {{- with nomadVar .Path }}
    {{- range .Tuples }}
{{ .K }}={{ .V | toJSON }}
    {{- end }}
  {{- end }}
{{- end }}
       EOF
       destination = "${NOMAD_SECRETS_DIR}/env.vars"
       env = true
      }

The error shown is:

[screenshot of the error message attached]

This does not break the app because I am using nomadVarListSafe; with nomadVarList it would break.

I have a full TRACE log of the client, but am a bit reluctant to post it due to secrets that may be in there. I do think I need a log of the leader though, as I don't see anything special in the client logs (it just starts returning 0 instead of 1 paths). Will look into that later today.

@lgfa29 commented Dec 2, 2022

@ahjohannessen

Now I have removed environment variables related to NOMAD_ADDR from systemd and /etc/profile.d/ and things "seem" working better now. I guess I need to wait and see how goes.

Awesome! I hope it stays that way 😄

I will see if there's anything we can do from the Nomad side to prevent problems like this.

@MikeN123

I have a full TRACE log of the client, but am a bit reluctant to post it due to secrets that may be in there.

That's understandable; if you feel more comfortable, you can email the logs to us privately at [email protected].

I will try to reproduce it with an update block, but having the template rendering log and the exact error message would be helpful.

@MikeN123 commented Dec 3, 2022

@lgfa29 I've sent the logs to the mail address; I could not find anything specific in them. This is with nomad monitor -log-level=TRACE on the client and the leader.

The message shown in the interface for this run is:

[screenshot of the message shown in the UI attached]

Nothing seems to break, as the template uses nomadVarListSafe, which waits until the data is available, and the task is not restarted but just keeps running.

@ahjohannessen

@lgfa29 @MikeN123 Should I close this issue, since my problems were caused by my own mistakes? I'm happy to leave it open though.

@lgfa29 commented Dec 5, 2022

Glad it's all working now @ahjohannessen. I opened #15459 to improve our handling of this to prevent the problem from happening in the first place.

@MikeN123 I looked at the logs you sent and I don't see the same signals as what was reported by @ahjohannessen, so I'm fairly confident that it's not the same problem.

I also haven't been able to reproduce what you described and don't see anything particularly wrong there. Here is my minimal reproduction attempt:

  1. start nomad agent -dev
  2. write a variable with some content: nomad var put nomad/jobs/example key=value
  3. run the example job below
  4. modify sleep 3 to sleep 2 and run the job again.
job "example" {
  datacenters = ["dc1"]

  group "cache" {
    update {
      max_parallel     = 1
      canary           = 1
      min_healthy_time = "30s"
      auto_revert      = true
      auto_promote     = true
    }

    task "echo" {
      driver = "raw_exec"

      config {
        command = "/bin/bash"
        args    = ["local/script.sh"]
      }

      template {
        data        = <<EOF
while true; do
{{- range nomadVarList (print "/nomad/jobs/" (env "NOMAD_JOB_NAME")) }}
  {{- with nomadVar .Path }}
    {{- range .Tuples }}
  echo {{ .K }}={{ .V | toJSON }}
    {{- end }}
  {{- end }}
{{- end }}
  sleep 3
done
       EOF
        destination = "${NOMAD_TASK_DIR}/script.sh"
      }
    }
  }
}

I have not seen any issues when running the job, upgrading it, or waiting around for a bit.

So I'm inclined to close this since the original issue has been figured out, but @MikeN123 if you could try my example in your environment and see if it triggers the issue, or modify it so it's closer to your failing job, and then open a new issue I would appreciate it (and we would also stop spamming @ahjohannessen 😄).

@lgfa29 closed this as completed on Dec 5, 2022
@github-actions bot commented Apr 5, 2023

I'm going to lock this issue because it has been closed for 120 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

github-actions bot locked as resolved and limited conversation to collaborators on Apr 5, 2023