Write logs to fifo from a driver run as non-root user #189

Open
rina-spinne opened this issue Jan 29, 2022 · 6 comments


@rina-spinne

Nomad version

Nomad v1.2.4 (55e5c49b99a6fd2bf925e7fd98d95829776c331f)

Operating system and Environment details

Void Linux

Issue

While trying to set up the podman driver with a non-root user, all containers fail to start because the podman driver cannot write to the log FIFO files. The Nomad client is running as root.

This happens because both the <alloc-dir> and the inner <alloc-dir>/<alloc-id>/alloc/logs/.<task>.stdout.fifo are owned by root with user-only permissions. Even if I grant the podman user permission to access the <alloc-dir>, the FIFOs are still created with those restricted permissions.

From the driver documentation, my understanding is that drivers are expected to be able to write to the alloc log FIFO files, but because of this permission design it doesn't seem possible to run any driver as a non-root user.

This works well when running podman as root.
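
For reference, a minimal shell sketch of how to inspect the ownership and mode of the alloc log directory and the FIFO (paths are taken from the client log below; the exact permissions are whatever the Nomad client created them with):

# Run as root on the client node; paths come from the client log below.
ls -ld /tmp/NomadClient4275406614/070a3169-6c79-8cc5-32f5-4b80c61d6d10/alloc/logs
stat -c '%U:%G %a %n' \
  /tmp/NomadClient4275406614/070a3169-6c79-8cc5-32f5-4b80c61d6d10/alloc/logs/.redis.stdout.fifo
# Per the report, both show root ownership with no group/other access,
# which is why the rootless conmon cannot open the FIFO.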

Reproduction steps

Start the podman service as a non-root user and try to run any job
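
Roughly, as a sketch (the exact commands depend on the distro and on how the rootless podman API socket is managed):

# As the non-root podman user, expose the rootless podman API socket
# (on systemd hosts, systemctl --user start podman.socket does the same).
podman system service --time=0 "unix://$XDG_RUNTIME_DIR/podman/podman.sock" &

# Point the driver's socket_path at that socket (see the plugin config
# later in this thread), then submit any job, e.g. the example linked below:
nomad job run examples/jobs/redis_deprecated.nomad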

Expected Result

Containers start when the podman service runs as a non-root user.

Actual Result

The job fails to start because podman cannot write to the log FIFO.

Job file (if appropriate)

https://github.com/hashicorp/nomad-driver-podman/blob/main/examples/jobs/redis_deprecated.nomad

Nomad Client logs (if appropriate)

$ nomad agent -dev -plugin-dir /usr/lib/nomad/plugins/ -bind 0.0.0.0
    2022-01-29T09:20:09.519Z [DEBUG] worker: dequeued evaluation: worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 eval_id=7c921fcf-641d-9a96-e2f1-7d955cc2fb72 type=service namespace=default job_id=redis node_id="" triggered_by=job-register
    2022-01-29T09:20:09.520Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=7c921fcf-641d-9a96-e2f1-7d955cc2fb72 job_id=redis namespace=default worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3
  results=
  | Total changes: (place 1) (destructive 0) (inplace 0) (stop 0)
  | Created Deployment: "b724ae9c-c092-77dc-06d8-18576acccc8d"
  | Desired Changes for "cache": (place 1) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 0) (canary 0)

    2022-01-29T09:20:09.521Z [DEBUG] http: request complete: method=PUT path=/v1/jobs duration=15.623236ms
    2022-01-29T09:20:09.532Z [DEBUG] worker: submitted plan for evaluation: worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 eval_id=7c921fcf-641d-9a96-e2f1-7d955cc2fb72
    2022-01-29T09:20:09.533Z [DEBUG] worker.service_sched: setting eval status: eval_id=7c921fcf-641d-9a96-e2f1-7d955cc2fb72 job_id=redis namespace=default worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 status=complete
    2022-01-29T09:20:09.534Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/7c921fcf-641d-9a96-e2f1-7d955cc2fb72 duration=3.079759ms
    2022-01-29T09:20:09.537Z [DEBUG] client: updated allocations: index=11 total=1 pulled=1 filtered=0
    2022-01-29T09:20:09.538Z [DEBUG] client: allocation updates: added=1 removed=0 updated=0 ignored=0
    2022-01-29T09:20:09.538Z [DEBUG] worker: updated evaluation: worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 eval="<Eval \"7c921fcf-641d-9a96-e2f1-7d955cc2fb72\" JobID: \"redis\" Namespace: \"default\">"
    2022-01-29T09:20:09.543Z [DEBUG] worker: ack evaluation: worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 eval_id=7c921fcf-641d-9a96-e2f1-7d955cc2fb72 type=service namespace=default job_id=redis node_id="" triggered_by=job-register
    2022-01-29T09:20:09.560Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/7c921fcf-641d-9a96-e2f1-7d955cc2fb72/allocations duration=2.834149ms
    2022-01-29T09:20:09.572Z [DEBUG] client: allocation updates applied: added=1 removed=0 updated=0 ignored=0 errors=0
    2022-01-29T09:20:09.576Z [DEBUG] client.alloc_runner.task_runner: lifecycle start condition has been met, proceeding: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis
    2022-01-29T09:20:09.579Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: starting plugin: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis path=/usr/bin/nomad args=["/usr/bin/nomad", "logmon"]
    2022-01-29T09:20:09.580Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: plugin started: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis path=/usr/bin/nomad pid=3974
    2022-01-29T09:20:09.580Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: waiting for RPC address: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis path=/usr/bin/nomad
    2022-01-29T09:20:09.617Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon.nomad: plugin address: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis @module=logmon address=/tmp/plugin2044465918 network=unix timestamp=2022-01-29T09:20:09.617Z
    2022-01-29T09:20:09.618Z [DEBUG] client.alloc_runner.task_runner.task_hook.logmon: using plugin: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis version=2
    2022-01-29T09:20:09.628Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis path=/tmp/NomadClient4275406614/070a3169-6c79-8cc5-32f5-4b80c61d6d10/alloc/logs/.redis.stdout.fifo @module=logmon timestamp=2022-01-29T09:20:09.627Z
    2022-01-29T09:20:09.628Z [INFO]  client.alloc_runner.task_runner.task_hook.logmon.nomad: opening fifo: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis @module=logmon path=/tmp/NomadClient4275406614/070a3169-6c79-8cc5-32f5-4b80c61d6d10/alloc/logs/.redis.stderr.fifo timestamp=2022-01-29T09:20:09.628Z
    2022-01-29T09:20:09.696Z [DEBUG] client.driver_mgr.nomad-driver-podman: Found imageID: driver=podman 4f97d722a1f0b864608aae4158d08b63d0d9ee301e6c3dec105ff57b21d5c350="for image" @module=podman docker.io/library/redis:3.2="in local storage" timestamp=2022-01-29T09:20:09.695Z
    2022-01-29T09:20:09.770Z [DEBUG] client: updated allocations: index=13 total=1 pulled=0 filtered=1
    2022-01-29T09:20:09.770Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2022-01-29T09:20:09.770Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2022-01-29T09:20:10.572Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/7c921fcf-641d-9a96-e2f1-7d955cc2fb72 duration="968.994µs"
    2022-01-29T09:20:10.582Z [DEBUG] http: request complete: method=GET path=/v1/evaluation/7c921fcf-641d-9a96-e2f1-7d955cc2fb72/allocations duration=1.964284ms
    2022-01-29T09:20:10.582Z [DEBUG] client.driver_mgr.nomad-driver-podman: Cleaning up: driver=podman @module=podman container=8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10 timestamp=2022-01-29T09:20:10.581Z
    2022-01-29T09:20:10.595Z [DEBUG] http: request complete: method=GET path="/v1/deployment/b724ae9c-c092-77dc-06d8-18576acccc8d?stale=&wait=2000ms" duration=2.787649ms
    2022-01-29T09:20:10.729Z [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis
  error=
  | rpc error: code = Unknown desc = failed to start task, could not start container: unknown error, status code: 500: {"cause":"exit status 1","message":"exit status 1","response":500}

    2022-01-29T09:20:10.729Z [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10 task=redis reason="Error was unrecoverable"
    2022-01-29T09:20:10.730Z [INFO]  client.gc: marking allocation for GC: alloc_id=070a3169-6c79-8cc5-32f5-4b80c61d6d10
    2022-01-29T09:20:10.823Z [DEBUG] nomad.client: adding evaluations for rescheduling failed allocations: num_evals=1
    2022-01-29T09:20:10.828Z [DEBUG] client: updated allocations: index=14 total=1 pulled=0 filtered=1
    2022-01-29T09:20:10.828Z [DEBUG] http: request complete: method=GET path="/v1/deployment/b724ae9c-c092-77dc-06d8-18576acccc8d?index=11&stale=&wait=2000ms" duration=225.896419ms
    2022-01-29T09:20:10.829Z [DEBUG] client: allocation updates: added=0 removed=0 updated=0 ignored=1
    2022-01-29T09:20:10.829Z [DEBUG] client: allocation updates applied: added=0 removed=0 updated=0 ignored=1 errors=0
    2022-01-29T09:20:10.828Z [DEBUG] worker: dequeued evaluation: worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 eval_id=b99937cf-ac5a-b17f-2d8d-109aac1e39e5 type=service namespace=default job_id=redis node_id="" triggered_by=alloc-failure
    2022-01-29T09:20:10.830Z [DEBUG] worker.service_sched: reconciled current state with desired state: eval_id=b99937cf-ac5a-b17f-2d8d-109aac1e39e5 job_id=redis namespace=default worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3
  results=
  | Total changes: (place 0) (destructive 0) (inplace 0) (stop 0)
  | Desired Changes for "cache": (place 0) (inplace 0) (destructive 0) (stop 0) (migrate 0) (ignore 1) (canary 0)

    2022-01-29T09:20:10.831Z [DEBUG] worker.service_sched: setting eval status: eval_id=b99937cf-ac5a-b17f-2d8d-109aac1e39e5 job_id=redis namespace=default worker_id=a5f28f46-1dcb-d9d1-0ca8-da421bfe4fc3 status=complete

Relevant part of the podman logs

DEBU[0208] Created OCI spec for container 8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10 at /var/lib/podman/.local/share/containers/storage/overlay-containers/8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10/userdata/config.json
DEBU[0208] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0208] running conmon: /usr/libexec/podman/conmon    args="[--api-version 1 -c 8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10 -u 8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10 -r /usr/bin/crun -b /var/lib/podman/.local/share/containers/storage/overlay-containers/8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10/userdata -p /tmp/podman-run-978/containers/overlay-containers/8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10/userdata/pidfile -n redis-070a3169-6c79-8cc5-32f5-4b80c61d6d10 --exit-dir /tmp/podman-run-978/libpod/tmp/exits --full-attach -l k8s-file:/tmp/NomadClient4275406614/070a3169-6c79-8cc5-32f5-4b80c61d6d10/alloc/logs/.redis.stdout.fifo --log-level debug --syslog --conmon-pidfile /tmp/podman-run-978/containers/overlay-containers/8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/podman/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/podman-run-978/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /tmp/podman-run-978/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mount_program=/bin/fuse-overlayfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10]"
[conmon:e]: Failed to open log file Permission denied
DEBU[0208] Cleaning up container 8dcc56fa93316cea9dfd7c26cc635eb52f1999824ad143059913d64748c83c10
@lgfa29 (Contributor) commented Feb 2, 2022

Hi @rina-spinne 👋

I don't have a lot of experience with running rootless workloads like Podman's, but I imagine you would then have to collect logs outside Nomad. Have you tried using config.logging.driver = "journald"?
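
A sketch of what that could look like in the task's config block (exact attribute syntax per the driver's docs; whether journald is usable depends on the host):

task "redis" {
  driver = "podman"

  config {
    image = "docker://redis"

    # Send container logs to journald instead of Nomad's log FIFO.
    logging = {
      driver = "journald"
    }
  }
}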

@rina-spinne (Author)

Collecting the logs outside Nomad might be a workaround for this issue. I can't use journald on Void, but it looks like it's possible to disable log collection and use another tool for that.

While this is not a dealbreaker, I wanted to have a quick way of retrieving logs from a single task. Is running the driver as root the only way of achieving this?
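
For reference, the workaround mentioned here looks roughly like this in the client's plugin configuration (a sketch; with log collection disabled, task output presumably no longer shows up in nomad alloc logs and has to be gathered by an external tool):

plugin "nomad-driver-podman" {
  config {
    # Rootless podman socket of the driver user (placeholder uid).
    socket_path = "unix:///run/user/<uid>/podman/podman.sock"

    # Skip the log FIFOs entirely.
    disable_log_collection = true
  }
}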

@tgross (Member) commented Aug 22, 2022

I'm going to transfer this issue over to the podman driver repo, as I don't think this is something we can handle in Nomad itself.

tgross transferred this issue from hashicorp/nomad on Aug 22, 2022

@fred-gb commented Jan 4, 2023

Hi,
I'm not sure whether this is the right place to post; I think it may be a duplicate. Let me know if I need to open another issue.

Nomad 1.4.3
Nomad Podman Driver 0.4.1
Podman 4.3.1 (compiled from source, running rootless)
Ubuntu 22.04

nomad config podman.hcl

plugin "nomad-driver-podman" {
  config {
    socket_path = "unix:///run/user/1001/podman/podman.sock"
    volumes {
      enabled      = true
    }
  }
}

When I try the example podman job:

job "redis" {
  datacenters = ["dc1"]
  type        = "service"

  group "cache" {
    network {
      port "redis" { to = 6379 }
    }

    task "redis" {
      driver = "podman"

      config {
        image = "docker://redis"
        ports = ["redis"]
      }
    }
  }
}

Task failed:

Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="/usr/local/bin/podman filtering at log level info"
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Setting parallel job count to 7"
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Using systemd socket activation to determine API endpoint"
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="API service listening on \"/run/user/1001/podman/podman.sock\". URI: \"/run/user/1001/podman/podman.sock\""
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Request Failed(Not Found): no container with name or ID \"redis-465b2ebb-0b8a-28f5-98a8-96395ac152ee\" found: no such container"
Jan  4 16:45:09 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:09 +0100] "GET /v1.0.0/libpod/containers/redis-465b2ebb-0b8a-28f5-98a8-96395ac152ee/json HTTP/1.1" 404 158 "" "Go-http-client/1.1"
Jan  4 16:45:09 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:09 +0100] "GET /v1.0.0/libpod/images/docker.io/library/redis:latest/json HTTP/1.1" 200 8205 "" "Go-http-client/1.1"
Jan  4 16:45:09 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:09 +0100] "POST /v1.0.0/libpod/containers/create HTTP/1.1" 201 88 "" "Go-http-client/1.1"
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup path conmon: open /sys/fs/cgroup/cgroup.subtree_control: permission denied"
Jan  4 16:45:09 sandbox2 podman[58307]: [conmon:e]: Failed to open log file Permission denied
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Request Failed(Internal Server Error): exit status 1"
Jan  4 16:45:09 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:09 +0100] "POST /v1.0.0/libpod/containers/0f1fe7966aa0c593067bdfb4efc107847f05c0260c6c0908987320ddcf8b9e23/start HTTP/1.1" 500 67 "" "Go-http-client/1.1"
Jan  4 16:45:09 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:09 +0100] "DELETE /v1.0.0/libpod/containers/0f1fe7966aa0c593067bdfb4efc107847f05c0260c6c0908987320ddcf8b9e23?force=true&v=true HTTP/1.1" 200 154 "" "Go-http-client/1.1"
Jan  4 16:45:09 sandbox2 nomad[47651]:     2023-01-04T17:45:09.700+0100 [ERROR] client.driver_mgr.nomad-driver-podman: failed to clean up from an error in Start: driver=podman @module=podman error="cannot delete container, status code: 200" timestamp="2023-01-04T17:45:09.699+0100"
Jan  4 16:45:09 sandbox2 nomad[47651]: client.driver_mgr.nomad-driver-podman: failed to clean up from an error in Start: driver=podman @module=podman error="cannot delete container, status code: 200" timestamp="2023-01-04T17:45:09.699+0100"
Jan  4 16:45:09 sandbox2 nomad[47651]:     2023-01-04T17:45:09.701+0100 [ERROR] client.alloc_runner.task_runner: running driver failed: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis
Jan  4 16:45:09 sandbox2 nomad[47651]:   error=
Jan  4 16:45:09 sandbox2 nomad[47651]:   | rpc error: code = Unknown desc = failed to start task, could not start container: cannot start container, status code: 500: {"cause":"exit status 1","message":"exit status 1","response":500}
Jan  4 16:45:09 sandbox2 nomad[47651]:   
Jan  4 16:45:09 sandbox2 nomad[47651]:     2023-01-04T17:45:09.701+0100 [INFO]  client.alloc_runner.task_runner: not restarting task: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis reason="Error was unrecoverable"
Jan  4 16:45:09 sandbox2 nomad[47651]: client.alloc_runner.task_runner: running driver failed: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis#012  error=#012  | rpc error: code = Unknown desc = failed to start task, could not start container: cannot start container, status code: 500: {"cause":"exit status 1","message":"exit status 1","response":500}#012  
Jan  4 16:45:09 sandbox2 nomad[47651]:  client.alloc_runner.task_runner: not restarting task: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis reason="Error was unrecoverable"
Jan  4 16:45:13 sandbox2 podman[58278]: @ - - [04/Jan/2023:17:45:13 +0100] "GET /libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan  4 16:45:13 sandbox2 nomad[47651]:     2023-01-04T17:45:13.707+0100 [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis @module=logmon timestamp="2023-01-04T17:45:13.707+0100"
Jan  4 16:45:13 sandbox2 nomad[47651]:  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis @module=logmon timestamp="2023-01-04T17:45:13.707+0100"
Jan  4 16:45:13 sandbox2 nomad[47651]:     2023-01-04T17:45:13.708+0100 [WARN]  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis @module=logmon timestamp="2023-01-04T17:45:13.707+0100"
Jan  4 16:45:13 sandbox2 nomad[47651]:  client.alloc_runner.task_runner.task_hook.logmon.nomad: timed out waiting for read-side of process output pipe to close: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee task=redis @module=logmon timestamp="2023-01-04T17:45:13.707+0100"
Jan  4 16:45:13 sandbox2 nomad[47651]:     2023-01-04T17:45:13.711+0100 [INFO]  client.gc: marking allocation for GC: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee
Jan  4 16:45:13 sandbox2 nomad[47651]:  client.gc: marking allocation for GC: alloc_id=465b2ebb-0b8a-28f5-98a8-96395ac152ee

I can run any podman container directly and it works fine, both as root and as the rootless podman user.

root@sandbox2:/tmp/conmon# podman run -dt -p 8080:80 docker.io/nginx
e8c62f3814a2553a747ba30d1548304098402472835a4445250e7e957cc86144
root@sandbox2:/tmp/conmon# podman ps
CONTAINER ID  IMAGE                           COMMAND               CREATED        STATUS            PORTS                 NAMES
e8c62f3814a2  docker.io/library/nginx:latest  nginx -g daemon o...  5 seconds ago  Up 5 seconds ago  0.0.0.0:8080->80/tcp  awesome_northcutt

For me, the relevant logs are:

Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Failed to add conmon to cgroupfs sandbox cgroup: creating cgroup path conmon: open /sys/fs/cgroup/cgroup.subtree_control: permission denied"
Jan  4 16:45:09 sandbox2 podman[58307]: [conmon:e]: Failed to open log file Permission denied
Jan  4 16:45:09 sandbox2 podman[58278]: time="2023-01-04T17:45:09+01:00" level=info msg="Request Failed(Internal Server Error): exit status 1"

I cannot find a solution to add conmon to /sys/fs/cgroup/cgroup.subtree_control.

I don't know!

I also tried Nomad with Podman in root mode, and it works!

Does anyone already run Nomad with Podman rootless?

Thanks!
And Happy New Year with HashiCorp!

@lgfa29 (Contributor) commented Mar 11, 2023

Hi @fred-gb 👋

I believe your issue may be related to cgroups v2 support. Can you try running with cgroups v1 and see if it fixes the problem?

And (quite) late but Happy New Year for you as well 😄
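
For reference, the usual way to switch Ubuntu 22.04 back to cgroups v1 is a kernel parameter (a sketch; assumes GRUB and requires a reboot). The commonly documented cgroups v2 alternative for rootless podman is delegating controllers to the user slice:

# cgroups v1: in /etc/default/grub, append to the kernel command line:
#   GRUB_CMDLINE_LINUX_DEFAULT="... systemd.unified_cgroup_hierarchy=0"
sudo update-grub && sudo reboot

# cgroups v2 alternative: delegate controllers to user sessions via a
# drop-in at /etc/systemd/system/user@.service.d/delegate.conf containing:
#   [Service]
#   Delegate=cpu cpuset io memory pids
sudo systemctl daemon-reload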

angrycub self-assigned this on May 19, 2023
@Sherloks

@rina-spinne Have you found a solution for this?

I'm currently using disable_log_collection = true... While it's not a deal-breaker, having logs inside Nomad would be incredibly useful.
