CONTAINER_NAME field is missing in journald logs #6290

Closed
ikavalio opened this issue May 20, 2020 · 10 comments
Labels
kind/feature · locked - please file new issue/PR

Comments

@ikavalio

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

I'm using the journald log driver for podman containers, and the output used to include a CONTAINER_NAME field. Unfortunately, all current podman versions (CentOS 8.1, Fedora 32) don't include it, which makes finding the actual container harder than necessary (you have to reverse-lookup the container ID, and the container may no longer exist). I can use --log-opt tag to set custom tags, but it would be nice to have CONTAINER_NAME too, especially if someone forgets to set a tag. Also, I think conmon supports container names in journald (https://github.com/containers/conmon/blob/master/src/cli.c#L54 and https://github.com/containers/conmon/blob/master/src/ctr_logging.c#L124), but podman doesn't pass --name or -n to conmon (https://github.com/containers/libpod/blob/master/libpod/oci_conmon_linux.go#L1348). Sorry if this isn't an issue and it was done on purpose.
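
To make the reverse lookup concrete, here is a rough sketch of the current workaround (it assumes the CONTAINER_ID_FULL field that conmon sets on journald entries, and it breaks for --rm containers once they are gone):

  cid=$(sudo journalctl -o json SYSLOG_IDENTIFIER=conmon | tail -1 | jq -r .CONTAINER_ID_FULL)
  sudo podman inspect --format '{{.Name}}' "$cid"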

Steps to reproduce the issue:

  1. sudo podman run --log-driver journald -d --rm --name pony alpine echo aaaaaa
  2. sudo journalctl -o json | grep 'echo aaaaaa' | tail -1 | jq

Describe the results you received:

{
  "_MACHINE_ID": "6cb5dfc665014a849f9fe3ed3981981c",
  "_SYSTEMD_USER_SLICE": "-.slice",
  "_SYSTEMD_INVOCATION_ID": "c2e1287ba3f64bf8b0adeb8e4d9ce7ef",
  "_CAP_EFFECTIVE": "3fffffffff",
  "SYSLOG_FACILITY": "10",
  "_SELINUX_CONTEXT": "unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023",
  "_CMDLINE": "sudo podman run --log-driver journald -d --rm --name pony alpine echo aaaaaa",
  "_AUDIT_SESSION": "4",
  "_TRANSPORT": "syslog",
  "__CURSOR": "s=b501792beb6f4dd0b0634c35c8040435;i=1921;b=96bf7384c420412d8f9d073425171b88;m=8328cca1;t=5a613df6d62d7;x=9e7e64757417ca7e",
  "_BOOT_ID": "96bf7384c420412d8f9d073425171b88",
  "_EXE": "/usr/bin/sudo",
  "__MONOTONIC_TIMESTAMP": "2200489121",
  "_SYSTEMD_SESSION": "4",
  "_PID": "7412",
  "_SYSTEMD_CGROUP": "/user.slice/user-1000.slice/session-4.scope",
  "_GID": "0",
  "SYSLOG_IDENTIFIER": "sudo",
  "_SOURCE_REALTIME_TIMESTAMP": "1589979166630155",
  "_SYSTEMD_OWNER_UID": "1000",
  "PRIORITY": "6",
  "MESSAGE": "pam_unix(sudo:session): session closed for user root",
  "_COMM": "sudo",
  "_UID": "0",
  "_SYSTEMD_UNIT": "session-4.scope",
  "__REALTIME_TIMESTAMP": "1589979166630615",
  "_AUDIT_LOGINUID": "1000",
  "_SYSTEMD_SLICE": "user-1000.slice",
  "_HOSTNAME": "localhost.localdomain",
  "SYSLOG_TIMESTAMP": "May 20 12:52:46 "
}

Describe the results you expected:

Same as actual, but with CONTAINER_NAME set to pony
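
For context on why the field matters: journalctl matches on arbitrary FIELD=VALUE pairs, so once CONTAINER_NAME is present the container's logs could be pulled directly by name, with no reverse lookup:

  sudo journalctl -o json CONTAINER_NAME=pony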

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

(it's also true for 1.6.4 on CentOS 8.1)

Version:            1.9.1
RemoteAPI Version:  1
Go Version:         go1.14.2
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.14.2
  podmanVersion: 1.9.1
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.15-1.fc32.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.15, commit: 33da5ef83bf2abc7965fc37980a49d02fdb71826'
  cpus: 1
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.6.6-300.fc32.x86_64
  memFree: 160178176
  memTotal: 1020424192
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc32.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 42m 5.14s
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 2
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

podman-1.9.1-1.fc32.x86_64

Additional environment details (AWS, VirtualBox, physical, etc.):

Both AWS (with custom CentOS 8.1 image) and virtualbox (fedora/32-cloud-base)

@mheon
Member

mheon commented May 20, 2020

@haircommander This is mostly a Conmon thing, IIRC - we'd need a flag to pass container name in, and then the journald log formatter would need to include it?

mheon added the kind/feature label May 20, 2020
@haircommander
Collaborator

conmon already has this capability
https://github.com/containers/conmon/blob/82e9358196126b4183e393e4517428f2cc22dfc7/src/ctr_logging.c#L229
but podman wasn't passing the name down. cooking up a PR to fix now
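
For anyone who wants to verify the two patches together, a rough check (the ./bin/podman path is an assumption for a local build; SYSLOG_IDENTIFIER=conmon matches the entries conmon writes):

  sudo ./bin/podman run --log-driver journald -d --name pony alpine echo hello
  sudo journalctl -o json SYSLOG_IDENTIFIER=conmon | jq 'select(.CONTAINER_NAME == "pony") | .MESSAGE'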

@mheon
Member

mheon commented May 20, 2020

Awesome!

@haircommander
Collaborator

oops, we need a conmon fix too: containers/conmon#154

@haircommander
Collaborator

and the podman fix: #6291
@ikavalio can you try these two fixes to see if they fix your problem?

@ikavalio
Author

@haircommander sure, thanks very much for your help and quick reply!

@ikavalio
Author

@haircommander I'm sorry, I wasn't able to build a fully functional podman from source, but I built and tested the two fixes separately and everything seems to work fine. Thanks!

DEBU[0000] running conmon: /usr/local/libexec/podman/conmon  args="[--api-version 1 -c 36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23 -u 36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23/userdata -p /var/run/containers/storage/overlay-containers/36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23/userdata/pidfile -n pony --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -s -l journald --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23/userdata/conmon.pid --exit-command /home/vagrant/work/libpod/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 36bd50ce6bbcb4081f39d84d9d279d6441603c7ce2dcdf65fffcaf4be5099a23]"
... -n pony ...

and from a separate conmon test

{
  "MESSAGE": "drwxr-xr-x    1 root     root          4096 May 20 16:41 etc\n",
  "CONTAINER_ID_FULL": "8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6",
  "_EXE": "/usr/local/libexec/podman/conmon",
  "_HOSTNAME": "localhost.localdomain",
  "CONTAINER_ID": "8ec354cdaf5f",
  "_COMM": "conmon",
  "CONTAINER_NAME": "pony",
  "_SELINUX_CONTEXT": "unconfined_u:system_r:container_runtime_t:s0",
  "__MONOTONIC_TIMESTAMP": "4809983391",
  "_UID": "0",
  "CODE_FILE": "src/ctr_logging.c",
  "__REALTIME_TIMESTAMP": "1589992870132476",
  "CODE_FUNC": "write_journald",
  "_AUDIT_SESSION": "6",
  "_BOOT_ID": "5c533461a4034b9f9af849be25d3e7f0",
  "PRIORITY": "6",
  "_AUDIT_LOGINUID": "1000",
  "SYSLOG_IDENTIFIER": "conmon",
  "_PID": "18334",
  "_CAP_EFFECTIVE": "3fffffffff",
  "_TRANSPORT": "journal",
  "CODE_LINE": "236",
  "_SYSTEMD_INVOCATION_ID": "a4df0f27153f40d9ad782d31ad30648a",
  "_SYSTEMD_SLICE": "machine.slice",
  "__CURSOR": "s=b501792beb6f4dd0b0634c35c8040435;i=51df;b=5c533461a4034b9f9af849be25d3e7f0;m=11eb2859f;t=5a61710383afc;x=be1cc3f955c7372a",
  "_MACHINE_ID": "6cb5dfc665014a849f9fe3ed3981981c",
  "_GID": "0",
  "_SOURCE_REALTIME_TIMESTAMP": "1589992870132063",
  "_SYSTEMD_UNIT": "libpod-conmon-8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6.scope",
  "_CMDLINE": "/usr/local/libexec/podman/conmon --api-version 1 -s -c 8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6 -u 8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6 -r /usr/bin/crun -b /var/lib/containers/storage/overlay-containers/8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6/userdata -p /var/run/containers/storage/overlay-containers/8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6/userdata/pidfile -n pony -l journald --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg overlay --exit-command-arg --storage-opt --exit-command-arg overlay.mountopt=nodev,metacopy=on --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6",
  "_SYSTEMD_CGROUP": "/machine.slice/libpod-conmon-8ec354cdaf5f15f7cb360b96aa71428939f8ed8b68587eb45edd868260bc98e6.scope"
}
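
With CONTAINER_NAME in place, following a container's logs by name becomes a one-liner via journalctl's standard field matching:

  sudo journalctl -f CONTAINER_NAME=pony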

rhatdan closed this as completed Jun 9, 2020
@phake24

phake24 commented Feb 9, 2022

This is still unfixed in OL8, isn't it? I have the same problem

@vrothberg
Member

> This is still unfixed in OL8, isn't it? I have the same problem

I think it's best to reach out to Oracle Linux.

@phake24

phake24 commented Feb 9, 2022

> This is still unfixed in OL8, isn't it? I have the same problem
>
> I think it's best to reach out to Oracle Linux.

I am using podman 3.3.1, which should already include the fix, so I assume OL can't help me.

github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023