
Adding dnsname plugin breaks starting rootless containers with podman socket and docker-compose #10855

Closed
lbeltrame opened this issue Jul 4, 2021 · 10 comments · Fixed by #10865
Labels: kind/bug, locked - please file new issue/PR, network, rootless

@lbeltrame

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When using a minimal docker-compose.yml:

version: "3.3"
services:
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - 1111:80

starting rootless containers works as long as the dnsname plugin is not installed (after starting `podman system service -t 0 unix:///tmp/podman.sock`):

docker-compose -H unix:///tmp/podman.sock up
Starting nginx ... done
Attaching to nginx
nginx    | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
nginx    | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
nginx    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
nginx    | 10-listen-on-ipv6-by-default.sh: info: IPv6 listen already enabled
nginx    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
nginx    | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
nginx    | /docker-entrypoint.sh: Configuration complete; ready for start up
nginx    | 2021/07/04 08:20:17 [notice] 1#1: using the "epoll" event method
nginx    | 2021/07/04 08:20:17 [notice] 1#1: nginx/1.21.0
nginx    | 2021/07/04 08:20:17 [notice] 1#1: built by gcc 8.3.0 (Debian 8.3.0-6) 
nginx    | 2021/07/04 08:20:17 [notice] 1#1: OS: Linux 5.12.9-1-default
nginx    | 2021/07/04 08:20:17 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 65536:65536
nginx    | 2021/07/04 08:20:17 [notice] 1#1: start worker processes
nginx    | 2021/07/04 08:20:17 [notice] 1#1: start worker process 19
nginx    | 2021/07/04 08:20:17 [notice] 1#1: start worker process 20
nginx    | 2021/07/04 08:20:17 [notice] 1#1: start worker process 21
nginx    | 2021/07/04 08:20:17 [notice] 1#1: start worker process 22

However, after adding the dnsname plugin, everything breaks:

podman rm nginx
podman network rm example_default
docker-compose -H unix:///tmp/podman.sock up
Creating network "example_default" with the default driver
Creating nginx ... error

ERROR: for nginx  error preparing container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 for attach: error configuring network namespace for container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6: exit status 5

ERROR: for nginx  error preparing container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 for attach: error configuring network namespace for container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6: exit status 5
ERROR: Encountered errors while bringing up the project.

The problem seems to lie in dnsmasq, as the journal says:

Jul 04 10:23:56 sasara.private.heavensinferno.net dnsmasq[5299]: directory /etc/resolv.conf for resolv-file is missing, cannot poll
Jul 04 10:23:56 sasara.private.heavensinferno.net dnsmasq[5299]: FAILED to start up

But I've ruled out AppArmor (which I run): running aa-complain /usr/sbin/dnsmasq and then checking /var/log/audit/audit.log shows no denied operations. Unfortunately, searching did not yield any useful results on what "exit status 5" means.

Logs from podman system service:

INFO[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- POST /v1.40/containers/26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6/attach?logs=0&stdout=1&stderr=1&stream=1 BEGIN 
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Upgrade=[tcp] 
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Content-Length=[0]      
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Authorization=[Basic dHJhbnNtaXNzaW9uOmhtLEtkNm9sYVA=]  
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: User-Agent=[docker-compose/1.28.5 docker-py/4.4.4 Linux/5.12.9-1-default] 
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Accept-Encoding=[gzip, deflate] 
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Accept=[*/*]                                                                                                                                                                         
DEBU[1724] APIHandler(59c28db2-7b3f-47f5-b633-16d608a0bacb) -- Header: Connection=[Upgrade] 
DEBU[1724] overlay: mount_data=,lowerdir=/home/einar/.local/share/containers/storage/overlay/l/6QH7A57DSNQAWNHTUK4ZYP5BDI:/home/einar/.local/share/containers/storage/overlay/l/4XQHPATLMX7DDJLHYL45UHUM5M:/home/einar/.local/share/containers/storage/overlay/l/MRVQAGY4VSOQTW2RHLVD3NVE4E:/home/einar/.local/share/containers/storage/overlay/l/VZAPQ2LIVKXIMMBM7RHNSMOIZC:/home/einar/.local/share/containers/storage/overlay/l/OLOTNNPZWH3KT62JOHKZYIEY2C:/home/einar/.local/share/containers/storage/overlay/l/KI2F6HTHBZ4Q45FVLOIS3U2Y6M,upperdir=/home/einar/.local/share/containers/storage/overlay/9d40a7d9922dbcd785deb7625952c4c1fc8ff9168968b380fe52094b9932cf95/diff,workdir=/home/einar/.local/share/containers/storage/overlay/9d40a7d9922dbcd785deb7625952c4c1fc8ff9168968b380fe52094b9932cf95/work,userxattr
DEBU[1724] mounted container "26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6" at "/home/einar/.local/share/containers/storage/overlay/9d40a7d9922dbcd785deb7625952c4c1fc8ff9168968b380fe52094b9932cf95/merged" 
DEBU[1724] Created root filesystem for container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 at /home/einar/.local/share/containers/storage/overlay/9d40a7d9922dbcd785deb7625952c4c1fc8ff9168968b380fe52094b9932cf95/merged 
DEBU[1724] Made network namespace at /run/user/1000/netns/cni-a553c1c4-c549-6fe9-c7b4-ff5b3ddcea1e for container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 
INFO[1724] Got pod network &{Name:nginx Namespace:nginx ID:26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 NetNS:/run/user/1000/netns/cni-a553c1c4-c549-6fe9-c7b4-ff5b3ddcea1e Networks:[{Name:example_default Ifname:eth0}] RuntimeConfig:map[example_default:{IP: MAC: PortMappings:[{HostPort:1111 ContainerPort:80 Protocol:tcp HostIP:}] Bandwidth:<nil> IpRanges:[]}] Aliases:map[example_default:[nginx]]}
INFO[1724] About to add CNI network example_default (type=bridge) 
ERRO[1724] Error adding network: exit status 5          
ERRO[1724] Error while adding pod to CNI network "example_default": exit status 5 
INFO[1724] Got pod network &{Name:nginx Namespace:nginx ID:26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 NetNS:/run/user/1000/netns/cni-a553c1c4-c549-6fe9-c7b4-ff5b3ddcea1e Networks:[{Name:example_default Ifname:eth0}] RuntimeConfig:map[example_default:{IP: MAC: PortMappings:[{HostPort:1111 ContainerPort:80 Protocol:tcp HostIP:}] Bandwidth:<nil> IpRanges:[]}] Aliases:map[example_default:[nginx]]}
ERRO[1724] error loading cached network config: network "example_default" not found in CNI cache 
WARN[1724] falling back to loading from existing plugins on disk 
INFO[1724] About to del CNI network example_default (type=bridge) 
DEBU[1724] unmounted container "26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6" 
DEBU[1724] Network is already cleaned up, skipping...   
DEBU[1724] Cleaning up container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 
DEBU[1724] Network is already cleaned up, skipping...   
DEBU[1724] Container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 storage is already unmounted, skipping... 
INFO[1724] Request Failed(Conflict): error preparing container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6 for attach: error configuring network namespace for container 26fbd222ad2ab7dcf2457f2852c9d72d75736a90b0faf8278802f5a8c9c80ad6: exit status 5

Steps to reproduce the issue:

  1. Use provided docker-compose.yml

  2. Install the CNI dnsname plugin

  3. Start the podman socket as user

  4. Use docker-compose to start the container

Describe the results you received:

Container does not start with the output provided above.

Describe the results you expected:

Container starts.

Additional information you deem important (e.g. issue happens only occasionally):

Reproduced on two different distributions: openSUSE Leap and openSUSE Tumbleweed. Happens all the time.

Output of podman version:

Version:      3.2.1
API Version:  3.2.1
Go Version:   go1.13.15
Built:        Mon Jun 14 02:00:00 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.21.0
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.27-1.3.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.27, commit: unknown'
  cpus: 4
  distribution:
    distribution: '"opensuse-tumbleweed"'
    version: "20210623"
  eventLogger: journald
  hostname: sasara.private.heavensinferno.net
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 100
      size: 1
    - container_id: 1
      host_id: 20000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 20000
      size: 65536
  kernel: 5.12.9-1-default
  linkmode: dynamic
  memFree: 277372928
  memTotal: 7211077632
  ociRuntime:
    name: crun
    package: crun-0.18-1.3.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.18
      commit: 808420efe3dc2b44d6db9f1a3fac8361dde42a95
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_AUDIT_WRITE,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_MKNOD,CAP_NET_BIND_SERVICE,CAP_NET_RAW,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.8-1.6.x86_64
    version: |-
      slirp4netns version 1.1.8
      commit: unknown
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.1
  swapFree: 1836220416
  swapTotal: 2147647488
  uptime: 471h 9m 36.27s (Approximately 19.62 days)
registries:
  search:
  - registry.opensuse.org
  - docker.io
store:
  configFile: /home/einar/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 0
    stopped: 4
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/einar/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  volumePath: /home/einar/.local/share/containers/storage/volumes
version:
  APIVersion: 3.2.1
  Built: 1623628800
  BuiltTime: Mon Jun 14 02:00:00 2021
  GitCommit: ""
  GoVersion: go1.13.15
  OsArch: linux/amd64
  Version: 3.2.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.2.1-1.1.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes (also happens with podman 3.2.2)

Additional environment details (AWS, VirtualBox, physical, etc.):

The tests were done on openSUSE Tumbleweed but were also replicated on openSUSE Leap. In both cases, bare metal machines with no virtualization.

@openshift-ci openshift-ci bot added the kind/bug label Jul 4, 2021
@lbeltrame
Author

lbeltrame commented Jul 4, 2021

I'll build the plugin with containers/dnsname#60 and see what's happening.

EDIT: unfortunately the error message is uninformative

INFO[0012] Request Failed(Conflict): error preparing container 973c9e552cc30363b04fefc466955862ec5624a12a235ba6d799c3243d7b7c0c for attach: error configuring network namespace for container 973c9e552cc30363b04fefc466955862ec5624a12a235ba6d799c3243d7b7c0c: error adding pod nginx_nginx to CNI network "example_default": dnsname error: dnsmasq failed with "\ndnsmasq: directory /etc/resolv.conf for resolv-file is missing, cannot poll\n": exit status 5

@lbeltrame
Author

Ok, it looks like some kind of dnsmasq bug.

/etc/resolv.conf is normally a symlink on openSUSE, as it's handled by netconfig:

ll /etc/resolv.conf
lrwxrwxrwx 1 root root 30 14 feb  2019 /etc/resolv.conf -> /var/run/netconfig/resolv.conf

If I replace it with an actual file:

ll /etc/resolv.conf
-rw-r--r-- 1 root root 692  4 lug 10.59 /etc/resolv.conf

the container starts. I'll try to see if this bug has been filed somewhere.
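The workaround described above can be scripted; this is a sketch assuming the openSUSE/netconfig layout shown in this comment (note that netconfig may recreate the symlink on the next network change, so this is only a temporary measure):

```shell
# Show what /etc/resolv.conf really is; on openSUSE with netconfig
# it is a symlink into /var/run (the case that breaks dnsname):
readlink -f /etc/resolv.conf

# Temporary workaround: replace the symlink with a plain copy of
# its target.
sudo cp --remove-destination "$(readlink -f /etc/resolv.conf)" /etc/resolv.conf
```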

@Luap99
Member

Luap99 commented Jul 4, 2021

@lbeltrame Thanks for the report. This is definitely a bug in Podman. To make rootless CNI possible, we mount a new empty /run into the rootless CNI namespace to make it writable for the CNI plugins.

@lbeltrame
Author

Thanks for the explanation. Do you think it warrants a line in the troubleshooting docs until this is eventually taken care of? I can prepare a PR if need be.

@Luap99
Member

Luap99 commented Jul 4, 2021

@lbeltrame Sure you can open a PR to document this.

I will try to fix this next week.

@Luap99 Luap99 self-assigned this Jul 4, 2021
@Luap99 Luap99 added the network and rootless labels Jul 4, 2021
@Luap99
Member

Luap99 commented Jul 6, 2021

PR #10865 should fix this. Could you test whether it works for you? You can download a statically compiled binary from here: https://api.cirrus-ci.com/v1/artifact/task/4684786385027072/binary/bin/podman.

@lbeltrame
Author

@Luap99 Many thanks. I'll have a go tonight when I'm at home.

Luap99 added a commit to Luap99/libpod that referenced this issue Jul 6, 2021
The rootless cni namespace needs a valid /etc/resolv.conf file. On some
distros it is a symlink to somewhere under /run. Because the kernel will
follow the symlink before mounting, it is not possible to mount a file
at exactly /etc/resolv.conf. We have to ensure that the link target will
be available in the rootless cni mount ns.

Fixes containers#10855

Also fixed a bug in the /var/lib/cni directory lookup logic. It used
`filepath.Base` instead of `filepath.Dir` and thus looped infinitely.

Fixes containers#10857

[NO TESTS NEEDED]

Signed-off-by: Paul Holzinger <[email protected]>
@lbeltrame
Author

Tested and confirmed working on one machine. I'll do a second test later on another.

@lbeltrame
Author

Also confirmed working on the other machine.

Luap99 added a commit to Luap99/libpod that referenced this issue Jul 15, 2021
@mcejp
Copy link

mcejp commented Mar 6, 2022

FWIW, I found this issue when googling for problems with DNS resolution among my containers. Turns out I was just missing the podman-plugins package -- perhaps this helps somebody.
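Before debugging DNS resolution it can help to confirm the plugin binary is even present. The directories below are the usual CNI plugin locations (/usr/libexec/cni on Fedora-family distros, /opt/cni/bin as the generic default) and the podman-plugins package name is distro-specific, so treat both as assumptions:

```shell
# Look for the dnsname binary in the common CNI plugin directories;
# adjust the paths for your distro if it installs plugins elsewhere.
ls /usr/libexec/cni/dnsname /opt/cni/bin/dnsname 2>/dev/null \
    || echo "dnsname plugin not found"
```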

@github-actions github-actions bot added the locked - please file new issue/PR label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023