
Documentation on slirp4netns IPv6 is misleading #13914

Closed
ankon opened this issue Apr 19, 2022 · 7 comments · Fixed by #13929
Labels:
- In Progress: This issue is actively being worked by the assignee, please do not work on this at this time.
- kind/bug: Categorizes issue or PR as related to a bug.
- locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
- macos: MacOS (OSX) related
- remote: Problem is in podman-remote

Comments

@ankon
Contributor

ankon commented Apr 19, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

  1. The documentation describing the default value for the `enable_ipv6` setting of the slirp4netns network mode specifies the wrong default, see for example:
    - **enable_ipv6=true|false**: Enable IPv6. Default is false. (Required for `outbound_addr6`).
  2. It is non-obvious how to change the value, and possibly it doesn't work:
    I tried setting `network_cmd_options = ["enable_ipv6=false"]`, and even after destroying and recreating my "podman machine" and all networks I got the same behavior: slirp4netns was still running with `--enable-ipv6`.

The default was changed with #10889, but oddly, even though that issue was created by pointing at the docs, the docs themselves were never updated for it: neither the default value nor how to get the old behavior back.

Steps to reproduce the issue:

  1. Check man pages for enable_ipv6
  2. Check documentation on how to enable/disable IPv6

Describe the results you received:

Describe the results you expected:

  1. The specified default should match the actual default in podman-play-kube(1), podman-pod-create(1), podman-create(1), and podman-run(1).
  2. Documentation is easy enough to find

Output of podman version:

Client:       Podman Engine
Version:      4.0.3
API Version:  4.0.3
Go Version:   go1.18
Built:        Fri Apr  1 17:28:59 2022
OS/Arch:      darwin/amd64

Server:       Podman Engine
Version:      4.0.2
API Version:  4.0.2
Go Version:   go1.16.14
Built:        Thu Mar  3 15:56:56 2022
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.24.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpus: 1
  distribution:
    distribution: fedora
    variant: coreos
    version: "35"
  eventLogger: journald
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
    uidmap:
    - container_id: 0
      host_id: 501
      size: 1
    - container_id: 1
      host_id: 100000
      size: 1000000
  kernel: 5.15.18-200.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1369186304
  memTotal: 2061381632
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.4.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.4.2
      commit: f6fbc8f840df1a414f31a60953ae514fa497c748
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/501/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 0
  swapTotal: 0
  uptime: 3m 34.38s
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /var/home/core/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /var/home/core/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 0
  runRoot: /run/user/501/containers
  volumePath: /var/home/core/.local/share/containers/storage/volumes
version:
  APIVersion: 4.0.2
  Built: 1646319416
  BuiltTime: Thu Mar  3 15:56:56 2022
  GitCommit: ""
  GoVersion: go1.16.14
  OsArch: linux/amd64
  Version: 4.0.2

Package info (e.g. output of rpm -q podman or apt list podman):

N/A

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/main/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

macOS Monterey 12.3.1 on Intel.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Apr 19, 2022
@github-actions github-actions bot added macos MacOS (OSX) related remote Problem is in podman-remote labels Apr 19, 2022
@ankon
Contributor Author

ankon commented Apr 19, 2022

I pondered making the trivial PR to the docs, but I wasn't actually able to get IPv6 disabled on my machine.

FWIW: I am using podman through podman-compose, but looking at the output of the commands and the results inside the machine indicates that the problem is indeed somewhere in podman.

@Luap99
Member

Luap99 commented Apr 19, 2022

Where did you set `network_cmd_options = ["enable_ipv6=false"]`? This must be set under the `[engine]` section in containers.conf inside the VM.
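A minimal sketch of what this describes: edit containers.conf inside the VM (e.g. via `podman machine ssh`) and add the option under `[engine]`. The exact file path is an assumption here; containers.conf is commonly read from `/etc/containers/containers.conf` or `~/.config/containers/containers.conf`.

```toml
# containers.conf inside the podman machine VM
[engine]
network_cmd_options = ["enable_ipv6=false"]
```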

@Luap99
Member

Luap99 commented Apr 19, 2022

`podman run --network slirp4netns:enable_ipv6=false` should also work.

@ankon
Contributor Author

ankon commented Apr 19, 2022

> Where did you set `network_cmd_options = ["enable_ipv6=false"]`? This must be set under the `[engine]` section in containers.conf inside the VM.

Good point! I assumed the config would get translated from the host into the VM (as there was a configuration file), but I hadn't actually checked that. Indeed, putting it into the machine itself seems to remove the IPv6 flag from the running slirp4netns process. The container still has v6 addresses though:

$ cat docker-compose.yml 
services:
  test:
    image: nginx:alpine

networks:
  default:
    ipam:
      driver: default
      config:
        - subnet: "172.38.0.0/16"

$ podman-compose up
podman-compose version: 1.0.4
['podman', '--version', '']
using podman version: 4.0.3
** excluding:  set()
['podman', 'ps', '--filter', 'label=io.podman.compose.project=podman-networks', '-a', '--format', '{{ index .Labels "io.podman.compose.config-hash"}}']
podman pod create --name=pod_podman-networks --infra=false --share=
ff0a1122090752d7f62c74bef692505b1ad0c753a9e710cfa4ae7b724ea480b4
exit code: 0
['podman', 'network', 'exists', 'podman-networks_default']
Creating network podman-networks_default
{'ipam': {'driver': 'default', 'config': [{'subnet': '172.38.0.0/16'}]}}
['create', '--label', 'io.podman.compose.project=podman-networks', '--label', 'com.docker.compose.project=podman-networks', '--subnet', '172.38.0.0/16', 'podman-networks_default']
['podman', 'network', 'create', '--label', 'io.podman.compose.project=podman-networks', '--label', 'com.docker.compose.project=podman-networks', '--subnet', '172.38.0.0/16', 'podman-networks_default']
['podman', 'network', 'exists', 'podman-networks_default']
podman create --name=podman-networks_test_1 --pod=pod_podman-networks --label io.podman.compose.config-hash=efcc702de3d386451b543eb266e9f5a87d60bfd92c1e35f77065ade8b7cca077 --label io.podman.compose.project=podman-networks --label io.podman.compose.version=1.0.4 --label com.docker.compose.project=podman-networks --label com.docker.compose.project.working_dir=/Users/andreas/project/experiments/podman-networks --label com.docker.compose.project.config_files=docker-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=test --net podman-networks_default --network-alias test nginx:alpine
Resolving "nginx" using unqualified-search registries (/etc/containers/registries.conf.d/999-podman-machine.conf)
Trying to pull docker.io/library/nginx:alpine...
Getting image source signatures
Copying blob sha256:4071be97c256d6f5ab0e05ebdebcfec3d0779a5e199ad0d71a5fccba4b3e2ce4
Copying blob sha256:5867cba5fcbd3ae827c5801e76d20e7dc91cbb626ac5c871ec6c4d04eb818b16
Copying blob sha256:4b639e65cb3ba47e77db93f93c6625a62ba1b9eec99160b254db380115ae009d
Copying blob sha256:061ed9e2b9762825b9869a899a696ce8b56e7e0ec1e1892b980969bf7bcda56a
Copying blob sha256:bc19f3e8eeb1bb75268787f8689edec9a42deda5cdecdf2f95b3c6df8eb57a48
Copying blob sha256:df9b9388f04ad6279a7410b85cedfdcb2208c0a003da7ab5613af71079148139
Copying blob sha256:df9b9388f04ad6279a7410b85cedfdcb2208c0a003da7ab5613af71079148139
Copying blob sha256:4b639e65cb3ba47e77db93f93c6625a62ba1b9eec99160b254db380115ae009d
Copying blob sha256:061ed9e2b9762825b9869a899a696ce8b56e7e0ec1e1892b980969bf7bcda56a
Copying blob sha256:bc19f3e8eeb1bb75268787f8689edec9a42deda5cdecdf2f95b3c6df8eb57a48
Copying blob sha256:5867cba5fcbd3ae827c5801e76d20e7dc91cbb626ac5c871ec6c4d04eb818b16
Copying blob sha256:4071be97c256d6f5ab0e05ebdebcfec3d0779a5e199ad0d71a5fccba4b3e2ce4
Copying config sha256:51696c87e77e4ff7a53af9be837f35d4eacdb47b4ca83ba5fd5e4b5101d98502
Writing manifest to image destination
Storing signatures
15e896713de6e05d42cca826e88f6637c8df6a4419295dda3b8a58b5034d3217
exit code: 0
podman start -a podman-networks_test_1
[test] | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
[test] | /docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
[test] | /docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
[test] | 10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
[test] | 10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
[test] | /docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
[test] | /docker-entrypoint.sh: Launching /docker-entrypoint.d/30-tune-worker-processes.sh
[test] | /docker-entrypoint.sh: Configuration complete; ready for start up
2022/04/19 09:49:59 [notice] 1#1: using the "epoll" event method
2022/04/19 09:49:59 [notice] 1#1: nginx/1.21.6
2022/04/19 09:49:59 [notice] 1#1: built by gcc 10.3.1 20211027 (Alpine 10.3.1_git20211027) 
2022/04/19 09:49:59 [notice] 1#1: OS: Linux 5.15.18-200.fc35.x86_64
2022/04/19 09:49:59 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 524288:524288
2022/04/19 09:49:59 [notice] 1#1: start worker processes
2022/04/19 09:49:59 [notice] 1#1: start worker process 27

(In another terminal)

$ podman machine ssh ps ax|grep slirp4netns
   1890 ?        S      0:00 /usr/bin/slirp4netns --disable-host-loopback --mtu=65520 --enable-sandbox --enable-seccomp -c -r 3 --netns-type=path /run/user/501/netns/rootless-netns-729b0852549462c6ebe1 tap0
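To sanity-check which options a running slirp4netns process was started with, the command line from the `ps` output above can be parsed; this is just an illustrative helper, not part of podman or slirp4netns.

```python
import shlex

def slirp4netns_flags(cmdline: str) -> set:
    """Return the long options present on a slirp4netns command line."""
    return {tok.split("=", 1)[0] for tok in shlex.split(cmdline)
            if tok.startswith("--")}

# Command line from the `ps` output above (IPv6 already disabled):
cmd = ("/usr/bin/slirp4netns --disable-host-loopback --mtu=65520 "
       "--enable-sandbox --enable-seccomp -c -r 3 --netns-type=path "
       "/run/user/501/netns/rootless-netns-729b0852549462c6ebe1 tap0")

print("--enable-ipv6" in slirp4netns_flags(cmd))  # False
```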

$ podman-compose exec test ifconfig
podman-compose version: 1.0.4
['podman', '--version', '']
using podman version: 4.0.3
podman exec --interactive --tty podman-networks_test_1 ifconfig
eth0      Link encap:Ethernet  HWaddr 96:E1:78:36:EA:54  
          inet addr:172.38.0.2  Bcast:172.38.255.255  Mask:255.255.0.0
          inet6 addr: fe80::94e1:78ff:fe36:ea54/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:11 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1866 (1.8 KiB)  TX bytes:866 (866.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

exit code: 0

Still: it would be good if podman-on-mac could leave a comment in the host .conf files noting that changes there are likely not going to do anything (apart from engine.active_service and related settings, I guess).

@Luap99
Member

Luap99 commented Apr 19, 2022

What is your goal here?

The IPv6 address is a link-local address that is automatically assigned by the kernel; Podman does not add it.
see containers/netavark#340
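This can be verified directly: the `fe80::…` address shown in the `ifconfig` output earlier falls in the IPv6 link-local range (`fe80::/10`), which the kernel assigns on its own. A quick check with Python's standard `ipaddress` module:

```python
import ipaddress

# Address taken from the ifconfig output earlier in this thread
addr = ipaddress.ip_address("fe80::94e1:78ff:fe36:ea54")

print(addr.is_link_local)  # True: assigned by the kernel, not by Podman
print(addr.is_global)      # False: not routable beyond the local link
```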

@ankon
Contributor Author

ankon commented Apr 19, 2022

The underlying problem I'm trying to solve: I have a docker-compose setup consisting of an nginx proxying to a bunch of microservices. It works fine in docker-for-mac, but with podman it requires changes to nginx.conf: I need to fix up all the places that specify a resolver to use ipv6=off[1], because otherwise nginx seems to resolve IPv6 addresses. I'm still trying to understand whether there are other options here (for example, checking what exactly nginx is resolving), but for the purposes of this issue: I want to disable IPv6 so that I see the same behavior as in docker-for-mac.

It looks like containers/netavark#340 actually touches this goal.

That being said: for this issue, the still-open aspect is that the four man pages describe the wrong default.


[1] ... and replace 127.0.0.11 with the correct IP for the network; this is IMHO part of a different compatibility issue
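The nginx change referred to above is roughly a resolver directive like the following (a sketch; 127.0.0.11 is the embedded-DNS address docker-compose setups commonly rely on, and as the footnote says it must be replaced with the actual DNS IP of the podman network):

```nginx
# Force nginx's resolver to return only IPv4 addresses.
# Under podman, replace 127.0.0.11 with the network's actual DNS server IP.
resolver 127.0.0.11 ipv6=off;
```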

@Luap99
Member

Luap99 commented Apr 20, 2022

Technically speaking the documentation is not wrong. The default is in fact false when no `network_cmd_options` is set in containers.conf, and `network_cmd_options = []` should turn it off right now.
However, the default containers.conf ships with `network_cmd_options = ["enable_ipv6=true"]`, so we always use IPv6.

I totally agree that this is super confusing, and it should be clarified in the documentation.
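The interaction described here can be captured in a toy model (illustrative only, not podman's actual code): the documented default of false only ever applies when `network_cmd_options` is entirely unset or empty, while the shipped containers.conf passes `enable_ipv6=true` explicitly.

```python
from typing import List, Optional

def ipv6_enabled(network_cmd_options: Optional[List[str]]) -> bool:
    """Toy model of the behavior described above: the documented
    default (False) applies only when no options are passed."""
    if not network_cmd_options:
        return False  # documented default; also hit by network_cmd_options = []
    return "enable_ipv6=true" in network_cmd_options

print(ipv6_enabled(None))                  # False: option unset
print(ipv6_enabled([]))                    # False: network_cmd_options = []
print(ipv6_enabled(["enable_ipv6=true"]))  # True: the shipped containers.conf default
```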

@Luap99 Luap99 self-assigned this Apr 20, 2022
@Luap99 Luap99 added the In Progress This issue is actively being worked by the assignee, please do not work on this at this time. label Apr 20, 2022
Luap99 added a commit to Luap99/libpod that referenced this issue Apr 20, 2022
We already have ipv6 enabled as default via the containers.conf setting.
However the documentation did not reflect this. Also if no options were
set in containers.conf it would have ipv6 disabled.

We can now remove the extra option from containers.conf.

Also fix another outdated option description for host.containers.internal
and add that the options can also be set in containers.conf.

[NO NEW TESTS NEEDED]

Fixes containers#13914

Signed-off-by: Paul Holzinger <[email protected]>
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 20, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 20, 2023