
Rootless kube play creates new network in bridge mode instead of using default slirp4netns #16940

Open
E1k3 opened this issue Dec 25, 2022 · 19 comments
Labels
documentation (Issue or fix is in project documentation), kind/bug (Categorizes issue or PR as related to a bug), kube

Comments

@E1k3

E1k3 commented Dec 25, 2022

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
When creating a pod by running podman kube play as a rootless user, a bridge network podman-default-kube-network is created instead of using the default podman network.
This can be changed by running podman kube play --network slirp4netns instead, but according to the documentation, slirp4netns should already be the default for rootless containers.

Steps to reproduce the issue:

  1. Write a pod specification (a minimal example is sketched after this list) or generate one from a running pod via podman generate kube

  2. Play that pod via podman kube play

  3. Check the network of the pod via podman pod inspect --format "{{.InfraConfig.Networks}}" <podname>
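For illustration, a minimal pod specification and the commands for the steps above might look like this (the file name, pod name, and image are placeholders, not taken from the original report):

$ cat > mypod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: web
    image: docker.io/library/nginx:alpine
EOF
$ podman kube play mypod.yaml
$ podman pod inspect --format "{{.InfraConfig.Networks}}" mypod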

Describe the results you received:

$ podman pod inspect --format "{{.InfraConfig.Networks}}" podname
[podman-default-kube-network]

$ podman generate spec podname | grep -A 1 netns
 "netns": {
  "nsmode": "bridge"

Describe the results you expected:

$ podman pod inspect --format "{{.InfraConfig.Networks}}" podname
[podman]

$ podman generate spec podname | grep -A 1 netns
 "netns": {
  "nsmode": "slirp4netns"
 },

Additional information you deem important (e.g. issue happens only occasionally):
When generating the specgen of such a pod, the netns is specified as bridge, but for other pods (those not run via kube play, or run via kube play --network slirp4netns) it is specified as slirp4netns.

Output of podman version:

podman version 4.3.1

Output of podman info:

host:
  arch: amd64
  buildahVersion: 1.28.0
  cgroupControllers:
  - cpu
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.1.5-1
    path: /usr/bin/conmon
    version: 'conmon version 2.1.5, commit: c9f7f19eb82d5b8151fc3ba7fbbccf03fdcd0325'
  cpuUtilization:
    idlePercent: 92.78
    systemPercent: 1.42
    userPercent: 5.81
  cpus: 4
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: archlaptop
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.1.1-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 1924919296
  memTotal: 8144642048
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.7.2-1
    path: /usr/bin/crun
    version: |-
      crun version 1.7.2
      commit: 0356bf4aff9a133d655dc13b1d9ac9424706cac4
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.2.0-1
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.4
  swapFree: 0
  swapTotal: 0
  uptime: 3h 4m 37.00s (Approximately 0.12 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries: {}
store:
  configFile: /home/eike/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/eike/.local/share/containers/storage
  graphRootAllocated: 379385671680
  graphRootUsed: 142289690624
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  volumePath: /home/eike/.local/share/containers/storage/volumes
version:
  APIVersion: 4.3.1
  Built: 1668983565
  BuiltTime: Sun Nov 20 23:32:45 2022
  GitCommit: 814b7b003cc630bf6ab188274706c383f9fb9915-dirty
  GoVersion: go1.19.3
  Os: linux
  OsArch: linux/amd64
  Version: 4.3.1

Package info (e.g. output of rpm -q podman or apt list podman or brew info podman):

$ pacman -Qi podman
Name            : podman
Version         : 4.3.1-2
Description     : Tool and library for running OCI-based containers in pods
Architecture    : x86_64
URL             : https://github.com/containers/podman
Licenses        : Apache
Groups          : None
Provides        : None
Depends On      : catatonit  conmon  containers-common  crun  iptables  libdevmapper.so=1.02-64  libgpgme.so=11-64  libseccomp.so=2-64  slirp4netns
Optional Deps   : apparmor: for AppArmor support
                  btrfs-progs: support btrfs backend devices [installed]
                  cni-plugins: for an alternative container-network-stack implementation [installed]
                  podman-compose: for docker-compose compatibility
                  podman-docker: for Docker-compatible CLI
Required By     : None
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 66.95 MiB
Packager        : David Runge <[email protected]>
Build Date      : Sun 20 Nov 2022 23:32:45
Install Date    : Tue 06 Dec 2022 10:51:49
Install Reason  : Explicitly installed
Install Script  : No
Validated By    : Signature

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):
None.

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Dec 25, 2022
@E1k3 E1k3 changed the title Rootless kube play creates network in bridge mode instead of using slirp4netns Rootless kube play creates network in bridge mode instead of using default slirp4netns Dec 25, 2022
@E1k3 E1k3 changed the title Rootless kube play creates network in bridge mode instead of using default slirp4netns Rootless kube play creates new network in bridge mode instead of using default slirp4netns Jan 3, 2023
@Luap99
Member

Luap99 commented Jan 4, 2023

From the man page:

When no network option is specified and host network mode is not configured in the YAML file, a new network stack is created and pods are attached to it making possible pod to pod communication.

This was added to make communication across different pods started by play kube possible: #16029

@Luap99 Luap99 closed this as completed Jan 4, 2023
@E1k3
Author

E1k3 commented Jan 4, 2023

Is there a way to disable this behavior for rootless containers via config? Otherwise using the default slirp4netns network is impossible with the new [email protected] without changing the service file.

@Luap99
Member

Luap99 commented Jan 4, 2023

I don't think there is a way to override it via the config file at the moment, but I agree it should respect the default set in containers.conf.

@Luap99 Luap99 reopened this Jan 4, 2023
@E1k3
Author

E1k3 commented Jan 5, 2023

From the man page:

When no network option is specified and host network mode is not configured in the YAML file, a new network stack is created and pods are attached to it making possible pod to pod communication.

This was added to make communication across different pods started by play kube possible: #16029

The slirp4netns paragraph just above states:

slirp4netns[:OPTIONS,…]: use slirp4netns(1) to create a user network stack.
This is the default for rootless containers. [...]

This should probably be removed (as well as the bridge paragraph for rootful containers) to reflect the actual behavior.
If this is correct, tell me and I will create a pull request for the documentation.

I don't think there is a way to override it via the config file at the moment, but I agree it should respect the default set in containers.conf.

For now, I am using a systemd override to add --network=slirp4netns to the [email protected], but a config option would be appreciated, of course.
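
Roughly, the override I mean looks like this (a sketch only: the escaped instance name depends on the actual YAML path, and in practice the ExecStart line should be copied from the shipped [email protected] with --network=slirp4netns appended):

# ~/.config/systemd/user/podman-kube@<escaped-path-to-pod-yaml>.service.d/override.conf
# created via: systemctl --user edit 'podman-kube@<escaped-path-to-pod-yaml>.service'
[Service]
# Clear the original ExecStart, then re-add it with the network flag appended.
ExecStart=
ExecStart=/usr/bin/podman kube play --replace --network=slirp4netns %I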

EDIT:
I just looked at the documentation and saw that the network option pulls in a documentation fragment that is identical for all commands. In that case, shouldn't that also be the default behavior for kube play?

That would keep it consistent with all other podman commands unless specified otherwise in containers.conf or by an additional CLI argument.

@Luap99
Member

Luap99 commented Jan 5, 2023

Yeah, the problem is that we use the same paragraph for podman run/create as well; there slirp4netns is the correct default. Only kube play is special because it should be closer to k8s behaviour.

@edsantiago With your man page include mechanism, is there a way to exclude certain things for only one command? I would really rather not copy the full --network paragraph just to remove one sentence.

@edsantiago
Member

is there a way to exclude certain things for only one command?

Sorry, no. It's just a simple, trivial preprocessor. I looked into using cpp or m4 and abandoned that very quickly: the complexity cost >> advantages of flexibility.

@edsantiago
Member

That said, suggestions welcome. If you can find a decent simple way to implement this, I'd be happy to look into it.

@rhatdan
Member

rhatdan commented Jan 5, 2023

You can break it into two or three includes, though.
I sometimes just remove a line from the common fragment and then add it back to the specific man page.

@E1k3
Author

E1k3 commented Jan 5, 2023

@Luap99 Just to clarify, I actually meant this the other way around.
The documentation indicates that default network behavior is the same for all commands except kube play.
Wouldn't it be sensible to keep this consistency for kube play as well and add a containers.conf option to allow the current behavior?

@Luap99
Member

Luap99 commented Jan 5, 2023

No, see #16029: the old behaviour was fine, but most users want behavior closer to k8s, so it was changed to use a named network by default because this allows communication between pods.
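
For example, two pods started separately via kube play both end up on the same named network, which is what makes the pod-to-pod communication possible (pod names and files here are placeholders):

$ podman kube play pod-a.yaml
$ podman kube play pod-b.yaml
$ podman pod inspect --format "{{.InfraConfig.Networks}}" pod-a
[podman-default-kube-network]
$ podman pod inspect --format "{{.InfraConfig.Networks}}" pod-b
[podman-default-kube-network]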

@E1k3
Author

E1k3 commented Jan 5, 2023

I see. In that case, how would you make kube play "respect the default set in containers.conf"?
Currently the only way to get the default k8s behavior is not to specify --net at all, but in that case there is always going to be a default set in one of the containers.conf files.
Would you add a config option like enable_kube_network which is enabled by default?
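
For the sake of discussion, such an option might look roughly like this in containers.conf (enable_kube_network is purely hypothetical, it does not exist in any Podman release, and the section placement is only a guess):

# containers.conf (sketch of the proposed option)
[network]
# true (the proposed default): kube play creates/uses the shared
#   podman-default-kube-network bridge network (current behavior).
# false: kube play would fall back to the normal default network mode,
#   e.g. slirp4netns for rootless users.
enable_kube_network = true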

@github-actions

github-actions bot commented Feb 5, 2023

A friendly reminder that this issue had no activity for 30 days.


@rhatdan
Member

rhatdan commented Mar 15, 2023

@Luap99 What is up with this one?

@nettybun

New to podman and trying to thread the needle here - it seems like slirp4netns is no longer the default (I'm on 5.0.2) and pasta is used instead. I managed to bring up a pod via podman kube play pod.yaml (rootless) and it's working fine, however if I run the command in this thread podman generate spec my-pod | grep -A1 netns it says "bridge", but pasta is running in htop as /usr/bin/pasta --config-net --pid /run/user/1000/containers/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/containers/networks/rootless-netns/rootless-netns

Is this issue still relevant with the move away from slirp4netns? Excuse me if I'm misunderstanding :)

@nettybun

Oh, I see, it is an error in the podman kube play man page:

This uses the bridge mode for rootful containers and slirp4netns for rootless ones. - slirp4netns[:OP‐
TIONS,...]: use slirp4netns(1) to create a user network stack. This is the default for rootless containers.

I'm not sure why pasta is running then.

@edsantiago Sorry to bother you, but can you confirm the man page is supposed to be formatted this way? There are stray list-item dashes in the middle of the paragraph.

[screenshot of the rendered man page showing the run-on list items]

If this is wrong I can try to take a stab at a fix or open a separate issue.

@Luap99
Member

Luap99 commented Apr 27, 2024

New to podman and trying to thread the needle here - it seems like slirp4netns is no longer the default (I'm on 5.0.2) and pasta is used instead. I managed to bring up a pod via podman kube play pod.yaml (rootless) and it's working fine, however if I run the command in this thread podman generate spec my-pod | grep -A1 netns it says "bridge", but pasta is running in htop as /usr/bin/pasta --config-net --pid /run/user/1000/containers/networks/rootless-netns/rootless-netns-conn.pid --dns-forward 169.254.0.1 -t none -u none -T none -U none --no-map-gw --quiet --netns /run/user/1000/containers/networks/rootless-netns/rootless-netns

Is this issue still relevant with the move away from slirp4netns? Excuse me if I'm misunderstanding :)

That is expected, because we still have to connect the rootless netns to the host (either via pasta or slirp4netns). Bridge means the containers get the bridge/veth setup, similar to rootful mode, to allow inter-container communication.
See podman unshare --rootless-netns and #22467.
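
For example, looking inside the rootless netns shows both pieces: the bridge/veth devices created for the kube network and the tap device that pasta or slirp4netns uses for host connectivity (device names vary by setup):

$ podman unshare --rootless-netns ip addr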

@Luap99
Member

Luap99 commented Apr 27, 2024

The formatting of the man page was fixed in #22372, I think; however, yes, it looks like the default is documented incorrectly after the pasta switch.

@Luap99 Luap99 added the documentation Issue or fix is in project documentation label Jun 15, 2024
@contre95

Hey, I'm in the same situation as the OP. My goal is to declare Pods in a .yaml and manage them with systemd, mostly to enable the service and bring the containers back up after a system shutdown.

Would it be a good idea to specify the network in the .yaml via annotations?

apiVersion: v1
kind: Pod
metadata:
  name: <pod_name>
  annotations:
    io.podman.annotations.infra.name: "vector-infra"
    io.podman.annotations.network.name: "my-custom-network" # This is the bit that I'm proposing
  labels:
    app: <appname>
spec:
  hostNetwork: false # <-- This has to be false; otherwise, ignore the annotation.
  containers:
...

Of course, the network should already exist.
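
Until something like that annotation exists, the network can be created ahead of time and passed on the command line, e.g.:

$ podman network create my-custom-network
$ podman kube play --network my-custom-network pod.yaml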
