
Connections to the port exposed via --publish are dropped and do not reach the contained process #22959

Open
WhyNotHugo opened this issue Jun 10, 2024 · 34 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. network Networking related issue or feature pasta pasta(1) bugs or features

Comments

@WhyNotHugo

WhyNotHugo commented Jun 10, 2024

Issue Description

Exposing a port via --publish on a system with IPv6 doesn't work.

The default network does not have IPv6 enabled by default, but I enabled it manually.

I edited .local/share/containers/storage/networks/podman.json to include "ipv6_enabled": true, before starting the container to enable IPv6. Connections are now received by podman, but immediately dropped, and never reach the container.
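For reference, a quick way to check that the edit took effect (a sketch; the path is the one mentioned above, and the grep output will include the surrounding whitespace from the JSON):

$ grep ipv6_enabled ~/.local/share/containers/storage/networks/podman.json
  "ipv6_enabled": true,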

Steps to reproduce the issue

  1. podman run --rm --publish 8001:8001 whynothugo/vdirsyncer-devkit-radicale
  2. curl 'http://[::1]:8001'

Describe the results you received

*   Trying [::1]:8001...
* Connected to :: (::1) port 8001
> GET / HTTP/1.1
> Host: [::1]:8001
> User-Agent: curl/8.8.0
> Accept: */*
> 
* Request completely sent off
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer

Describe the results you expected

*   Trying [::1]:8001...
* Connected to :: (::1) port 8001
> GET / HTTP/1.1
> Host: [::1]:8001
> User-Agent: curl/8.8.0
> Accept: */*
> 
* Request completely sent off
* HTTP 1.0, assume close after body
< HTTP/1.0 302 Found
< Date: Mon, 10 Jun 2024 18:08:56 GMT
< Server: WSGIServer/0.2 CPython/3.8.10
< Location: .web
< Content-Type: text/plain; charset=utf-8
< Content-Length: 18
< 
* Closing connection

podman info output

> podman info
host:
  arch: amd64
  buildahVersion: 1.35.4
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: cgroupfs
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-r0
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: unknown'
  cpuUtilization:
    idlePercent: 97.76
    systemPercent: 1
    userPercent: 1.24
  cpus: 24
  databaseBackend: sqlite
  distribution:
    distribution: alpine
    version: 3.20.0
  eventLogger: file
  freeLocks: 2041
  hostname: hyperion.whynothugo.nl
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.9.1-0-edge
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 11957035008
  memTotal: 67180113920
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-r0
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-r0
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: runc
    package: runc-1.1.12-r3
    path: /usr/bin/runc
    version: |-
      runc version 1.1.12
      commit: 51d5e94601ceffbbd85688df1c928ecccbfa4685
      spec: 1.0.2-dev
      go: go1.22.3
      libseccomp: 2.5.5
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-2024.05.23-r0
    version: |
      pasta unknown version
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: false
    path: /run/user-1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 417h 46m 52.00s (Approximately 17.38 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/hugo/.config/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/hugo/.local/share/containers/storage
  graphRootAllocated: 1930587799552
  graphRootUsed: 1737669390336
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/user-1000/containers
  transientStore: false
  volumePath: /home/hugo/.local/share/containers/storage/volumes
version:
  APIVersion: 5.0.3
  Built: 1716231535
  BuiltTime: Mon May 20 20:58:55 2024
  GitCommit: ""
  GoVersion: go1.22.3
  Os: linux
  OsArch: linux/amd64
  Version: 5.0.3

Podman in a container

No

Privileged Or Rootless

Rootless

Upstream Latest Release

No

Additional environment details

No response

Additional information

Attempting to use localhost instead of a specific IP fails too:

curl -v 'http://localhost:8001'
* Host localhost:8001 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying [::1]:8001...
* Connected to localhost (::1) port 8001
> GET / HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/8.8.0
> Accept: */*
> 
* Request completely sent off
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer

I suppose that in some configurations curl might prefer IPv4 and it would work, but that's mostly luck.

@WhyNotHugo WhyNotHugo added the kind/bug Categorizes issue or PR as related to a bug. label Jun 10, 2024
@WhyNotHugo
Author

I edited .local/share/containers/storage/networks/podman.json to include "ipv6_enabled": true, before starting the container to enable IPv6. Connections are now received by podman, but immediately dropped, and never reach the container.

Note that before enabling IPv6 support, the result was the same.

@sbrivio-rh sbrivio-rh added the network Networking related issue or feature label Jun 10, 2024
@sbrivio-rh
Collaborator

Does something like this:

$ podman run --rm --publish 8001:8001 fedora python3 -m http.server -b ::1 8001
::1 - - [10/Jun/2024 22:40:03] "GET / HTTP/1.1" 200 -
$ curl http://[::1]:8001/ >/dev/null
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   865  100   865    0     0   596k      0 --:--:-- --:--:-- --:--:--  844k

work for you?

@Luap99
Member

Luap99 commented Jun 11, 2024

First, changing ipv6_enabled to true doesn't do anything unless you actually add an IPv6 subnet to the config; compare with a network created with podman network create --ipv6.
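For illustration, a network with an actual IPv6 subnet can be created and inspected like this (a sketch; the network name is arbitrary and the assigned subnets will vary):

$ podman network create --ipv6 ipv6net
$ podman network inspect ipv6net | grep -A1 '"subnet"'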

Second, as rootless the default (podman) network isn't even used unless you specify --network bridge; the default for rootless since 5.0 is pasta, and before that it was slirp4netns.
Pasta only uses IPv6 when the host has "public" IPv6 routes; you can easily test what pasta does by running it interactively:

$ pasta --config-net ip a
No interfaces with usable IPv6 routes
Couldn't pick external interface: disabling IPv6
...

@Luap99
Member

Luap99 commented Jun 11, 2024

Also, as root we do not support forwarding via ::1 at all, see #14491

@WhyNotHugo
Author

@sbrivio-rh Nope:

> podman run --rm -it --publish 8001:8001 fedora python3 -m http.server -b ::1 8001 
Serving HTTP on ::1 port 8001 (http://[::1]:8001/) ...
> curl 'http://[::1]:8001/'
curl: (7) Failed to connect to ::1 port 8001 after 0 ms: Couldn't connect to server

@WhyNotHugo
Author

Pasta only uses IPv6 when the host has "public" IPv6 routes; you can easily test what pasta does by running it interactively:

I don't really understand how public IP addresses are relevant here; I'm trying to connect to a container running on this same host; no networking is happening across [physical] hosts.

@WhyNotHugo
Author

Apparently my router was in some bogus state and had no public IPv6. I restarted it and now I have a public IPv6.

The requirement is still a problem: sometimes I visit regions with no IPv6 connectivity, and I still want to run a container on my laptop and connect to it. Actually, sometimes I'm on a train with no public IPv6 or IPv4 at all.

@sbrivio-rh
Collaborator

sbrivio-rh commented Jun 11, 2024

I don't really understand how public IP addresses are relevant here; I'm trying to connect to a container running on this same host; no networking is happening across [physical] hosts.

The configuration of the upstream interface is relevant because pasta, by default, tries to mimic the host networking as closely as possible. By doing so, in the bigger picture, you can avoid NAT because the container inherits the addresses that are assigned to the upstream interface on the host. See also: #22771 (comment).
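A quick way to see this (a sketch; interface names and addresses depend on the host, and alpine's busybox provides the ip command):

$ ip -4 addr show                          # note the address on the host's upstream interface
$ podman run --rm alpine ip -4 addr show   # pasta copies that address into the container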

Another advantage is that we don't have to hardcode any address or route (like slirp4netns would do), see also https://bugzilla.redhat.com/show_bug.cgi?id=2277954#c5.

This is just the default: you can override the upstream interface with -i, addresses with -a, and so on.
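With Podman, such pasta options can be passed through --network (a sketch; eth0 and 192.0.2.10 are placeholders for your actual interface and address):

$ podman run --network pasta:-i,eth0 --rm alpine ip addr show
$ podman run --network pasta:-a,192.0.2.10 --rm alpine ip -4 addr show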

The requirement is still a problem: sometimes I visit regions with no IPv6 connectivity, and I still want to run a container on my laptop and connect to it. Actually, sometimes I'm on a train with no public IPv6 or IPv4 at all.

Right, we realised just recently this isn't great for containers on trains or buses, see: #22737 (reply in thread) and following comments.

I'm currently looking for a viable solution that doesn't break the whole model. The biggest problem I'm facing is that if we skip configuring addresses and routes because none were present on the host (for a given IP version), we risk making issues like #22197 worse: there, it's actually important that pasta refuses to wait until networking is ready (because, in that case, it will be ready, at some point).

The proper solution for that issue would be in systemd (systemd/systemd#3312), but I'm not sure that will ever be addressed, so we can't plan on ignoring that, either.

@sbrivio-rh sbrivio-rh added the pasta pasta(1) bugs or features label Jun 11, 2024
@WhyNotHugo
Author

The address on the host can change during the lifetime of the container. If you want to avoid NAT and inherit the same IP on the container, then you're going to have to update the container's IP every time that the host IP changes.

Perhaps it's feasible to assign non-routable IPv6 addresses (if those are the only ones available) and update the container with routable addresses when/if those are assigned on the host?

In any case, using non-routable addresses would be better than using none, since currently the container is not reachable when using localhost:8000.

@sbrivio-rh
Collaborator

The address on the host can change during the lifetime of the container. If you want to avoid NAT and inherit the same IP on the container, then you're going to have to update the container's IP every time that the host IP changes.

Right, that was my idea to start with, but it comes with further complications, see #22737 (reply in thread).

Perhaps it's feasible to assign non-routable IPv6 addresses (if those are the only ones available) and update the container with routable addresses when/if those are assigned on the host?

That might be a good idea nevertheless, I'll need to check. Patches (tested, in this case ;)) are warmly welcome as well.


A friendly reminder that this issue had no activity for 30 days.

@Luap99
Member

Luap99 commented Jul 12, 2024

Sorry I am not following the conversation here, is there actually a specific work item tracked here in either pasta or podman or can this be closed?

@WhyNotHugo
Author

@Luap99 Yes, this is still an issue.

To summarise, if the host doesn't have a publicly routable IPv6 address when a container is started, the container cannot be reached from the host (with the default configuration).

@sbrivio-rh
Collaborator

Sorry I am not following the conversation here, is there actually a specific work item tracked here in either pasta or podman or can this be closed?

Kind of, in the sense that loosening start-up checks and admitting IPv6 addresses that are not routable is one of the bits that could improve support for the use case described at #22737 (reply in thread), in the short term.

To summarise, if the host doesn't have a publicly routable IPv6 address when a container is started, the container cannot be reached from the host (with the default configuration).

...via IPv6, that is.

@WhyNotHugo
Author

...via IPv6, that is.

This is what Firefox, curl, and most other clients try by default. Note that the host is reachable but refuses the connection, so there is never any reason for clients to retry using IPv4.

@sbrivio-rh
Collaborator

Note that the host is reachable, but refuses the connection, so there is never any reason for clients to retry using IPv4.

Ouch, I missed this detail.

@Luap99
Member

Luap99 commented Jul 14, 2024

This is what Firefox, curl, and most other clients try by default. Note that the host is reachable but refuses the connection, so there is never any reason for clients to retry using IPv4.

I don't understand this part. A connection will always get connection refused when connecting to a local port where nothing is listening, so why should this ever be a reason for curl, Firefox, etc. not to retry? And trying this locally I see curl and Firefox trying ::1 first and then falling back to 127.0.0.1, so what am I missing here?

@sbrivio-rh
Collaborator

Perhaps it's feasible to assign non-routable IPv6 addresses (if those are the only ones available) and update the container with routable addresses when/if those are assigned on the host?

That might be a good idea nevertheless, I'll need to check. Patches (tested, in this case ;)) are warmly welcome as well.

I looked into this, but if we include link-local addresses in the set of "non-routable" ones we might accept to use (if we don't, that won't fix your use case), it might very well mean that we'll assign, or not, a global unicast address to the guest depending on timing (see also #22197).

Should systemd/systemd#3312 ever get fixed, that would be much less critical and I would go ahead with this kind of change, but as long as it's not, it risks causing bigger issues.

So I'd rather implement a more comprehensive fix that involves monitoring host addresses and routes via netlink. We started reporting some ideas and concerns in section 2. of this pad.
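The events such a monitor would react to can be previewed from the command line with iproute2, which listens on the same netlink groups (it prints a line for each address or route change on the host):

$ ip monitor address route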

@WhyNotHugo
Author

I'm not sure that systemd/systemd#3312 would help.

I suppose that you intend to monitor network-online.target. From my experience on laptops with systemd (using iwd), nothing automatically triggers network-online.target; one would need to write some (not-that-trivial) glue code to trigger this unit when a wifi network is connected and has resolved an IP and stop it when the link is down.

Regardless, such a solution would only work on configurations using systemd; the issue would still need a separate portable solution for other distributions.

I'd rather implement a more comprehensive fix that involves monitoring host addresses and routes via netlink.

This sounds a lot more reliable.

@akostadinov

I read all comments but I don't understand what's wrong with just binding to ::1 when it is available regardless of routable IPv6 addresses. Whether bindings are later updated because of networking changes or not seems to be a different topic.

@sbrivio-rh
Collaborator

sbrivio-rh commented Jul 31, 2024

I read all comments but I don't understand what's wrong with just binding to ::1 when it is available regardless of routable IPv6 addresses.

There is probably nothing wrong or inherently complicated about that, but pasta disables IPv6 altogether if it can't select an upstream interface with IPv6 support.

This is just the simplest approach we went with, but indeed we could (probably should) define several levels of "IPv6 support", which is slightly more complicated because it raises questions such as: what do you advertise via NDP? Perhaps nothing? Should we answer neighbour solicitations at all? What do we map as source from the host in that case? A link-local address, I suppose?

Feel free to send patches for that, by the way, if you have a tentative answer to those questions. :)

@akostadinov

Maybe it should do the same as slirp4netns does for local addresses, whatever that is. Starting my containers with slirp4netns, they just work.

@Luap99
Member

Luap99 commented Jul 31, 2024

Maybe it should do the same as slirp4netns does for local addresses, whatever that is. Starting my containers with slirp4netns, they just work.

Except that slirp4netns doesn't support IPv6 port forwarding at all; we must use an extra port forwarder process to even have IPv6 support there (rootlessport). And that process is really more of a hack: it remaps IPv6 -> IPv4 in the container, which isn't right either (#14709), and it doesn't preserve the source IP of the original request, which is what most users care about.

And yes this stuff really should be documented (#22221)

@sbrivio-rh
Collaborator

Starting my containers with slirp4netns, they just work.

Same here with pasta, but my assumption was that if you use IPv6, your setup also has IPv6 connectivity, and this doesn't seem to be the case for many setups, yours included.

No, we don't really have to reintroduce those buggy behaviours that @Luap99 just described. We can keep IPv6 enabled even if the setup has no global IPv6 connectivity, but we need to take care of a few details while doing that.

@akostadinov

It is interesting because I just started running containers on a clean Fedora 40. Only changed rootless_storage_path from the default config. And it binds to ::1 and other non-routable IPv6 addresses present by default on the machine! In fact ss -l6n shows the port listening on :: and there is nothing about it on IPv4 (while still being accessible through the host's IPv4 addresses).

So it seems somehow resolved already?
podman-5.1.2-1.fc40.x86_64

P.S. By default containers can't access the host. But reading man podman run, that seems by design, although --network pasta:--map-gw did NOT help accessing the mapped port (while from the host, as I previously wrote, it is accessible)... but that's not for this thread.

@sbrivio-rh
Collaborator

It is interesting because I just started running containers on a clean Fedora 40. Only changed rootless_storage_path from the default config. And it binds to ::1 and other non-routable IPv6 addresses present by default on the machine! In fact ss -l6n shows the port listening on :: and there is nothing about it on IPv4 (while still being accessible through the host's IPv4 addresses).

So it seems somehow resolved already? podman-5.1.2-1.fc40.x86_64

One thing we changed recently is that, while pasta decides if IPv6 support can be enabled, it now considers any host interface which has any route (not just default routes), see https://passt.top/passt/commit/netlink.c?id=450a6131beabd6537f2460bcc110a9a961697649.

The package for which the version is relevant here is passt, not podman.

P.S. By default containers can't access the host. But reading man podman run, that seems by design

Correct.

although --network pasta:--map-gw did NOT help accessing the mapped port (while from the host, as I previously wrote, it is accessible)... but that's not for this thread.

You need to use the address of the default gateway as seen from the container. You can't, of course, connect to localhost, because that's the container itself. The choice of using the default gateway to represent the host is arbitrary (see "Handling of traffic with local destination and source addresses" in pasta(1)) and we're working right now to make that configurable.
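For example, from inside a container started with --map-gw, something along these lines should reach a port published on the host (a sketch; 3306 matches the mariadb example in this thread):

/ # GW=$(ip -4 route show default | awk '{print $3}')
/ # nc -v "$GW" 3306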

@akostadinov

akostadinov commented Aug 12, 2024

One thing we changed recently is that, while pasta decides if IPv6 support can be enabled, it now considers any host interface which has any route (not just default routes).

So basically the machine has a different setup and that's why it has IPv6 available. Thank you for the explanation!

$ rpm -qa passt
passt-0^20240624.g1ee2eca-1.fc40.x86_64
$ ip -6 route
fd98:ed3a:7e00::c00 dev enp1s0 proto kernel metric 100 pref medium
fd98:ed3a:7e00::/64 dev enp1s0 proto ra metric 100 pref medium
fd98:ed3a:7e00::/48 via fe80::62e3:27ff:fec8:540c dev enp1s0 proto ra metric 100 pref medium
fe80::/64 dev enp1s0 proto kernel metric 1024 pref medium

But it is still strange that the other machine does not get IPv6 enabled, given it doesn't seem to have fewer IPv6 routes (update: ah, it uses an older passt-0^20240607.g8a83b53-1.fc40.x86_64):

$ ip -6 route
2620:52:0:2de0::/64 dev tun0 proto kernel metric 50 pref medium
2620:52::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2620:52:2::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2620:52:4::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2a05:7640::/33 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
fd98:ed3a:7e00::521 dev wlp2s0 proto kernel metric 600 pref medium
fd98:ed3a:7e00::/64 dev wlp2s0 proto ra metric 600 pref medium
fd98:ed3a:7e00::/48 via fe80::62e3:27ff:fec8:540c dev wlp2s0 proto ra metric 600 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev wlp2s0 proto kernel metric 1024 pref medium

I also don't really understand the below. How do I know the default gateway of a container, and can I access another container in the default network directly? btw I have stopped the firewalld service just in case it plays some role.

You need to use the address of the default gateway as seen from the container.

$ podman run -it --network pasta:--map-gw --rm mariadb:11-ubi9 bash
[root@f64ea24695de /]# microdnf install -y netcat
...

[root@f64ea24695de /]# cat /etc/hosts 
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1     localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.124   f64ea24695de stupefied_ramanujan

[root@f64ea24695de /]# ip route
default via 192.168.1.1 dev enp1s0 proto dhcp metric 100 
192.168.1.0/24 dev enp1s0 proto kernel scope link metric 100 

[root@f64ea24695de /]# nc -v 192.168.1.124 3306
nc: connect to 192.168.1.124 port 3306 (tcp) failed: Connection refused

[root@f64ea24695de /]# nc -v 192.168.1.1 3306
nc: connect to 192.168.1.1 port 3306 (tcp) failed: Connection timed out

While from host:

$ nc 192.168.1.124 3306
R
11.4.2-MariaD9G9'UW8N��-��a~tMa$*D[{ymmysql_native_password^C

@sbrivio-rh
Collaborator

One thing we changed recently is that, while pasta decides if IPv6 support can be enabled, it now considers any host interface which has any route (not just default routes).

So basically the machine has a different setup and that's why it has IPv6 available. Thank you for the explanation!

$ rpm -qa passt
passt-0^20240624.g1ee2eca-1.fc40.x86_64
$ ip -6 route
fd98:ed3a:7e00::c00 dev enp1s0 proto kernel metric 100 pref medium
fd98:ed3a:7e00::/64 dev enp1s0 proto ra metric 100 pref medium
fd98:ed3a:7e00::/48 via fe80::62e3:27ff:fec8:540c dev enp1s0 proto ra metric 100 pref medium
fe80::/64 dev enp1s0 proto kernel metric 1024 pref medium

But it is still strange that the other machine does not get IPv6 enabled, given it doesn't seem to have fewer IPv6 routes (update: ah, it uses an older passt-0^20240607.g8a83b53-1.fc40.x86_64):

That's why:

$ git describe --tags 450a6131beabd6537f2460bcc110a9a961697649
2024_06_07.8a83b53-16-g450a613

that is, 2024_06_07.8a83b53 will not enable IPv6 operation (again, by default) if there are no default routes for IPv6.

$ ip -6 route
2620:52:0:2de0::/64 dev tun0 proto kernel metric 50 pref medium
2620:52::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2620:52:2::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2620:52:4::/48 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
2a05:7640::/33 via 2620:52:0:2de0::2 dev tun0 proto static metric 50 pref medium
fd98:ed3a:7e00::521 dev wlp2s0 proto kernel metric 600 pref medium
fd98:ed3a:7e00::/64 dev wlp2s0 proto ra metric 600 pref medium
fd98:ed3a:7e00::/48 via fe80::62e3:27ff:fec8:540c dev wlp2s0 proto ra metric 600 pref medium
fe80::/64 dev tun0 proto kernel metric 256 pref medium
fe80::/64 dev wlp2s0 proto kernel metric 1024 pref medium

I also don't really understand the below. How do I know the default gateway of a container

ip -j -4 route show | jq -rM ".[] | select(.dst == \"default\").gateway", or like you did, with ip route (192.168.1.1 is correct in your case).

and can I access another container in the default network directly?

It depends on how you published the port.

btw I have stopped the firewalld service just in case it plays some role.

You need to use the address of the default gateway as seen from the container.

$ podman run -it --network pasta:--map-gw --rm mariadb:11-ubi9 bash
[root@f64ea24695de /]# microdnf install -y netcat
...

[root@f64ea24695de /]# cat /etc/hosts 
127.0.0.1       localhost localhost.localdomain localhost4 localhost4.localdomain4
::1     localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.124   f64ea24695de stupefied_ramanujan

[root@f64ea24695de /]# ip route
default via 192.168.1.1 dev enp1s0 proto dhcp metric 100 
192.168.1.0/24 dev enp1s0 proto kernel scope link metric 100 

[root@f64ea24695de /]# nc -v 192.168.1.124 3306
nc: connect to 192.168.1.124 port 3306 (tcp) failed: Connection refused

Right, because 192.168.1.124 is this container itself (f64ea24695de).

[root@f64ea24695de /]# nc -v 192.168.1.1 3306
nc: connect to 192.168.1.1 port 3306 (tcp) failed: Connection timed out

Given your test below, this should work instead (see also a minimal example I posted at https://www.reddit.com/r/podman/comments/1c46q54/comment/kzppmpg/). But how did you publish the port for the second container? Did you bind it to a particular address?

While from host:

$ nc 192.168.1.124 3306
R
11.4.2-MariaD9G9'UW8N��-��a~tMa$*D[{ymmysql_native_password^C

@akostadinov

akostadinov commented Aug 13, 2024

The mariadb container was started with the command below. And from the host I can access it on any IPv4 or IPv6 host IP.

$ podman run --name mariadb -p 3306:3306 -e MARIADB_RANDOM_ROOT_PASSWORD=1 -d mariadb:11-ubi9
$ ss -l6n
...
tcp     LISTEN   0        128                                         *:3306                                               *:*

@sbrivio-rh
Collaborator

The mariadb container was started with the command below. And from the host I can access it on any IPv4 or IPv6 host IP.

$ podman run --name mariadb -p 3306:3306 -e MARIADB_RANDOM_ROOT_PASSWORD=1 -d mariadb:11-ubi9
$ ss -l6n
...
tcp     LISTEN   0        128                                         *:3306                                               *:*

This works for me:

$ podman pull docker.io/mariadb:11-ubi9
Trying to pull docker.io/library/mariadb:11-ubi9...
Getting image source signatures
Copying blob e40269f1d99b done   | 
Copying blob d377dcf18038 done   | 
Copying blob b411f31673b4 done   | 
Copying blob 4dc01ae45216 done   | 
Copying blob 79e889d27a08 done   | 
Copying blob 247c2d03e948 skipped: already exists  
Copying blob 2709efa92ba1 done   | 
Copying blob 731a75340d22 done   | 
Copying blob 39e846335c7a done   | 
Copying config e9805e8aab done   | 
Writing manifest to image destination
e9805e8aab7709c04c17f69d183beab758624275bf32ed34d81e7800726ab3ca
$ podman run --name mariadb -p 3306:3306 -e MARIADB_RANDOM_ROOT_PASSWORD=1 -d mariadb:11-ubi9
5f5b9115d5853f54c0ddfdc2304a25488130260df203ae8a2987c3c6cd3620a5

Then, as microdnf didn't work:

$ microdnf install -y netcat
error: Failed to create: /var/cache/yum/metadata

I resorted to Alpine Linux and telnet:

$ podman run --net=pasta:--map-gw --rm -ti alpine sh
/ # apk add inetutils-telnet
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.20/community/x86_64/APKINDEX.tar.gz
(1/4) Installing ncurses-terminfo-base (6.4_p20240420-r0)
(2/4) Installing libncursesw (6.4_p20240420-r0)
(3/4) Installing ncurses (6.4_p20240420-r0)
(4/4) Installing inetutils-telnet (2.5-r0)
Executing busybox-1.36.1-r29.trigger
OK: 9 MiB in 18 packages
/ # telnet 88.198.0.161 3306
Trying 88.198.0.161...
Connected to 88.198.0.161.
Escape character is '^]'.
[
5.5.5-10.11.6-MariaDB-2\�~yxm9sZH;��-.rtpWA"$+Kx>mysql_native_password^CConnection closed by foreign host.

I would suggest you take traffic captures on the loopback interface (tcpdump -i lo, or use Wireshark/tshark) to see what's going on. You can also get container-side traffic captures from pasta with -p / --pcap: --net=pasta:...--pcap,/tmp/my.pcap.
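Putting both suggestions together (a sketch; the port and pcap path are illustrative, and IMAGE stands for the container image under test):

$ tcpdump -i lo port 8001                                   # host side: watch loopback traffic
$ podman run --net=pasta:--pcap,/tmp/my.pcap -p 8001:8001 --rm IMAGE
$ tcpdump -r /tmp/my.pcap                                   # read the container-side capture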

@akostadinov

akostadinov commented Aug 29, 2024

Thank you for the replies! They helped a lot. IPv6 appears to work without a routable IPv6 address. Now I observe something else strange with published ports. I don't know if it is a bug or expected behavior; as a user, though, it is not very welcome.

network       listener (in container)   client (on host)   result
slirp4netns   ipv4                      ipv4               pass
slirp4netns   ipv4                      ipv6               pass
slirp4netns   ipv6                      ipv4               pass
slirp4netns   ipv6                      ipv6               pass
pasta         ipv4                      ipv4               pass
pasta         ipv4                      ipv6               FAIL
pasta         ipv6                      ipv4               pass
pasta         ipv6                      ipv6               pass

To start the listener, one could run the following (remove the network option for pasta, and change TCP6 to TCP4 for the IP version):

podman run --network slirp4netns -p 1234:1234 -it --rm alpine/socat TCP6-LISTEN:1234,fork,reuseaddr -

As for the client, I used netcat: nc ::1 1234 or nc 127.0.0.1 1234.
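Concretely, the failing pasta combination from the table corresponds to (a sketch):

$ podman run -p 1234:1234 -it --rm alpine/socat TCP4-LISTEN:1234,fork,reuseaddr -
$ nc ::1 1234          # fails with pasta; nc 127.0.0.1 1234 succeeds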

Is this something that is expected to be fixed as part of this issue? Is there something I can do to make the ipv4 listener/ipv6 client combination work?

-- this was performed with passt-0^20240821.g1d6142f-1.fc40.x86_64

@sbrivio-rh
Collaborator

Now I observe something else strange with published ports. I don't know if it is a bug or an expected behavior.

Sorry, I missed this somehow.

In general, trying to connect an IPv4 client to an IPv6 server, or the other way around, shouldn't work at all, but it (partially) works with loopback addresses (at least with pasta) because we just handle those as "loopback" without really caring about the exact address.

I can reproduce the inconsistent behaviour you see; that's a separate issue, I would say. Feel free to report it at bugs.passt.top (it's a pasta issue, you don't need Podman to reproduce it) or on this tracker.

@sbrivio-rh
Collaborator

Now I observe something else strange with published ports.

Cc: @dgibson

@sbrivio-rh
Collaborator

To summarise, if the host doesn't have a publicly routable IPv6 address when a container is started, the container cannot be reached from the host (with the default configuration).

Maybe it should do the same as slirp4netns does for local addresses, whatever that is. Starting my containers with slirp4netns, they just work.

For the time being, we implemented a so-called local mode which, if the network is not connected, assigns/uses link-local addresses for both IPv4 and IPv6. The containers will at least start, and they will be addressable.

This is not the full solution yet; that will be based on the netlink monitor, as I mentioned.
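Assuming a host without connected networking, the addresses pasta now picks can be checked with the same interactive invocation shown earlier in this thread (output will vary with the passt version):

$ pasta --config-net ip addr show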
