adding IPv6 section to 87-podman-bridge.conflist breaks host's ipv6 network access #6114
Comments
@mccv1r0 Mind taking a look? This one seems very much like a CNI issue. |
Looking... (meetings all a.m.) I just tested using Fedora 30 and all this works. I run this setup 24x7. Things breaking on the host interface shouldn't happen; CNI / Podman don't touch it. How good is IPv6 support in CentOS 7? Possibly a CentOS 7 issue. |
at some point this happens (output of ip monitor):
so something related to starting the container is triggering the drop of the default route.. i don't think this can be attributed to the OS itself.. i'd say ipv6 support is pretty solid in CentOS 7.. it's been in the linux kernel since way before version 3.10, which is what CentOS 7.x standardized on (and that kernel is heavily backported by Red Hat with more modern patches).. i'm wide open to ideas on how to diagnose what could be causing this; it could be a simple case of my setup being broken, plain PEBKAC/user error..

@mccv1r0 that's pretty much the same setup i would like (except ultimately i'd like to do it rootless).. would you mind sharing how your setup/configs differ from the default podman installation and what versions you're using? |
Is that output from the host or from inside the container? You would have to be running ip monitor inside the container's namespace to see what CNI does there. Your output above is from the host:
Another hint is in that output. I'm curious, does eth0 on the host have an IPv6 address? For things related to podman:
That's pretty much all podman sets up. Let's focus on that first. |
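A quick way to see both sides of that is to watch for changes on the host and inside the container's network namespace at the same time; a sketch (the container name mycontainer is a placeholder):

```
# On the host: watch IPv6 address and route changes while the container starts
ip -6 monitor address route &

# Inside the container's network namespace: do the same there
PID=$(podman inspect --format '{{.State.Pid}}' mycontainer)   # 'mycontainer' is a placeholder
nsenter -t "$PID" -n ip -6 monitor address route
```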
yes the ip mon output is from the host.. (i don't see how i could have run it inside the container if i'm setting the container up in the first place). yes, eth0 has several IPv6 addresses: link-local of course, as well as a routable GUA range i was assigned for ipv6 internet reachability.. and yes, i am giving the same range to the CNI in the 87-podman-bridge.conflist.. the container comes up with a GUA in that same range after starting.

the container can ping the host's cni-podman0 ipv6 address (ending in ::1) and the host can ping the cni-generated IPv6 address of the container.. in that sense i guess the bridge config/veth attachment works as expected.. thing is, i'm just kind of taking a stab at using the host's GUA range, but even when i tried to give it a ULA range, host ipv6 networking still broke.

what address space are you using in your setup? is the /48 also shared by the host like in my setup? is it a GUA /48 or a ULA /48? how is the routing happening, is it just you enabling one of the forwarding sysctls (and doing SNAT or NPT in the case of non-GUA addresses)? |
So it looks like
IMHO what you are trying to do is the "Right Thing" (tm).
I never NAT IPv6 (unless forced to, e.g. k8s). I do both (GUA and ULA), obviously not at the same time. Some providers give you a routable /48 or /56 for use with downstream networks.

For any relevant host interfaces, i.e. eth0 (but there may be more): how are the IPv6 addresses obtained? e.g. where did the /64 you are using come from? What is the IPv6 address/prefix for eth0 (assuming eth0 is the relevant interface)? From the above, I don't see any IPv6 address on eth0. Do any of the interfaces receive IPv6 addresses via DHCPv6? Is anything upstream sending RAs? Any of these will get the kernel doing "stuff" depending on how your host is set up (explicitly or by defaults). |
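A sketch of how those questions can be answered from the host (assuming eth0 is the uplink interface):

```
# Addresses on the uplink, with SLAAC lifetimes (look for "dynamic" plus valid_lft/preferred_lft)
ip -6 addr show dev eth0

# Routes, including any default route learned from router advertisements ("proto ra")
ip -6 route show

# Watch router advertisements (ICMPv6 type 134) arriving on the link
tcpdump -vni eth0 'icmp6 and ip6[40] == 134'
```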
If CNI doesn't claim to configure the host for external v6 reachability of containers, then Podman presently doesn't make any such claims. I'd have to verify against Docker to see if we should be aiming to do so. |
this /64 was given to me by my provider (their router is on 2a03:8c00:1a:6::1), i can ping this fine, and it looks like SLAAC is used to configure the 2a03:8c00:8::/64 address range that ended up on my eth0. i statically assigned those IPv6 settings in my /etc/sysconfig/network-scripts/ifcfg-eth0 file (along with the IPv4 static address).

i'm not at the level of doing port forwarding, but in the IPv4 case, i can run a container, attach to it and instantly have internet connectivity (e.g. the 10.88.0.1 automatically routes to the rest of the internet, doing NAT).. however with IPv6, i can't even ping my eth0's IPv6 (slaac-assigned) address.. and i have that forwarding sysctl for ipv6 set to 1.. it's as if the cni-podman0 bridge is not actually connected to the host's IPv6 network or something, since i can't ping anything outside of it.. i am not sure if that's even possible if the bridge is supposed to be purely layer-2..

as far as the default route being dropped, that's the most crucial thing i need to figure out: what is causing that.. |
Did your provider give you a static IPv6 address? If not, SLAAC will suffice, but... don't put that in your ifcfg-eth0. Did the provider delegate another prefix to you for use on e.g. cni-podman0? Otherwise they will have no idea how to route to you. Check your math on the provider-supplied /64:
those are different 64-bit prefixes. SLAAC should only come up with the lower 64 bits; the upper 64 bits come from the prefix advertised in the provider's RA.
We might not be there yet, but you'll need to let
Once we know each interface behaves, we'll worry about routes. From the above it looks like the interfaces don't have prefixes assigned properly. |
per @mheon:
this statement seems at odds with the following, per @mccv1r0:
how can i do this too? :-) what do your config files/versions/etc look like? |
On some systems I manually configure Linux to route IPv6 packets... on others I run routing daemons. When I added the second NIC, eth1, and plugged it into an L2 switch (which is analogous to cni-podman0)...
Ok good point, i cleaned up my ifcfg-eth0 file (removed the address specifics and let SLAAC do its thing).. the only thing in my ifcfg-eth0 related to IPv6 now (per this redhat blog post) is:
and ipv6 is verified to work again without hardcoding static IPv6 addresses/ranges.. this hasn't had any effect on my default route being dropped when i start a podman container, however :-(
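For reference, a minimal SLAAC-only ifcfg fragment along those lines usually looks like the sketch below (illustrative key/value pairs in the initscripts ifcfg convention, not necessarily the exact lines from the blog post):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0  (IPv6-related lines only)
IPV6INIT=yes          # enable IPv6 on the interface
IPV6_AUTOCONF=yes     # use RAs/SLAAC for addressing
DHCPV6C=no            # don't run a DHCPv6 client
IPV6_DEFROUTE=yes     # allow the RA-learned default route to be installed
```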
no i've only been given one /64 (i'm
the default router i was told was at 2a03:8c00:1a:6::1, and that my range was as below
I think the router is at 6::1 and i'm supposed to be on 8::/64, but i see what you're saying and you're right, i don't really need to know the router's 6::1 addr if i'm using SLAAC, since that gave me a default route via the link-local address of the router's interface anyway
I've now also set ip6tables policy to ACCEPT on both the INPUT and FORWARD chains, and i still can't ping my eth0's GUA from within the container.. :-(
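(The wide-open firewall state being described can be checked like this; a debugging-only sketch:)

```
ip6tables -P INPUT ACCEPT     # default-accept inbound
ip6tables -P FORWARD ACCEPT   # default-accept forwarded traffic (container <-> eth0)
ip6tables -S                  # list the filter rules actually loaded
ip6tables -t nat -S           # CNI's masquerade chains live here when ipMasq is enabled
```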
the interface prefix assignments are now correct (hands-off using SLAAC vs hardcoding static entries.. seems i have a lot of IPv4 legacy thinking to undo).. so what's happening to my default route :-( |
i think i'm being misunderstood.. somehow you are able to accomplish this...
without any configuration file changes or custom settings? :-) that's what i'm asking for please |
There are topology-specific settings that need to be in place. What works with SLAAC doesn't (necessarily) work with DHCPv6 and/or static, or a combination. I'm not convinced your current prefixes are right. Regardless, if you used ULA for all your internal traffic, you should be able to get inside the container and reach the IPv6 address of eth0. Assuming that the link to your provider is
You'll need accept_ra=2 so that the kernel doesn't mess with the routing table. |
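Concretely, that combination of sysctls is roughly the following (a sketch; eth0 is assumed to be the uplink interface):

```
# Keep accepting RAs (and the default route they carry) even with forwarding enabled
sysctl -w net.ipv6.conf.eth0.accept_ra=2
# Let the host forward IPv6 between eth0 and cni-podman0
sysctl -w net.ipv6.conf.all.forwarding=1
```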
even with those sysctl's set, ipv6 on the host still stops working if i start a podman container (and i can't ping anything beyond the cni-podman0 LL/GUA addr).. curiously, if i manually re-add the default route, host ipv6 starts working again, so it's definitely the dropped route that is causing ipv6 to (what i have been calling) "stop working" on my host.. moreover, once i manually add back the route that was deleted, starting a container does not affect ipv6 on the host afterwards.. not sure what to make of that.. is there something wonky with my config in the (reset) "clean slate" state that causes the default route to be dropped initially (but not re-dropped after i manually add it back in)? |
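For the record, re-adding the default route by hand looks something like this (a sketch; fe80::1 stands in for the provider router's link-local address):

```
ip -6 route show default                       # is the default route still there?
ip -6 route add default via fe80::1 dev eth0   # placeholder gateway and interface
```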
negative on the firewall being the problem :-( this is my failed attempt at pinging from the container to the GUA address that is on eth0
and i've tried different ranges inside the 87-podman-bridge.conflist as well |
If your provider is sending RA's, the kernel should detect the RA and add the default route for you.
If I didn't manually delete it, the RA received by eth0 would have refreshed the timeout of the existing entry. AFAICT there are things not quite right about your setup on the host. This has nothing to do with podman, docker or libvirt; none of them should be touching the host interfaces or the routes. CNI enters the network namespace of the container and runs commands in that namespace. The IPv6 config in the conflist only applies inside that namespace. |
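That refresh behaviour is visible in the route's expiry timer (a sketch; the commented line only illustrates the general output format):

```
# An RA-learned default route carries an expiry that each new RA refreshes, e.g.:
#   default via fe80::... dev eth0 proto ra metric 1024 expires 1774sec
ip -6 route show default
```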
i've confirmed my firewall is wide-open, and i'm receiving RA's:
i've set up a wrapper script for the host-local plugin and ran it through strace, nothing unusual (at least nothing that would lead me to see why we're dropping the default ipv6 route).. it's strange because, for some reason, with all the redirections i'm doing to capture stdout and stderr in the wrapper script, the host-local process does not end up exiting (even tho i see several exits in the strace output).. pings continue to work.. ~strangely these assignments do not include the default IPv6 route~ my default route was not a part of this as i had temporarily removed it
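A wrapper of that general shape might look like the sketch below (assuming the plugins live in /usr/libexec/cni; this is an illustration, not the exact script used here):

```
#!/bin/sh
# Log the CNI environment, the network config arriving on stdin, and the plugin's
# reply, then pass everything through to the real host-local binary under strace.
LOG=/tmp/host-local-debug.log
REAL=/usr/libexec/cni/host-local.real   # the original binary, moved aside

{
  echo "=== $(date) CNI_COMMAND=$CNI_COMMAND CNI_IFNAME=$CNI_IFNAME CNI_NETNS=$CNI_NETNS"
  env | grep '^CNI_'
} >> "$LOG"

IN=$(cat)                        # capture stdin so it can be logged and replayed
printf '%s\n' "$IN" >> "$LOG"

printf '%s' "$IN" | strace -f -o /tmp/host-local.strace "$REAL" | tee -a "$LOG"
```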
in any case, to rule out this being a host- or network-specific issue, i've also set up another host (also KVM) at home (so an entirely different network), same package versions, same OS, same config for 87-podman-bridge.conflist.. also confirmed RA's are coming in (no firewall set up there either).. and same IPv6 default route dropping :-/ |
There are two default routes. At this point I don't know which one we're talking about.
The output re host-local only pertains to the container's network namespace. Does the container end up with an IPv6 default route? Your conflist doesn't set
What does ip -6 route show inside the container? |
Apologies, you're spot on tho: when i refer to the default route it's almost always from the host perspective, i.e. the IPv6 one that's getting deleted on the host when i launch a podman container.. the default route within the container i've never actually checked until now (i have done so below), because the host connectivity is the main focus as it has the higher outage potential.
this hasn't been my experience..
as you can see above, after my default route somehow gets deleted, it is not added back automatically, even though my firewall is wide open AND i confirmed via tcpdump that RA's are coming in, and i've ensured the sysctl's are set as mentioned..
i see, this has all been a big educational opportunity to learn more about the way CNI functions and its components..
it does have this.. here it is in full
(also here i've tried to set the network range to a ULA (simply with
sorry i haven't included it in full earlier (i included a diff against what is distributed with the podman package), but that is also why i was hoping to see what the file looks like for someone who has this working, in case there was something glaringly wrong with mine.. i haven't found any official example of what this file should look like other than the host-local plugin github page, but that one doesn't talk about how the address space being used relates to how the host is set up (in case it makes a difference?).. as i'm not super familiar with the intricacies of all that CNI does, what i've been forced to do without a canonical reference is essentially throw a bunch of poo at the wall and see what sticks as i try different things and test different theories..

i would also have just blamed the host and the network, except as mentioned in my last update, i've reproduced the same behaviour on a new blank VM with the same versions, same 87-podman-bridge.conflist, everything the same EXCEPT the network (at home this time instead of my colo box)..
thanks for that clarification
as you can imagine, i'm pretty much at the end of my rope and out of ideas.. |
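For readers who don't have the file handy, the general shape of 87-podman-bridge.conflist with a second, IPv6 range added under ipam.ranges is sketched below (an illustration with a made-up ULA prefix, not the exact file from this thread; the stock contents vary between podman versions):

```
cat > /etc/cni/net.d/87-podman-bridge.conflist <<'EOF'
{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ],
        "ranges": [
          [ { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" } ],
          [ { "subnet": "fd00:1234:5678::/64", "gateway": "fd00:1234:5678::1" } ]
        ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } },
    { "type": "firewall" }
  ]
}
EOF
```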
Hi, I just wanted to quickly chime in and say a few things.
Please be aware you have to configure ip6tables (or whatever your OS firewall is) for forwarding. The second issue is that I can't statically set an IPv6 address with the --ip option. If I remove the following check and re-compile, statically setting the IPv6 address works great! That check is: https://github.com/containers/libpod/blob/v1.9/pkg/spec/namespaces.go#L85-L87
Then I can run the following command and get this output:
I'm currently using Fedora CoreOS 31
|
There should be a dedicated flag for static IPv6 addresses (something like --ip6). |
A friendly reminder that this issue had no activity for 30 days. |
@mheon What is the scoop on this one? |
Are there any specific settings that need to be configured in ip6tables? When i try to remove a container that has an ipv6 address, it gives the below error: ERRO[0000] Error deleting network: running [/sbin/ip6tables -t nat -D POSTROUTING -s fd00::1:8:a/112 -j CNI-355124625f5423fd129aa828 -m comment --comment name: "demo" id: "23f508256866835fcedffb2fdd0f1436f3a47e5e8c99115004a8034b68fa62df" --wait]: exit status 1: iptables: Bad rule (does a matching rule exist in that chain?). Please suggest. |
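That error generally just means the rule CNI is trying to delete is no longer present; checking what is actually loaded can narrow it down (a sketch):

```
ip6tables -t nat -S POSTROUTING   # is the jump to the CNI-... chain still there?
ip6tables -t nat -S | grep CNI    # list all CNI-managed IPv6 NAT chains and rules
```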
A friendly reminder that this issue had no activity for 30 days. |
@rhatdan Could you remove the stale label? |
Done. |
Hi, I had a lot of headaches making IPv6 happen with Podman. I started with rootless and gave up, thinking rootful would be quick and simple. But I was wrong. I ran into this same problem: the IPv6 connectivity of the host machine just broke as soon as the container started. I can publish the port on IPv6 alright, and connect to the published port from the host alright, but without network access, clients just can't connect. Are there any workarounds right now? Or do I have to ditch it and search for other solutions? Thanks! |
A friendly reminder that this issue had no activity for 30 days. |
@rhatdan Could you remove the stale label? |
A friendly reminder that this issue had no activity for 30 days. |
@rhatdan Could you remove the stale label? |
@MartinX3 you just commenting on it seems to have removed the stale label...:^) |
@rhatdan thank you for enhancing the bot :D |
I would love to take credit for that, but someone else did it, we just take advantage of it. |
What's the state of things here? I'm using CentOS Stream 9 with rootless containers. With IPv6 enabled on the host, containers can't use it. So not "out of the box" yet 😄 |
Rootless networking is more complex. Did you set enable_ipv6 to true for the slirp4netns options? |
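For rootless, that option can be passed per container or set globally; both forms below are sketches (syntax as documented for newer podman releases, so verify against your version):

```
# Per container: ask slirp4netns for IPv6 inside the usermode network
podman run --network slirp4netns:enable_ipv6=true -it --rm docker.io/library/alpine:3.11 ip -6 addr

# Or globally, in containers.conf:
#   [engine]
#   network_cmd_options = ["enable_ipv6=true"]
```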
@m3nu Did you test it with podman v4? |
No, didn't try Podman v4 yet. But what's the reason for not having it on by default?
It will be the default for podman v4.0 |
Upstream CNI has no interest in supporting this; it has been a legacy issue. Please try with netavark to see if it is fixed there. |
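For anyone trying that, the backend can be switched in containers.conf on podman 4.x (a sketch; switching backends resets existing container networks, so test on a scratch machine):

```
# /etc/containers/containers.conf (or ~/.config/containers/containers.conf):
#   [network]
#   network_backend = "netavark"

# Confirm which backend is in use
podman info --format '{{.Host.NetworkBackend}}'
```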
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Since I've recently gotten IPv6 connectivity set up on my CentOS 7 host, i wanted to start taking advantage of it and see how well Podman supports IPv6. This is as root (so not rootless).
After adding the relevant IPv6 section to /etc/cni/net.d/87-podman-bridge.conflist per the docs pages, starting a container causes a ping running on the host to start failing with ping: sendmsg: Network is unreachable.
Steps to reproduce the issue:

1. add a 2nd array to the original /etc/cni/net.d/87-podman-bridge.conflist as .plugins[0].ipam.ranges[1] (diff included below)
2. start a ping on the host: ping6 ipv6.google.com ; responses start coming in as expected
3. start a container (as root): sudo podman run -it --rm docker.io/library/alpine:3.11
4. see the ping start failing with errors: ping: sendmsg: Network is unreachable
5. additionally, the container cannot reach the internet either, and the only way to fix host ipv6 connectivity is to systemctl restart network
Describe the results you received:
a.) the host ipv6 networking is greatly affected (in effect being completely broken), this is really bad as it could cause an outage
b.) the container still has no ipv6 connectivity either
Describe the results you expected:
container should simply be able to reach the ipv6 network and the host ipv6 networking should not be affected at all!
Additional information you deem important (e.g. issue happens only occasionally):
issue always happens
a diff of the changes i made to 87-podman-bridge.conflist (adding my ipv6 GUA)
note: this happens even if i don't update the routes section (though ultimately i'd like my container reachable on the internet).
output of running podman with --log-level=debug
here are some log entries from /var/log/messages when starting the container
additionally, this is the output of running `ip monitor` that shows all network related changes
for the heck of it, here are the above three interleaved along with a flood ping (interval 10ms) at my gateway, to indicate exactly _when_ the ipv6 functionality on the host breaks
per the above, the part that looks of interest to me is when the actual pinging starts failing:
also output of `podman version`:
note: i am running podman 1.9 patched with #6025, but it should not make any difference as the issue being discussed here does not involve rootless mode.
also output of `podman info --debug`:
Package info (e.g. output of rpm -q podman or apt list podman):

Additional environment details (AWS, VirtualBox, physical, etc.):
This is a libvirtd/KVM guest running CentOS 7 (whose hypervisor is a physical rackmount host)