
No network at docker container level (whereas everything is OK inside flatcar server). #1552

Closed
SR-G opened this issue Sep 30, 2024 · 26 comments
Labels
kind/bug Something isn't working

Comments

SR-G commented Sep 30, 2024

Description

I have a NUC running Flatcar, installed one year ago and never rebooted since, with nothing but a Docker installation on it.
After today's reboot I am now on the latest Flatcar image, but I can no longer reach the ports exposed by my Docker containers.

Impact

Running docker containers are unusable.

Environment and steps to reproduce

Not sure this will be enough to "reproduce", but here are the symptoms.

FLATCAR :

  • running on an x86 NUC
  • booted from a USB key containing ipxe.iso, which reaches a "netbootd" server exposing the config, manifests, etc. for the corresponding MAC address, plus a mirror of the Flatcar image
  • Flatcar customization is very light (NVMe attached, SSH keys deployed ... that's all)
  • current version is: Linux FLATCAR-SERVER-1 6.6.48-flatcar #1 SMP PREEMPT_DYNAMIC Wed Sep 4 15:49:08 -00 2024 x86_64 AMD Ryzen 5 3450U with Radeon Vega Mobile Gfx AuthenticAMD GNU/Linux

CONTEXT :

  • Last reboot was one year ago!
  • I rebooted today and now have a newer Flatcar version - everything restarted fine after the reboot, including the containers

SYMPTOMS :

  • containers are running - example: 3887f9e5c368 victoriametrics/victoria-metrics "/victoria-metrics-p…" 37 minutes ago Up 20 minutes 0.0.0.0:8428->8428/tcp victoria-metrics (note that port 8428 is exposed and should be accessible from outside the container)
  • if I curl the exposed port from Flatcar, it does not work!: # curl 127.0.0.1:8428 curl: (56) Recv failure: Connection reset by peer
  • of course the same happens if I try from another server on the same home network
  • same with localhost:8428
  • if I enter the container with docker exec -it victoria-metrics /bin/ash, then I also have NO network there:
/ # ping 1.1.1.1 # <---------- this should have worked ! (it's working from the host = from flatcar)
PING 1.1.1.1 (1.1.1.1): 56 data bytes
^C
--- 1.1.1.1 ping statistics ---
20 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.8.190 # <---------- this is the IP of the host
PING 192.168.8.190 (192.168.8.190): 56 data bytes
^C
--- 192.168.8.190 ping statistics ---
6 packets transmitted, 0 packets received, 100% packet loss
/ # ping 192.168.8.1  # <---------- this is the router
PING 192.168.8.1 (192.168.8.1): 56 data bytes
^C
--- 192.168.8.1 ping statistics ---
3 packets transmitted, 0 packets received, 100% packet loss

(every ping hangs for a while, so I Ctrl-C it)

What I tried (without luck):

  • Rebooting Flatcar (does not help)
  • Exporting all containers, stopping docker, deleting the /var/lib/docker content in order to restart the Docker side from scratch: after reboot, Docker recreates everything (including the network) but still no luck!
  • Analyzing docker inspect, especially the network part: nothing wrong!
  • Disabling IPv6 (per Flatcar configuration, just in case) with an extra entry in the list of kernel parameters at boot: IPv6 is correctly disabled, but still no luck
  • Of course I tried with other containers (NGINX, NETDATA, ...): exact same problem
  • Starting the container with --net=host (so no ports are published): here, of course, everything works (the port is accessible from the host / from other servers, and from inside the container I can reach any outside IP) - but of course this is not a proper / possible solution

As far as I can tell, I would really say it is somehow Flatcar related and not Docker related ... but I am really stuck / I have no idea what may be wrong.

Expected behavior

Container networking should work (exposed ports reachable, outbound connectivity from inside the containers).

Additional information

ifconfig (there are 2 RJ45 ports, only one is plugged in at this time):

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:80:98:0d:80  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1  bytes 333 (333.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.8.190  netmask 255.255.255.0  broadcast 192.168.8.255
        ether 1c:83:41:29:50:a5  txqueuelen 1000  (Ethernet)
        RX packets 14509  bytes 1357301 (1.2 MiB)
        RX errors 0  dropped 310  overruns 0  frame 0
        TX packets 7739  bytes 927912 (906.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

enp4s0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        ether 1c:83:41:29:50:a6  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 28  bytes 2239 (2.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 28  bytes 2239 (2.1 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

veth2c0e65d: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        ether 96:f6:07:07:c9:8f  txqueuelen 0  (Ethernet)
        RX packets 32  bytes 3259 (3.1 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32  bytes 10074 (9.8 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

docker inspect (NetworkSettings section):

        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "b8a190a887024e9556004186b7b4fc72864ebc296f092c02096764d86cf28640",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {
                "8428/tcp": [
                    {
                        "HostIp": "0.0.0.0",
                        "HostPort": "8428"
                    }
                ]
            },
            "SandboxKey": "/var/run/docker/netns/b8a190a88702",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "a4193978e3cf39f76460767a89325ae9de24265d57e5de0da2eacb21cc394a26",
            "Gateway": "172.17.0.1",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "172.17.0.2",
            "IPPrefixLen": 16,
            "IPv6Gateway": "",
            "MacAddress": "02:42:ac:11:00:02",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "7920200b13641722ddfefba9257ad61f0eb1f87da2daa7d13dd2ed718773d722",
                    "EndpointID": "a4193978e3cf39f76460767a89325ae9de24265d57e5de0da2eacb21cc394a26",
                    "Gateway": "172.17.0.1",
                    "IPAddress": "172.17.0.2",
                    "IPPrefixLen": 16,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "02:42:ac:11:00:02",
                    "DriverOpts": null
                }
            }
        }

(I compared everything from that JSON with other containers in a non-Flatcar environment, and found no special / structural differences)

/etc/hosts (nothing special):

# /etc/hosts: Local Host Database
#
# This file describes a number of aliases-to-address mappings for the for 
# local hosts that share this file.
#
# The format of lines in this file is:
#
# IP_ADDRESS	canonical_hostname	[aliases...]
#
#The fields can be separated by any number of spaces or tabs.
#
# In the presence of the domain name service or NIS, this file may not be 
# consulted at all; see /etc/host.conf for the resolution order.
#

# IPv4 and IPv6 localhost aliases
127.0.0.1	localhost
::1		localhost

#
# Imaginary network.
#10.0.0.2               myname
#10.0.0.3               myfriend
#
# According to RFC 1918, you can use the following IP networks for private 
# nets which will never be connected to the Internet:
#
#       10.0.0.0        -   10.255.255.255
#       172.16.0.0      -   172.31.255.255
#       192.168.0.0     -   192.168.255.255
#
# In case you want to be able to connect directly to the Internet (i.e. not 
# behind a NAT, ADSL router, etc...), you need real official assigned 
# numbers.  Do not try to invent your own network numbers but instead get one 
# from your network provider (if any) or from your regional registry (ARIN, 
# APNIC, LACNIC, RIPE NCC, or AfriNIC.)
#

netstat looks fine:

netstat -a -n -p |grep 8428
tcp        0      0 0.0.0.0:8428            0.0.0.0:*               LISTEN      2179/docker-proxy   

and

 ps -ef|grep docker-proxy
root        2179    1391  0 22:34 ?        00:00:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8428 -container-ip 172.17.0.2 -container-port 8428
jepio (Member) commented Oct 1, 2024

Check the output of these on host:

sudo iptables -vnL
sysctl net.ipv4.ip_forward
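
(For reference: if forwarding had turned out to be disabled, it could be re-enabled temporarily with something like the following; as the next comment shows, it was already set to 1 here:)

sudo sysctl -w net.ipv4.ip_forward=1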

SR-G (Author) commented Oct 1, 2024

Yeah, I found some info related to "ip_forward" yesterday and checked it at that time: it looks OK ("ip_forward" = 1).

I am not able to interpret the firewall rules ... (but how could they be wrong, since they have not been altered in any way = I have not configured or tried anything in that area).

Here are the results:

FLATCAR-SERVER-1 ~ # iptables -vnL
Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   46  3864 DOCKER-USER  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   46  3864 DOCKER-ISOLATION-STAGE-1  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
   23  1932 ACCEPT     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
    0     0 DOCKER     all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 ACCEPT     all  --  docker0 docker0  0.0.0.0/0            0.0.0.0/0           

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain DOCKER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     tcp  --  !docker0 docker0  0.0.0.0/0            172.17.0.2           tcp dpt:8428

Chain DOCKER-ISOLATION-STAGE-1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DOCKER-ISOLATION-STAGE-2  all  --  docker0 !docker0  0.0.0.0/0            0.0.0.0/0           
   46  3864 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-ISOLATION-STAGE-2 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 DROP       all  --  *      docker0  0.0.0.0/0            0.0.0.0/0           
    0     0 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           

Chain DOCKER-USER (1 references)
 pkts bytes target     prot opt in     out     source               destination         
   46  3864 RETURN     all  --  *      *       0.0.0.0/0            0.0.0.0/0           


FLATCAR-SERVER-1 ~ # sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1

ader1990 commented Oct 1, 2024

Hello,

I would suggest testing a minimal scenario and seeing whether it works with the new Flatcar image you have.
Maybe there is an issue somewhere else that is not clearly visible.

Can you try to start a new nginx container with an exposed port (12301 - chosen randomly), and then curl it:

docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
sleep 10
curl http://localhost:12301

These steps worked on my clean Flatcar env. If it does not work on yours, we can compare the network details.

Thanks.

SR-G (Author) commented Oct 1, 2024

I have just tried, and still the same behavior:

FLATCAR-SERVER-1 ~ # docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
Digest: sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
Status: Downloaded newer image for nginx:latest
75a7fe4439dac8085de6b8d0ca4ff8263d264003699dd0c27ce91b80ade83dcd


FLATCAR-SERVER-1 ~ # sleep 10


FLATCAR-SERVER-1 ~ # curl http://localhost:12301
curl: (56) Recv failure: Connection reset by peer

ader1990 commented Oct 1, 2024

Do you happen to know which Flatcar version you upgraded from, to help reproduce the issue?

SR-G (Author) commented Oct 1, 2024

I can post my Ignition details, but I don't think it's really related (however ... maybe it was OK with that old revision and is not anymore with the new versions? For example: since this is a local network / homelab, I am exposing the Docker port 2375 outside of the host):

Ignition config (slightly simplified here; non-related parts like SSH keys removed):

passwd:
  users:
    - name: root
      password_hash: "REMOVED"
      ssh_authorized_keys: "REMOVED"

storage:
  links:
    # Set proper timezone
    - path: /etc/localtime
      filesystem: root
      overwrite: true
      target: /usr/share/zoneinfo/Europe/Paris

  files:
    # we force same SSH config all the time
    - path: /etc/hostname
      filesystem: root
      mode: 0644      
      contents:
        inline: FLATCAR-SERVER-1
    - path: /etc/ssh/ssh_host_dsa_key(extra_sections_REMOVED)

systemd:
  units:

    # Mount NVME to /var/lib/docker
    # Has to be partitioned & formatted manually before !
    - name: var-lib-docker.mount
      enabled: true
      contents: |
        [Unit]
        Description=Mount NVME to /var/lib/docker
        Before=local-fs.target
        [Mount]
        What=/dev/nvme0n1p1
        Where=/var/lib/docker
        Type=ext4
        [Install]
        WantedBy=local-fs.target

    # We want docker (service) to be automatically started instead of starting docker
    # when needed through its socket
    # - name: docker.socket
    #   enabled: false

    - name: docker.service
      enabled: true
      dropins:
        - name: 10-wait-docker.conf
          contents: |
            [Unit]
            After=var-lib-docker.mount
            Requires=var-lib-docker.mount            

    - name : containerd.service
      enabled: true

    # Expose docker socket over tcp = over the network
    - name: docker-tcp.socket
      enabled: true
      contents: |
        [Unit]
        Description=Docker Socket for the API

        [Socket]
        ListenStream=2375
        BindIPv6Only=both
        Service=docker.service

        [Install]
        WantedBy=sockets.target
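
(For reference, with this socket unit active the Docker daemon becomes reachable over plain TCP; a minimal usage sketch, assuming the host IP from the manifest below:)

docker -H tcp://192.168.8.190:2375 ps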

The "manifest" exposing the image (i just added here at the end the disabling of IPv6) :

---
# This example manifest boots Ubuntu 20.04 into ram using tmpfs-mounted root filesystem downloaded over HTTP
id: flatcar-server-1
# ipv4: 192.168.17.103/24
ipv4: 192.168.8.190/24
hostname: flatcar-server-1
domain: test.local
leaseDuration: 1h

# many values are possible because a single machine may have multiple interfaces
# and it may not be known which one boots first
mac:
  - 1c:83:41:29:50:a5
  - 1c:83:41:29:50:a6

# in the "order of preference"
dns:
  - 8.8.8.8
  - 8.8.4.4

# in the "order of preference"
router:
  - 192.168.8.1

# in the "order of preference"
ntp:
  - 192.168.8.1

ipxe: true
bootFilename: install.ipxe

mounts:
  - path: /images
    pathIsPrefix: true
    proxy: http://stable.release.flatcar-linux.net/amd64-usr/current
    appendSuffix: true
    # localDir: /opt/netbootd/images/

  - path: /common
    pathIsPrefix: true
    localDir: /opt/netbootd/configs/common
    appendSuffix: true

  - path: /configs
    # When true, all paths starting with this prefix use this mount.
    pathIsPrefix: true
    # Provides a path on the host to find the files.
    # So that localDir: /tftpboot path: /subdir and client request: /subdir/file.x so that the host
    # path becomes /tfptboot/file.x
    localDir: /opt/netbootd/configs/flatcar-server-1
    # When true, the localDir path defined above gets a suffix to the Path prefix appended to it.
    appendSuffix: true

  - path: /install.ipxe
    content: |
      #!ipxe
      # set base-url http://stable.release.flatcar-linux.net/amd64-usr/current
      set base-url {{ .HttpBaseUrl }}/images
      kernel ${base-url}/flatcar_production_pxe.vmlinuz initrd=flatcar_production_pxe_image.cpio.gz flatcar.first_boot=1 ignition.config.url={{ .HttpBaseUrl.String }}/configs/flatcar.ign ipv6.disable=1
      initrd ${base-url}/flatcar_production_pxe_image.cpio.gz
      boot     

SR-G (Author) commented Oct 1, 2024

Do you happen to know which Flatcar version you upgraded from, to help reproduce the issue?

Sadly, no ... I was wondering the same thing (I would have been willing to force the use of that old revision - I think it's possible - but I don't know which one it was).

ader1990 commented Oct 1, 2024

  • [...] Start the container with --net=host (so no ports exposed): here of course, everything works (the port is accessible from the host / from other servers, and from inside the container I can reach any outside IP) - but of course this is not a proper / possible solution

Maybe you can point to the approximate time of the first install; I see you were using the stable channel.

ader1990 commented Oct 1, 2024

From the state of the iptables, veth* and port mappings, all looks correct; maybe this is an underlying issue with containerd (I can only guess). From inside the docker container, can you ping the docker gateway? That is, if you have a network interface attached. Can you share the output of ip a from inside the container?

Also, if you can share the full journalctl output, there might be some error / failure log that can pinpoint the issue.

SR-G (Author) commented Oct 1, 2024

So the last time I touched my Flatcar / Ignition configuration files (on the (non-Flatcar) NAS exposing images & configuration) was around the end of May 2023.

I think it has been rebooted once or twice after that (without issues at that time), so I would say the image in use dated from around July 2023 (give or take).

SR-G (Author) commented Oct 1, 2024

From the state of the iptables, veth* and port mappings, all looks correct; maybe this is an underlying issue with containerd (I can only guess). From inside the docker container, can you ping the docker gateway? That is, if you have a network interface attached. Can you share the output of ip a from inside the container?

Also, if you can share the full journalctl output, there might be some error / failure log that can pinpoint the issue.

From inside a container :

FLATCAR-SERVER-1 ~ # docker exec -it victoria-metrics /bin/ash
/ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # 

Pinging the Docker gateway also fails:

/ # ping 172.17.0.1
PING 172.17.0.1 (172.17.0.1): 56 data bytes
^C
--- 172.17.0.1 ping statistics ---
20 packets transmitted, 0 packets received, 100% packet loss
/ # 

journalctl output of the host (= Flatcar): https://gist.github.com/SR-G/41fb3d48d728b321d9c5b42967d87e4e
(taken after a fresh reboot, extracted with journalctl -S today --no-tail)

ader1990 commented Oct 1, 2024

So the last time I touched my Flatcar / Ignition configuration files (on the (non-Flatcar) NAS exposing images & configuration) was around the end of May 2023.

I think it has been rebooted once or twice after that (without issues at that time), so I would say the image in use dated from around July 2023 (give or take).

The USR-B partition should have more information on when the image was created, I think. Can you mount the 4th partition (USR-B), and see the oldest file timestamps?

https://www.flatcar.org/docs/latest/reference/developer-guides/sdk-disk-partitions/
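
(For illustration, a rough sketch of that check, assuming the Flatcar boot disk shows up as /dev/sda - the device name and partition layout will differ per machine:)

sudo mkdir -p /mnt/usr-b
sudo mount -o ro /dev/sda4 /mnt/usr-b
find /mnt/usr-b -type f -printf '%T+ %p\n' | sort | head
sudo umount /mnt/usr-b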

SR-G (Author) commented Oct 1, 2024

Hmm, I'm confused - how would I do that?

Indeed, in my situation I'm running fully from iPXE: I have nothing installed locally / no local Flatcar partitions (I just have one NVMe disk, which is mounted on /var/lib/docker - so not really "Flatcar related", it just persists volumes and container images - and nothing more; the Flatcar system lives only in memory / is served over iPXE without a local installation).

ader1990 commented Oct 1, 2024

Hmm, I'm confused - how would I do that?

Indeed, in my situation I'm running fully from iPXE: I have nothing installed locally / no local Flatcar partitions (I just have one NVMe disk, which is mounted on /var/lib/docker - so not really "Flatcar related", it just persists volumes and container images - and nothing more; the Flatcar system lives only in memory / is served over iPXE without a local installation).

Gotcha - in that case there is no leftover information.

Meanwhile, I have tried running Flatcar stable 3510.2.2 (from around May 2023), started the nginx docker container, verified it works, then upgraded to the latest stable 3975.2.1, verified again, and it works as expected.

SR-G (Author) commented Oct 1, 2024

And by the way, that's why I'm even more surprised it's not working: after each reboot, since everything is fetched "on the fly", I would have expected to be in a nearly "fresh start" state ...

That is also why I tried to blank the /var/lib/docker partition (so that Docker would also start from scratch, in case the Docker networks had been corrupted for any reason).

ader1990 commented Oct 1, 2024

It seems I cannot reproduce the issue: I tried removing /var/lib/docker/network/files/local-kv.db and then rebooting, restarting docker and restarting the container, and the issue was not present.

What you can try is to make sure that the environment is cleaned up at the Linux level. I can suggest the following:

  • stop and disable docker and containerd services
  • cleanup the /var/lib/docker
  • remove the docker bridge, veth interfaces and flush/clean all the iptables
  • reboot
  • restart containerd and docker and see if the issue reproduces

Something similar to (these commands are examples, please be cautious and run them at your own risk):

systemctl stop docker
systemctl stop containerd

systemctl disable docker
systemctl disable containerd

# nuke /var/lib/docker

# unmount /var/lib/docker and make sure there is no file in the unmounted /var/lib/docker
# there might be files there made before the nvme partition was mounted and that might be an issue

# (replace veth* with the actual veth interface name(s) shown by `ip link`; the wildcard is not expanded automatically)
brctl delif docker0 veth*

brctl delbr docker0

ip link delete veth*

iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -t nat -F
iptables -t mangle -F
iptables -F
iptables -X

ip6tables -P INPUT ACCEPT
ip6tables -P FORWARD ACCEPT
ip6tables -P OUTPUT ACCEPT
ip6tables -t nat -F
ip6tables -t mangle -F
ip6tables -F
ip6tables -X

# run reboot

SR-G (Author) commented Oct 1, 2024

I applied all these commands and ... still (nearly, see below) the same behavior.

Also, since I'm starting to think the problem may be in my IGN definition (generated through Butane), I removed nearly everything from it, in particular the exposure of the Docker TCP port 2375 over the network (just in case ...) and re-disabled "docker.socket":

systemd:
  units:

    # Mount NVME to /var/lib/docker
    # Has to be partitioned & formatted manually before !
    - name: var-lib-docker.mount
      enabled: true
      contents: |
        [Unit]
        Description=Mount NVME to /var/lib/docker
        Before=local-fs.target
        [Mount]
        What=/dev/nvme0n1p1
        Where=/var/lib/docker
        Type=ext4
        [Install]
        WantedBy=local-fs.target

    # We want docker (service) to be automatically started instead of starting docker
    # when needed through its socket
    - name: docker.socket
      enabled: false

    - name: docker.service
      enabled: true
      dropins:
        - name: 10-wait-docker.conf
          contents: |
            [Unit]
            After=var-lib-docker.mount
            Requires=var-lib-docker.mount            

    - name : containerd.service
      enabled: true

What is now definitely different is that before, I was getting the "connection reset by peer" (as shown above) after a few seconds (1 or 2):

# curl localhost:12301
curl: (56) Recv failure: Connection reset by peer

Whereas now it takes much longer before failing:

# time curl -vvv http://localhost:12301
* Host localhost:12301 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
*   Trying 127.0.0.1:12301...
* Connected to localhost (127.0.0.1) port 12301
> GET / HTTP/1.1
> Host: localhost:12301
> User-Agent: curl/8.7.1
> Accept: */*
> 
* Request completely sent off
* Recv failure: Connection reset by peer
* Closing connection
curl: (56) Recv failure: Connection reset by peer

real	2m13.497s
user	0m0.006s
sys	0m0.013s

(so now about 2 minutes before timing out ...)

So it's clear that this only happens in a very specific situation (mine), but I'm really struggling to understand where it could come from ...

SR-G (Author) commented Oct 1, 2024

So I had a lot of trouble testing with older Flatcar versions (detailed at the end of this post, in case it helps, though I don't think it's really related).

In the end, I was able to revert successfully to an older one: 3374.2.4 (which is probably older than what I was on before!)

And guess what: with that older version, everything works perfectly out of the box (using the snippet you provided before, plus the "cleaning" of everything to start from scratch just before rebooting into that version):

FLATCAR-SERVER-1 ~ # docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
Digest: sha256:b5d3f3e104699f0768e5ca8626914c16e52647943c65274d8a9e63072bd015bb
Status: Downloaded newer image for nginx:latest
4fc7efe1b3d53ba9210b644d11623d1b88d9137ead4622ad65d09a03b6ec7c5a

sleep 10

FLATCAR-SERVER-1 ~ # curl http://localhost:12301
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

And it's not working on :

  • "current"
  • "3975.2.0" (i've also been able to test that one)

I tried a few intermediate versions, but they do not boot - I don't know why yet (no HDMI monitor is plugged into the NUC) - for example:

  • "3815.2.0" => does not boot (not even after clearing everything in /var/lib/docker, ...)
  • "3815.2.5" => same

So: I have no idea what, but "something" changed in Flatcar between 3374.2.4 (everything works) and 3975.2.0 (I have the issue) that triggers the problem in my specific situation (the issue is not reproducible elsewhere: it may be related to my specific hardware, or my Ignition configuration, or ...). It may also simply be the upgrade of the Docker daemon to a newer version between these two Flatcar versions, of course.


Regarding the "downgrade" issues (even after having "blanked" everything):

  • if I use a tool on my NAS to proxy the HTTPS URL of the Flatcar repository (= this is what I was doing until now):
    • the download from stable.release.flatcar-linux.net IS working with "current/"
    • but if I test with any older version, then I get this error when iPXE tries to fetch the images: 2024/10/01 19:45:57 http: proxy error: x509: certificate signed by unknown authority => I have to admit I don't see any reason / I really don't know why (it's the same URL, so it should not differ) (but it's probably more related to the tool I use to expose (by proxying) the images and the Ignition configuration = the tool is "netbootd" and runs on my NAS in a docker image)
  • so I had to fall back to some "manual downloads" (wget ...) = the images are then no longer retrieved remotely during boot, but are taken from the local disks of my NAS (= the "netbootd" software no longer proxies the remote URLs, but serves local files) (just to work around the x509 error encountered in netbootd ...)
    • and this is where I hit the "stuck during boot" errors with some versions

ader1990 commented Oct 2, 2024

Hello, I have tried to reproduce the issue using 3374.2.4 as the base and then 3975, but everything worked fine.

It is also possible that you do not need to disable the docker.socket if you enable the docker service, as the docker service will start the docker.socket anyway.

It might also be that something is happening at the hardware layer, which is at times impossible to troubleshoot. I would suggest booting a live Ubuntu on the NUC and running fwupdmgr refresh && fwupdmgr update, and maybe also checking whether a BIOS update is available. It has happened to me several times that random issues appeared and only firmware upgrades solved them.

The proxy certificate issue might be due to an outdated ca-certificates package on that proxy host.

"3815.2.0" not booting, would be great to see some logs .

I think I can get a NUC to test things out, as I also need to check #1306 and can do it at the same time. Can you share the NUC type, if possible? Oh, from the AMD Ryzen 5 3450U with Radeon Vega Mobile Gfx AuthenticAMD GNU/Linux - this is not an original Intel NUC.

Thanks.

SR-G (Author) commented Oct 2, 2024

So a few updates.

I tested on a second NUC (not exactly the same model, but still rather close) (I'll list the models in a follow-up post).

And I have the EXACT same behavior / same problem.

About the "won't boot" issue

I don't have it anymore.
Since you suggested (regarding another point, the x509 certificate issue) that maybe something was outdated, I rebuilt the "netbootd" docker image hosted on my NAS (the one exposing the Ignition config, images, ...).
And I think that, as a side effect, this solved the issue of images not booting, as now all old boot images load fine (as far as I can tell).

  • Unless I'm wrong and it comes from something else (the fact that I first launched a very old version, maybe?)
  • For now, unless it happens again, I fear I won't be able to reproduce it / take pictures of an HDMI monitor frozen during boot (and as a reminder, when the boot was not working, I was not able to SSH into the boxes)

About the exact versions not working

Per the previous point, as it now seems I'm able to boot any older version, I've been able to confirm (on both NUCs) that the problem definitely appears starting with 3815.2.0 (and that the previous version, 3760.2.0, works / does not have the issue).

  • With version 3760.2.0, you can see both the version and that it works (still following the same test "procedure"):
% ssh [email protected]
Flatcar Container Linux by Kinvolk stable 3760.2.0
Update Strategy: No Reboots
 _____ _        _  _____ ____    _    ____  
|  ___| |      / \|_   _/ ___|  / \  |  _ \ 
| |_  | |     / _ \ | || |     / _ \ | |_) |
|  _| | |___ / ___ \| || |___ / ___ \|  _ < 
|_|   |_____/_/   \_\_| \____/_/   \_\_| \_
 BESSTAR TECH LIMITED UM250/UM250, BIOS 5.13 01/07/2021
 18:53:31 up 0 min,  1 user,  load average: 0.15, 0.03, 0.01
FLATCAR-SERVER-2 ~ # 
FLATCAR-SERVER-2 ~ # 
FLATCAR-SERVER-2 ~ # docker stop test-exposed-ports && docker rm test-exposed-ports
docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
sleep 10
curl http://localhost:12301
test-exposed-ports
test-exposed-ports
c15d8c40fa9d6c4361d3283d97d6ed63a3b76cefd8af5c3892718033fa7738f6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
FLATCAR-SERVER-2 ~ # 
FLATCAR-SERVER-2 ~ # docker --version
Docker version 20.10.24, build e78084afe5
  • With version 3815.2.0, it's not working :
% ssh [email protected]
Flatcar Container Linux by Kinvolk stable 3815.2.0
Update Strategy: No Reboots
 _____ _        _  _____ ____    _    ____  
|  ___| |      / \|_   _/ ___|  / \  |  _ \ 
| |_  | |     / _ \ | || |     / _ \ | |_) |
|  _| | |___ / ___ \| || |___ / ___ \|  _ < 
|_|   |_____/_/   \_\_| \____/_/   \_\_| \_
 BESSTAR TECH LIMITED UM250/UM250, BIOS 5.13 01/07/2021
 18:50:06 up 0 min,  1 user,  load average: 0.23, 0.05, 0.02
FLATCAR-SERVER-2 ~ # 
FLATCAR-SERVER-2 ~ # docker stop test-exposed-ports && docker rm test-exposed-ports
docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
sleep 10
curl http://localhost:12301
test-exposed-ports
test-exposed-ports
7229d60742d3631158149a50dda2a9c7336e40218186446d0763d21a144fc2d0
curl: (56) Recv failure: Connection reset by peer
FLATCAR-SERVER-2 ~ # docker --version
Docker version 24.0.9, build 2936816130

This is:

  • Always like that (= reproducible on my side / in my exact context / with my hardware)
  • Exactly the same behavior on two different machines (again, very similar but not exactly the same model)

Interestingly enough, the Docker daemon was updated between these two Flatcar versions: from 20.10.24 to 24.0.9.

Next steps

  1. I still have to test booting a live CD and trying the firmware upgrade
    (and I'm already on the latest BIOS version)
  2. Is there a Flatcar "dev" version with a newer Docker version that I could test?

SR-G (Author) commented Oct 2, 2024

About the NUCs I'm using

They are indeed not Intel "NUC"s.
They are cheap "Minisforum" ones, probably no longer orderable as they were bought ~2 years ago.

  1. The first one is a "Minisforum UM340"
  2. The second one (tested today) is a "Minisforum UM250"

jepio (Member) commented Oct 3, 2024

Can you compare the output of networkctl on the host between the working and broken versions?

One of the comments showed ip link output from inside the container where the link has an M-DOWN flag, which I've never seen before and is entirely unexpected.

I'm going to say the problem is that the networkd config that we use for PXE is trying to manage the veth device on the host, and that's what is causing your problem. If you want to confirm quickly, then create the following file through Ignition:
/etc/systemd/network/yy-pxe.network:

[Match]
Name=*
KernelCommandLine=!root
Type=!loopback bridge tunnel vxlan wireguard
Driver=!veth dummy

[Network]
DHCP=yes
KeepConfiguration=dhcp-on-stop
IPv6AcceptRA=true

[DHCP]
ClientIdentifier=mac
UseMTU=true
UseDomains=true

If this fixes it, then I'm right and we need to include the fix.
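
(For reference, a minimal sketch of how such a file could be shipped, modelled on the storage.files entries already used in the reporter's Butane/Ignition config above; the exact schema keys depend on the Butane/Ignition version in use:)

storage:
  files:
    - path: /etc/systemd/network/yy-pxe.network
      filesystem: root
      mode: 0644
      contents:
        inline: |
          [Match]
          Name=*
          KernelCommandLine=!root
          Type=!loopback bridge tunnel vxlan wireguard
          Driver=!veth dummy

          [Network]
          DHCP=yes
          KeepConfiguration=dhcp-on-stop
          IPv6AcceptRA=true

          [DHCP]
          ClientIdentifier=mac
          UseMTU=true
          UseDomains=true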

ader1990 commented Oct 3, 2024

Can you compare the output of networkctl on the host between the working and broken versions? [...] If this fixes it, then I'm right and we need to include the fix.

If that is the case, it looks similar to systemd/systemd#28626 and #1515.

I am linking these issues so that we have more insight in the future when testing systemd upgrades - maybe by adding some Mantle tests for these scenarios.

ader1990 commented Oct 3, 2024

@jepio From the logs @SR-G shared: https://gist.github.com/SR-G/41fb3d48d728b321d9c5b42967d87e4e#file-gistfile1-txt-L2786 ->

Oct 01 13:11:22 FLATCAR-SERVER-1 systemd-networkd[1122]: veth0dbe79e: Configuring with /usr/lib/systemd/network/yy-pxe.network.
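
(A quick way to check this directly on an affected host - a hedged sketch, where vethXXXXXXX stands in for whatever veth name networkctl lists:)

networkctl status vethXXXXXXX | grep -i 'network file'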

SR-G (Author) commented Oct 3, 2024

So today's updates.

About networkctl in working / non-working situation

  • Working situation ("old" FLATCAR version / network is working) :
FLATCAR-SERVER-2 /etc # cat /etc/os-release | grep VERSION_ID
VERSION_ID=3760.2.0
FLATCAR-SERVER-2 /etc # 
FLATCAR-SERVER-2 /etc # networkctl 
IDX LINK            TYPE     OPERATIONAL SETUP      
  1 lo              loopback carrier     configured 
  2 enp1s0          ether    routable    configured 
  3 enp2s0          ether    no-carrier  configuring
  4 docker_gwbridge bridge   no-carrier  unmanaged
  5 docker0         bridge   routable    unmanaged
  7 vethe963b34     ether    enslaved    unmanaged
  9 vethebe56aa     ether    enslaved    unmanaged

7 links listed.
  • Non-working situation ("latest / current" FLATCAR version / network is NOT working inside containers) :
FLATCAR-SERVER-2 ~ # cat /etc/os-release | grep VERSION_ID
VERSION_ID=3975.2.1
FLATCAR-SERVER-2 /etc # 
FLATCAR-SERVER-2 ~ # networkctl
IDX LINK            TYPE     OPERATIONAL SETUP      
  1 lo              loopback carrier     configured 
  2 enp1s0          ether    routable    configured 
  3 enp2s0          ether    no-carrier  configuring
  4 docker_gwbridge bridge   no-carrier  configuring
  5 docker0         bridge   no-carrier  configuring
  7 vethba97ce9     ether    carrier     configuring

6 links listed.

About the suggested solution

I have applied the suggested configuration (as a Butane/Ignition file creating /etc/systemd/network/yy-pxe.network):

[screenshot of the applied Butane configuration]

And after rebooting (I'm of course on the latest / current version, and just before this it was not working), networkctl gives:

FLATCAR-SERVER-2 ~ # cat /etc/os-release | grep VERSION_ID
VERSION_ID=3975.2.1
FLATCAR-SERVER-2 ~ # 
FLATCAR-SERVER-2 ~ # networkctl
IDX LINK            TYPE     OPERATIONAL SETUP      
  1 lo              loopback carrier     unmanaged
  2 enp1s0          ether    routable    configured 
  3 enp2s0          ether    no-carrier  configuring
  4 docker_gwbridge bridge   no-carrier  unmanaged
  5 docker0         bridge   routable    unmanaged
  7 vethc86e6ac     ether    enslaved    unmanaged

6 links listed.

And then the good news: from there, everything works perfectly!

FLATCAR-SERVER-2 ~ # docker stop test-exposed-ports && docker rm test-exposed-ports
docker run -d -p 12301:80 --name test-exposed-ports nginx:latest
sleep 10
curl http://localhost:12301
test-exposed-ports
test-exposed-ports
1f0af7f061fb9859a9b53d400278ef8d60df04e5d1e4d8b713b89c9737b289fc
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

SR-G (Author) commented Oct 6, 2024

So I think we can close this, and that you'll (at some point) include the provided fix by default.

Thanks for the valuable help.

SR-G closed this as completed Oct 6, 2024