
sys-apps/systemd: upgrade from version 252 to version 255 #1679

Merged: 7 commits into main on Mar 14, 2024

Conversation

@ader1990 (Contributor) commented Feb 19, 2024

Upgrades systemd 252 to systemd 255.

This is a WIP branch; the following issues are still to be resolved:

  • during the initrd stage, dracut-cmdline-ask.service fails because systemd-vconsole-setup.service fails (/usr/bin/loadkeys fails) - fixed by adding the dracut internationalization (i18n) module
  • check whether the kbd module brings in unnecessary code changes - resolved by sticking with the i18n dracut module, with a stripped-down patch applied to it

Fixes: flatcar/Flatcar#1269
Requires: flatcar/bootengine#87
Requires a mantle fix, or MulticastDNS disabled in systemd-resolved: https://bugs.launchpad.net/cloud-images/+bug/2038894/comments/4. Fixed by disabling MulticastDNS.

@ader1990 ader1990 changed the title from "upgrade to systemd 255 v2" to "sys-apps/systemd: upgrade to systemd 255" on Feb 19, 2024
github-actions bot commented Feb 19, 2024

Test report for 3908.0.0+nightly-20240313-2100 / amd64 arm64

Platforms tested: qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _harness.go:588: Found systemd unit failed to start (?[0;1;39metcd-member.servic????[0mtcd (System Application Container).  ) on machine 81b7b3f4-2def-4e9c-8962-f52d39e40b34 console"
    L3: "harness.go:588: Found systemd unit failed to start (?[0;1;39metcd-member.servic????[0mtcd (System Application Container).  ) on machine 48e7ed63-fbb0-448b-9478-d50728249ec1 console_"
    L4: " "

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.partition_on_boot_disk 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _update.go:91: couldn_t reboot: machine __49aa3c80-48be-483f-92b2-cf2ac5cbcf69__ failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:588: Found systemd unit failed to start (?[0;1;39mldconfig.service?[0m - Rebuild Dynamic Linker Cache.  ) on machine 49aa3c80-48be-483f-92b2-cf2ac5cbcf69 console_"
    L7: " "

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.devicemapper-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:125: I0314 14:04:24.469299    1791 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.27"
    L2: "cluster.go:125: W0314 14:04:24.594509    1791 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.11, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.11"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.11"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.11"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.11"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I0314 14:04:33.818511    1944 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.11"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W0314 14:04:34.180789    1944 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.8?6]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 4.501565 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: e9kzgy.jomqrfqj8i5rppl6"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.86:6443 --token e9kzgy.jomqrfqj8i5rppl6 _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:2c1a076aeae40bd9e89c6a913dcda8b63a9d5c52ffb2e205dcfa733aee724a3e "
    L80: "cluster.go:125: i  Using Cilium version 1.12.5"
    L81: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L82: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L83: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L84: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L85: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L86: "cluster.go:125: ? Created CA in secret cilium-ca"
    L87: "cluster.go:125: ? Generating certificates for Hubble..."
    L88: "cluster.go:125: ? Creating Service accounts..."
    L89: "cluster.go:125: ? Creating Cluster roles..."
    L90: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L91: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L92: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L93: "cluster.go:125: ? Creating Agent DaemonSet..."
    L94: "cluster.go:125: ? Creating Operator Deployment..."
    L95: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L96: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L97: "cluster.go:125: ?[33m    /??_"
    L98: "cluster.go:125: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L99: "cluster.go:125: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L100: "cluster.go:125: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L101: "cluster.go:125: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L102: "cluster.go:125: ?[34m    ___/"
    L103: "cluster.go:125: ?[0m"
    L104: "cluster.go:125: Deployment       cilium-operator    "
    L105: "cluster.go:125: DaemonSet        cilium             "
    L106: "cluster.go:125: Containers:      cilium             "
    L107: "cluster.go:125:                  cilium-operator    "
    L108: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L109: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L110: "--- FAIL: kubeadm.v1.27.2.cilium.base/IPSec_encryption (65.04s)"
    L111: "cluster.go:125: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
    L112: "cluster.go:145: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output ?[33m    /????_"
    L113: "?[36m /?????[33m___/?[32m????_?[0m    Cilium:         ?[32mOK?[0m"
    L114: "?[36m ___?[31m/????_?[32m__/?[0m    Operator:       ?[31m1 errors?[0m, ?[33m1 warnings?[0m"
    L115: "?[32m /?????[31m___/?[35m????_?[0m    Hubble:         ?[36mdisabled?[0m"
    L116: "?[32m ___?[34m/????_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L117: "?[34m    ___/"
    L118: "?[0m"
    L119: "Deployment        cilium-operator    Desired: 1, Unavailable: ?[31m1/1?[0m"
    L120: "DaemonSet         cilium             Desired: 2, Ready: ?[32m2/2?[0m, Available: ?[32m2/2?[0m"
    L121: "Containers:       cilium             Running: ?[32m2?[0m"
    L122: "cilium-operator    Pending: ?[32m1?[0m"
    L123: "Cluster Pods:     5/5 managed by Cilium"
    L124: "Image versions    cilium             quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: 2"
    L125: "cilium-operator    quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: 1"
    L126: "Errors:           cilium-operator    cilium-operator                     1 pods of Deployment cilium-operator are not ready"
    L127: "Warnings:         cilium-operator    cilium-operator-574c4bb98d-bkcdd    pod is pending, status Process exited with status 1_"
    L128: " "
    L129: "  "

ok kubeadm.v1.27.2.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.29.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@ader1990 ader1990 marked this pull request as ready for review February 19, 2024 16:12
@ader1990 (Contributor Author) commented:

Failed test case cl.network.listeners needs to be fixed in mantle https://github.com/flatcar/mantle/blob/bd2e8e962592dac502d46d998808fd50b60365d2/kola/tests/misc/network.go#L182, as systemd-resolved now also listens on UDP port 5353.
If we do not want this, we can remove the listener: https://bugs.launchpad.net/cloud-images/+bug/2038894/comments/4
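For reference, a quick manual check on a booted image (a sketch; it assumes ss and resolvectl are available in the image, which is not verified here):

# Show UDP listeners; with systemd-resolved's mDNS responder enabled,
# port 5353 appears here and trips the mantle listener allow-list.
ss -tulpn | grep 5353

# Inspect the resolved protocol settings (look for +mDNS / -mDNS).
resolvectl status | grep -i mdns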

@ader1990 (Contributor Author) commented:

Required to be merged first: flatcar/bootengine#87.
@pothos suggested adding a new dracut module that installs only the required loadkeys binary, rather than the whole i18n dracut module.
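For illustration, a minimal sketch of what such a dedicated module's module-setup.sh could look like (the module name 99loadkeys is hypothetical; the PR ultimately kept a stripped-down i18n module instead):

#!/bin/bash
# Hypothetical dracut module: modules.d/99loadkeys/module-setup.sh
# Installs only the loadkeys binary needed by systemd-vconsole-setup.

check() {
    # Include the module only if loadkeys exists on the build host.
    require_binaries loadkeys || return 1
    return 0
}

depends() {
    return 0
}

install() {
    # inst_multiple copies the binary plus its library dependencies into the initrd.
    inst_multiple loadkeys
}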

@ader1990 (Contributor Author) commented:

Note on the size increase: amd64-generic-image .zip increased by 5MB and vmlinuz increased by 3MB.

@ader1990 (Contributor Author) commented:

Failed test case cl.network.listeners needs to be fixed in mantle https://github.com/flatcar/mantle/blob/bd2e8e962592dac502d46d998808fd50b60365d2/kola/tests/misc/network.go#L182, as systemd-resolved now also listens on UDP port 5353. If we do not want this, we can remove the listener: https://bugs.launchpad.net/cloud-images/+bug/2038894/comments/4

https://man.archlinux.org/man/resolved.conf.5#OPTIONS

MulticastDNS=

Takes a boolean argument or "resolve". Controls Multicast DNS support (RFC 6762) on the local host. If true, enables full Multicast DNS responder and resolver support. If false, disables both. If set to "resolve", only resolution support is enabled, but responding is disabled. Note that [systemd-networkd.service(8)](https://man.archlinux.org/man/systemd-networkd.service.8.en) also maintains per-link Multicast DNS settings. Multicast DNS will be enabled on a link only if the per-link and the global setting is on.

Added in version 234.
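For comparison, disabling the responder on a running host would look roughly like the drop-in below (a sketch; the drop-in path and file name are illustrative, and the change in this PR bakes the equivalent default into the image instead):

# Create a resolved.conf drop-in that turns off the mDNS responder/listener.
mkdir -p /etc/systemd/resolved.conf.d
cat <<'EOF' >/etc/systemd/resolved.conf.d/10-disable-mdns.conf
[Resolve]
MulticastDNS=no
EOF

# Apply the change; UDP port 5353 should no longer be open afterwards.
systemctl restart systemd-resolved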

@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from 987ab33 to 37746a4 on February 20, 2024 08:20
@jepio (Member) commented Feb 20, 2024

Note on the size increase: amd64-generic-image .zip increased by 5MB and vmlinuz increased by 3MB.

That's a lot for vmlinuz - what exactly is responsible for the size increase in vmlinuz?

@jepio (Member) commented Feb 20, 2024

now also listens on UDP port 5353.
If we do not want this, we can remove the listener

I would disable the listener. 99% of Flatcar users have no use for multicast dns, and homelabbers can enable it manually.

@ader1990 (Contributor Author) commented:

now also listens on UDP port 5353.
If we do not want this, we can remove the listener

I would disable the listener. 99% of Flatcar users have no use for multicast dns, and homelabbers can enable it manually.

37746a4 -> this should disable it, tests are running.

@ader1990 (Contributor Author) commented:

The Flatcar image built by https://github.com/flatcar/scripts/actions/runs/7970403527/job/21757956307?pr=1679 was tested on arm64 using https://github.com/cloudbase/BMK/tree/flatcar_sysext; the deployment was successful:

  • k8s cluster deployed using k8s sysext on two Ampere nodes
  • cilium
  • rook / ceph (2 node ceph cluster)
  • kubevirt (tested with Fedora VMs using ceph pvc)

@ader1990 (Contributor Author) commented:

Note on the size increase: amd64-generic-image .zip increased by 5MB and vmlinuz increased by 3MB.

That's a lot for vmlinuz - what exactly is responsible for the size increase in vmlinuz?

Hello @jepio,

According to my comparison here: #1684 (comment), the systemd and kbd/i18n dracut modules are responsible. That makes sense, as the newer version adds more systemd binaries/services to the initrd (like systemd-executor and systemd-vconsole-setup).

The vmlinuz from https://github.com/flatcar/scripts/actions/runs/7970403527/job/21757956101?pr=1679 is 57236 KB vs 56868 KB for the i18n one, which is only 0.5 MB more.

Basically 1.5 MB more for the systemd upgrade 252 -> 255 (vanilla one is 55860 KB).

This size increase could be reduced by adding a dedicated dracut module for the loadkeys binary instead of installing the dracut i18n module (with the caveat that the new module might introduce technical debt in the future). @jepio @pothos, should I try a dedicated dracut module for this?

Thank you.

@ader1990 (Contributor Author) commented:

(Quoting the previous comment on the vmlinuz size increase and the possible dedicated dracut module for loadkeys.)

Please see #1684 (comment). I suggest going with the upstream-tested systemd + dracut implementation in the case of systemd-vconsole-setup. I am afraid that if I keep fixing the stripped-down implementation, I will end up with a functional implementation that is the same as the upstream one.

@pothos (Member) commented Feb 26, 2024

Please see #1684 (comment). I suggest going with the upstream-tested systemd + dracut implementation in the case of systemd-vconsole-setup. I am afraid that if I keep fixing the stripped-down implementation, I will end up with a functional implementation that is the same as the upstream one.

Makes sense - I guess we should look for other tricks to reduce the size. E.g., we could remove unused kernel modules or compress the kernel more (there are new options in 6.x).
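As a rough sketch of the compression idea, switching the kernel image compressor is a small config change (shown with the in-tree scripts/config helper; XZ as the currently selected compressor and zstd as the replacement are assumptions, and the actual Flatcar kernel config may differ):

# In the kernel source tree used for the Flatcar build:
scripts/config --file .config --disable CONFIG_KERNEL_XZ   # drop the currently selected compressor (assumed)
scripts/config --file .config --enable  CONFIG_KERNEL_ZSTD  # pick a stronger/newer compressor
make olddefconfig    # resolve any dependent options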

@ader1990 (Contributor Author) commented:

Please see #1684 (comment). I suggest going with the upstream-tested systemd + dracut implementation in the case of systemd-vconsole-setup. I am afraid that if I keep fixing the stripped-down implementation, I will end up with a functional implementation that is the same as the upstream one.

Makes sense - I guess we should look for other tricks to reduce the size. E.g., we could remove unused kernel modules or compress the kernel more (there are new options in 6.x).

I think we can revisit this part during the next overhaul of the Flatcar initrd dracut modules, as the early tmpfiles/dev services and maybe more modules need some attention (removal). As it stands, systemd will just keep adding more binaries and size over time as its feature set grows.

@ader1990 ader1990 self-assigned this Feb 27, 2024
@ader1990 (Contributor Author) commented:

PR is ready to be reviewed.

@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from b7b807c to 37746a4 on February 27, 2024 14:30
@pothos (Member) commented Mar 13, 2024

Can you squash the smaller commits that touch the systemd ebuild folder into one "Apply Flatcar changes" patch? When doing so, it would be good to split out the one that also changes the profiles folder. That way we can (after resolving some conflicts) apply the "Apply Flatcar changes" commit again after syncing from upstream.
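The usual local workflow for this is an interactive rebase, sketched here (the branch name is the one used in this PR; the exact todo-list edits depend on the commit layout):

git fetch origin
git checkout ader1990/upgrade-to-systemd-255-v2
git rebase -i origin/main
# In the todo list: keep "Apply Flatcar changes" as "pick" and mark the small
# follow-up ebuild commits as "fixup"/"squash"; leave the profiles/ change as
# its own commit. Then update the PR:
git push --force-with-lease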

@@ -572,7 +562,7 @@ multilib_src_install_all() {
# Flatcar: Use an empty preset file, because systemctl
# preset-all puts symlinks in /etc, not in /usr. We don't use
# /etc, because it is not autoupdated. We do the "preset" above.
rm "${ED}$(usex split-usr '' /usr)/lib/systemd/system-preset/90-systemd.preset" || die
# rm "${ED}$(usex split-usr '' /usr)/lib/systemd/system-preset/90-systemd.preset" || die
Review comment (Member):
Why is this removed? This could be the reason why some enablement symlinks end up in the /etc upperdir

Review comment (Member):
I guess we could make use of presets if we add a step in the image postprocessing where we evaluate the presets to create symlinks from them.
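Such a postprocessing step could be as small as the sketch below (ROOT is a hypothetical variable for the mounted image root; this is not part of the merged change):

# Evaluate all shipped preset files against the image root at build time,
# creating the enablement symlinks there instead of on the running system.
systemctl --root="${ROOT}" preset-all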

Review reply (Contributor Author):
/lib/systemd/system-preset/90-systemd.preset does not exist anymore on this version of systemd.

Review reply (Contributor Author):
Hmm, I think the path is incorrect due to the split-usr usex; need to check again.

Review reply (Contributor Author):
rm: cannot remove '/var/tmp/portage/sys-apps/systemd-255.3/image/lib/systemd/system-preset/90-systemd.preset': No such file or directory - this is the log message.

Review reply (Contributor Author):
/var/tmp/portage/sys-apps/systemd-255.3/image/usr/lib/systemd/system-preset/90-systemd.preset exists.

Review reply (Contributor Author):
@pothos testing now locally with

diff --git a/sdk_container/src/third_party/coreos-overlay/sys-apps/systemd/systemd-255.3.ebuild b/sdk_container/src/third_party/coreos-overlay/sys-apps/systemd/systemd-255.3.ebuild
index 7e8266d7d2..b1584034c7 100644
--- a/sdk_container/src/third_party/coreos-overlay/sys-apps/systemd/systemd-255.3.ebuild
+++ b/sdk_container/src/third_party/coreos-overlay/sys-apps/systemd/systemd-255.3.ebuild
@@ -295,7 +295,7 @@ src_configure() {

 # Flatcar: Our function, we use it in some places below.
 get_rootprefix() {
-       usex split-usr "${EPREFIX:-/}" "${EPREFIX}/usr"
+       "${EPREFIX}/usr"
 }
 multilib_src_configure() {
        local myconf=(
@@ -562,8 +562,8 @@ multilib_src_install_all() {
        # Flatcar: Use an empty preset file, because systemctl
        # preset-all puts symlinks in /etc, not in /usr. We don't use
        # /etc, because it is not autoupdated. We do the "preset" above.
-       # rm "${ED}$(usex split-usr '' /usr)/lib/systemd/system-preset/90-systemd.preset" || die
-       insinto $(usex split-usr '' /usr)/lib/systemd/system-preset
+       rm "${ED}/usr/lib/systemd/system-preset/90-systemd.preset" || die
+       insinto /usr/lib/systemd/system-preset
        doins "${FILESDIR}"/99-default.preset

        # Flatcar: Do not ship distro-specific files (nsswitch.conf

@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from 15db025 to f009704 on March 13, 2024 10:37
@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from f009704 to 47a7148 on March 13, 2024 10:41
@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from 47a7148 to 8aa194c on March 13, 2024 10:43
@ader1990 (Contributor Author) commented:

Can you squash the smaller commits that touch the systemd ebuild folder into one "Apply Flatcar changes" patch? When doing so, it would be good to split out the one that also changes the profiles folder. That way we can (after resolving some conflicts) apply the "Apply Flatcar changes" commit again after syncing from upstream.

I left the 3 commits separate to give a better idea of what the process was. The changes to the profiles folder are related to the kbd package requirement. I will update the PR with the 3 commits squashed, and will split the kbd part in two (kbd enablement and systemd-vconsole-setup fixes) after the systemd preset issue is resolved.

@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from cff259c to fd2df6b on March 13, 2024 13:39
@@ -0,0 +1 @@
- Upgraded systemd from version 252 to version 255 ([flatcar/scripts#1679](https://github.com/flatcar/scripts/pull/1679)).
Review comment (Member):
That entry would rather go into changelog/updates/ as - systemd ([255](https://github.com/systemd/systemd/releases/tag/v255))

Review reply (Contributor Author):
Sure, will do the update. As the two changes are cosmetic, I will wait for the CI to finish before I do the force-push.

@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from fd2df6b to 0de7fff on March 13, 2024 15:50
@pothos (Member) left a review comment:

Thanks a lot, looks good!

@krnowak Can you also do a quick review? I wonder why `usex split-usr '' /usr` didn't work.

sayanchowdhury and others added 7 commits March 14, 2024 12:07
It's from Gentoo commit c923eb13e743b615782a2000cdeafc84db07e533.
It's from Gentoo commit 1367a1498225bc2636c875c8b3c3e7a66d82c000.
systemd-vconsole-setup needs the dracut i18n module so that
the binary loadkeys is present. The binary loadkeys comes from
the kbd package.

A custom dracut module patch for i18n was created, so that only the
default `us` keymap and font are installed, keeping the size
increase to a minimum of around a few KB instead of 3 MB.

Signed-off-by: Adrian Vladu <[email protected]>
systemd-vconsole-setup unit needs sys-apps/kbd loadkeys binary.

Signed-off-by: Adrian Vladu <[email protected]>
@ader1990 ader1990 force-pushed the ader1990/upgrade-to-systemd-255-v2 branch from 425119d to 11449d2 on March 14, 2024 12:08
@ader1990 ader1990 merged commit 8b63d99 into main Mar 14, 2024
7 checks passed
@pothos pothos deleted the ader1990/upgrade-to-systemd-255-v2 branch March 15, 2024 19:18