
Improve git worktree handling when creating SDK containers #1351

Open · wants to merge 5 commits into main

Conversation

@krnowak (Member) commented Nov 7, 2023

This mounts all related worktrees of the scripts repo, and of any extra git repo passed through the `-g` option, into the SDK container, so:

  • git can be used on the worktree without errors
  • worktrees won't be treated as stale and eventually pruned

For more details, please see the commit messages, especially the one that adds the git worktree handling library.

Tested locally.

The library has the following functionality:

- Discover the git repository layout. This figures out where the main
  repository is (the one whose `.git` entry is a directory containing
  all the git configuration and objects) and where all its linked
  worktrees are (their `.git` entries are plain files holding a path
  to their respective worktree metadata directory inside the main
  repo's `.git`). The layout is basically two things: a path to the
  main repo and a map from linked worktree name to the path of that
  linked worktree. The name of a worktree is the `<name>` part in the
  path `<main-repo>/.git/worktrees/<name>`.

- Map the discovered git repository layout to a corresponding layout
  inside the SDK container. The worktrees (both the main one and the
  linked ones) are put into a chosen base directory inside the SDK,
  so the main worktree lands in `<base>/main-repo`, while linked
  worktrees land in `<base>/linked/<name>`. It is also possible to
  override the path of any worktree to put it elsewhere (for example,
  we may want to put the worktree containing the current working
  directory into `/mnt/host/source/src/scripts`, while the rest of
  the worktrees go somewhere under the
  `/mnt/host/all-worktrees/scripts` directory).

- Generate Docker volume options based on both git repository
  layouts. The result is an array of strings of the form `"-v"
  "<path-to-worktree>:<path-to-worktree-inside-SDK>"` that can be
  passed directly to docker/podman (see the host-side sketch below).

- Generate replacements based on the git repository layout inside the
  SDK container. The replacements file is a bash file to be sourced;
  it defines a map from file paths to the new contents those files
  should have. Currently the generated replacements cover the `.git`
  files in the linked worktrees and the `gitdir` files inside the
  main worktree's `.git/worktrees/<name>` directories. The
  replacements file should be mounted inside the SDK container with
  the `-v` option (just like the worktrees).

- Map the replacements to a set of bind-mounts. This action should be
  done inside the SDK container - it sources the replacements file,
  creates temporary files whose contents are the values from the
  replacements map, and bind-mounts them over the locations given by
  the map's keys (see the container-side sketch below).

This is something I did because I needed it in early drafts of the
work. The work has evolved and the need disappeared, but maybe it
makes sense to keep it.
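
For illustration, here is a minimal bash sketch of the host-side flow
(discovery, mapping, and volume option generation). The function
names, the textual layout format, and the absence of per-worktree
path overrides are simplifications of mine, not the library's actual
API:

```bash
#!/bin/bash
set -euo pipefail

# Print the discovered layout of the repo at $1: first the main
# worktree as "main <path>", then one "<name> <path>" line per
# linked worktree.
discover_layout() {
    local repo=${1} main dir gitdir
    # git lists the main worktree first.
    main=$(git -C "${repo}" worktree list --porcelain |
               sed -n 's/^worktree //p' | head -n 1)
    printf 'main %s\n' "${main}"
    for dir in "${main}/.git/worktrees"/*; do
        [[ -d ${dir} ]] || continue
        # <dir>/gitdir contains "<linked-worktree>/.git".
        gitdir=$(<"${dir}/gitdir")
        printf '%s %s\n' "${dir##*/}" "${gitdir%/.git}"
    done
}

# Read a layout on stdin, map it under the base directory ${2}
# inside the SDK, and append "-v <host-path>:<sdk-path>" options to
# the array named by ${1}.
layout_to_volume_opts() {
    local -n opts_ref=${1}
    local base=${2} name path
    while read -r name path; do
        if [[ ${name} = main ]]; then
            opts_ref+=( -v "${path}:${base}/main-repo" )
        else
            opts_ref+=( -v "${path}:${base}/linked/${name}" )
        fi
    done
}

declare -a vol_opts=()
layout_to_volume_opts vol_opts /mnt/host/all-worktrees/scripts \
    < <(discover_layout .)
# vol_opts can now be spliced into a docker or podman invocation,
# e.g.: docker run "${vol_opts[@]}" ...
```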
This makes sure that the worktrees mounted into the SDK container
properly reference other worktrees, so they won't be treated by git as
stale and get cleaned up.
Nothing changes if the scripts repo is standalone and has no linked
worktrees.
They get the same treatment as the scripts repo.
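
To illustrate the container-side step described above, here is a
minimal sketch of applying a replacements file inside the SDK
container. The mount point and the REPLACEMENTS associative array
name are assumptions of mine; the file generated by the library may
use different names:

```bash
#!/bin/bash
set -euo pipefail

# Hypothetical mount point; the sourced file is assumed to define an
# associative array REPLACEMENTS mapping file paths to the contents
# those files should have inside the container.
source /mnt/host/replacements.sh

for path in "${!REPLACEMENTS[@]}"; do
    tmp=$(mktemp)
    printf '%s' "${REPLACEMENTS[${path}]}" >"${tmp}"
    # Bind-mounting a temporary file over the target (requires root,
    # which the SDK container has) means only this container sees the
    # rewritten .git/gitdir paths.
    mount --bind "${tmp}" "${path}"
done
```

Bind-mounting instead of overwriting matters here: the worktrees are
bind-mounted read-write from the host, so writing container paths
into their `.git` and `gitdir` files would corrupt the host checkout,
while a bind-mount keeps the rewrite local to the container.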
@krnowak marked this pull request as ready for review November 7, 2023 15:16
github-actions bot commented Nov 7, 2023

Test report for 3780.0.0+nightly-20231106-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: 'raid.go:259: could not reboot machine: machine "6a6c08ac-7727-4a33-a32e-1684d4a5acad" failed basic checks: some systemd units failed:"
    L3: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:583: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache. ) on machine 6a6c08ac-7727-4a33-a32e-1684d4a5acad console'"
    L7: " "

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: 'oem.go:199: Couldn't reboot machine: machine "070a51d0-5c91-4c10-977a-05617a96b0a8" failed basic checks: some systemd units failed:"
    L3: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:583: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache. ) on machine 070a51d0-5c91-4c10-977a-05617a96b0a8 console'"
    L7: " "

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: 'oem.go:199: Couldn't reboot machine: machine "65cd480f-c4f6-4e79-b54c-f1a460d22a6a" failed basic checks: some systemd units failed:"
    L2: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --"
    L5: "harness.go:583: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache. ) on machine 65cd480f-c4f6-4e79-b54c-f1a460d22a6a console'"
    L6: " "
    L7: "  "

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (2) ❌ Failed: qemu_update-arm64 (1)

                Diagnostic output for qemu_update-arm64, run 1
    L1: "  "
    L2: " Error: 'cluster.go:125: Created symlink /etc/systemd/system/locksmithd.service → /dev/null."
    L3: "update.go:324: Triggering update_engine"
    L4: "update.go:343: Rebooting test machine"
    L5: "update.go:324: Triggering update_engine"
    L6: "update.go:343: Rebooting test machine"
    L7: "update.go:346: reboot failed: machine "614d53b7-f436-49b9-b369-956fd48b3b88" failed basic checks: some systemd units failed:"
    L8: "● ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L9: "status: "
    L10: "journal:-- No entries --"
    L11: "harness.go:583: Found systemd unit failed to start (ldconfig.service - Rebuild Dynamic Linker Cache. ) on machine 614d53b7-f436-49b9-b369-956fd48b3b88 console'"
    L12: " "

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.devicemapper-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.docker-btrfs-compat 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.25.10.calico.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: 'cluster.go:125: I1107 18:48:46.779558    1621 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:125: I1107 18:48:58.477361    1785 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
    L15: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
    L16: "cluster.go:125: [certs] Generating "ca" certificate and key"
    L17: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.112]"
    L19: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
    L20: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
    L21: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating "sa" key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
    L29: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
    L33: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
    L34: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
    L35: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 5.008199 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: 75386y.ew7wzqlsvuzsu5kp"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.112:6443 --token 75386y.ew7wzqlsvuzsu5kp \"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:fcf58719faa97af41fd230dc84d7429d890c313300a81d2aa514be31c302caea "
    L78: "cluster.go:125: namespace/tigera-operator created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L101: "cluster.go:125: serviceaccount/tigera-operator created"
    L102: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: deployment.apps/tigera-operator created"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L106: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L107: "cluster.go:125: installation.operator.tigera.io/default created"
    L108: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L109: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L110: "--- FAIL: kubeadm.v1.25.10.calico.base/nginx_deployment (182.21s)"
    L111: "kubeadm.go:320: nginx is not deployed: ready replicas should be equal to 1: null'"
    L112: " "
    L113: "  "

ok kubeadm.v1.25.10.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (3) ❌ Failed: qemu_uefi-arm64 (1, 2)

                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: 'kubeadm.go:285: unable to setup cluster: unable to create master node: machine "2f7e8806-815b-4a70-b9d7-ca89b0fc32ba" failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10.0.0.7:22: connect: connection refused'"
    L2: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: 'cluster.go:125: I1107 18:44:08.529333    1565 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.10"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L10: "cluster.go:125: I1107 18:44:40.768873    1754 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.26.10"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
    L16: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
    L17: "cluster.go:125: [certs] Generating "ca" certificate and key"
    L18: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
    L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.48]"
    L20: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
    L21: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
    L22: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:125: [certs] Generating "sa" key and public key"
    L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
    L30: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
    L35: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
    L36: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
    L41: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s"
    L42: "cluster.go:125: [apiclient] All control plane components are healthy after 15.004508 seconds"
    L43: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
    L44: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:125: [bootstrap-token] Using token: 86ex4z.79bcj4w2opfo84og"
    L49: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
    L55: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:125: "
    L63: "cluster.go:125:   mkdir -p $HOME/.kube"
    L64: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:125: "
    L67: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:125: "
    L69: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:125: "
    L71: "cluster.go:125: You should now deploy a pod network to the cluster."
    L72: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L73: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: kubeadm join 10.0.0.48:6443 --token 86ex4z.79bcj4w2opfo84og \"
    L78: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:c871213de1327cedfc48778cae628c236782c23adb6e7682af348fe49c16bc49 "
    L79: "cluster.go:125: namespace/tigera-operator created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L102: "cluster.go:125: serviceaccount/tigera-operator created"
    L103: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L105: "cluster.go:125: deployment.apps/tigera-operator created"
    L106: "cluster.go:125: error: .status.conditions accessor error: <nil> is of the type <nil>, expected []interface{}"
    L107: "kubeadm.go:285: unable to setup cluster: unable to run master script: Process exited with status 1'"
    L108: " "

ok kubeadm.v1.26.5.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: 'cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L9: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L10: "cluster.go:125: [preflight] Running pre-flight checks"
    L11: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L12: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L13: "cluster.go:125: [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'"
    L14: "cluster.go:125: W1107 18:52:53.698902    1711 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.k8s.io/pause:3.9" as the CRI sandbox image."
    L15: "cluster.go:125: [certs] Using certificateDir folder "/etc/kubernetes/pki""
    L16: "cluster.go:125: [certs] Generating "ca" certificate and key"
    L17: "cluster.go:125: [certs] Generating "apiserver" certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.70]"
    L19: "cluster.go:125: [certs] Generating "apiserver-kubelet-client" certificate and key"
    L20: "cluster.go:125: [certs] Generating "front-proxy-ca" certificate and key"
    L21: "cluster.go:125: [certs] Generating "front-proxy-client" certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating "sa" key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder "/etc/kubernetes""
    L29: "cluster.go:125: [kubeconfig] Writing "admin.conf" kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing "kubelet.conf" kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing "controller-manager.conf" kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing "scheduler.conf" kubeconfig file"
    L33: "cluster.go:125: [control-plane] Using manifest folder "/etc/kubernetes/manifests""
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-apiserver""
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-controller-manager""
    L36: "cluster.go:125: [control-plane] Creating static Pod manifest for "kube-scheduler""
    L37: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env""
    L38: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml""
    L39: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 7.002837 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: owb3vv.gfacybjvbk0mn0yk"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.70:6443 --token owb3vv.gfacybjvbk0mn0yk \"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:c501ba6aed02a0b20ca3a9a8bfd4571939b7023734e5f4832db2ed628a53d892 "
    L78: "cluster.go:125: namespace/kube-flannel created"
    L79: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/flannel created"
    L80: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/flannel created"
    L81: "cluster.go:125: serviceaccount/flannel created"
    L82: "cluster.go:125: configmap/kube-flannel-cfg created"
    L83: "cluster.go:125: daemonset.apps/kube-flannel-ds created"
    L84: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L85: "harness.go:583: Found emergency shell on machine 3bcef38a-a912-4060-a227-ad4aa8888cc4 console"
    L86: "harness.go:583: Found systemd unit failed to start (ignition-files.service - Ignition (files). ) on machine 3bcef38a-a912-4060-a227-ad4aa8888cc4 console"
    L87: "harness.go:583: Found systemd dependency unit failed to start (ignition-complete.target - Ignition Complete. ) on machine 3bcef38a-a912-4060-a227-ad4aa8888cc4 console'"
    L88: " "

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)
