Can not create a cluster when running on BTRFS + LUKS encryption #2411
Can you share `podman info`? I'm wondering if we have an opportunity to detect that LUKS is in use from podman/docker; if we can, we can mount /dev/dm-0 in the same way we do with detecting btrfs and mounting /dev/mapper. If not, we could alternatively maybe finish detection of podman version + detect if remote or not + if not remote, inspect the host filesystem from the kind binary.
Er, actually it seems these are LVM devices (and not LUKS-specific?), in which case we have a bit of a worse problem; for the remote case in particular I'm not sure we can enumerate these cleanly, but we'll need to mount all of them.
At the very least this warrants a https://kind.sigs.k8s.io/docs/user/known-issues/ entry to start, with the workaround.
So here is the `podman info` output:

```json
{
"host": {
"arch": "amd64",
"buildahVersion": "1.21.3",
"cgroupManager": "systemd",
"cgroupVersion": "v2",
"cgroupControllers": [],
"conmon": {
"package": "conmon-2.0.29-2.fc34.x86_64",
"path": "/usr/bin/conmon",
"version": "conmon version 2.0.29, commit: "
},
"cpus": 8,
"distribution": {
"distribution": "fedora",
"version": "34"
},
"eventLogger": "journald",
"hostname": "fedora",
"idMappings": {
"gidmap": [
{
"container_id": 0,
"host_id": 1000,
"size": 1
},
{
"container_id": 1,
"host_id": 100000,
"size": 65536
}
],
"uidmap": [
{
"container_id": 0,
"host_id": 1000,
"size": 1
},
{
"container_id": 1,
"host_id": 100000,
"size": 65536
}
]
},
"kernel": "5.13.8-200.fc34.x86_64",
"memFree": 246652928,
"memTotal": 16473628672,
"ociRuntime": {
"name": "crun",
"package": "crun-0.20.1-1.fc34.x86_64",
"path": "/usr/bin/crun",
"version": "crun version 0.20.1\ncommit: 0d42f1109fd73548f44b01b3e84d04a279e99d2e\nspec: 1.0.0\n+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL"
},
"os": "linux",
"remoteSocket": {
"path": "/run/user/1000/podman/podman.sock"
},
"serviceIsRemote": false,
"security": {
"apparmorEnabled": false,
"capabilities": "CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT",
"rootless": true,
"seccompEnabled": true,
"seccompProfilePath": "/usr/share/containers/seccomp.json",
"selinuxEnabled": true
},
"slirp4netns": {
"executable": "/usr/bin/slirp4netns",
"package": "slirp4netns-1.1.9-1.fc34.x86_64",
"version": "slirp4netns version 1.1.8+dev\ncommit: 6dc0186e020232ae1a6fcc1f7afbc3ea02fd3876\nlibslirp: 4.4.0\nSLIRP_CONFIG_VERSION_MAX: 3\nlibseccomp: 2.5.0"
},
"swapFree": 6640889856,
"swapTotal": 8589930496,
"uptime": "2h 44m 10.18s (Approximately 0.08 days)",
"linkmode": "dynamic"
},
"store": {
"configFile": "/home/florian/.config/containers/storage.conf",
"containerStore": {
"number": 3,
"paused": 0,
"running": 0,
"stopped": 3
},
"graphDriverName": "overlay",
"graphOptions": {
},
"graphRoot": "/home/florian/.local/share/containers/storage",
"graphStatus": {
"Backing Filesystem": "btrfs",
"Native Overlay Diff": "false",
"Supports d_type": "true",
"Using metacopy": "false"
},
"imageStore": {
"number": 17
},
"runRoot": "/run/user/1000/containers",
"volumePath": "/home/florian/.local/share/containers/storage/volumes"
},
"registries": {
"search": [
"registry.fedoraproject.org",
"registry.access.redhat.com",
"docker.io",
"quay.io"
]
},
"version": {
"APIVersion": "3.2.3",
"Version": "3.2.3",
"GoVersion": "go1.16.6",
"GitCommit": "",
"BuiltTime": "Mon Aug 2 21:39:21 2021",
"Built": 1627933161,
"OsArch": "linux/amd64"
}
}
```

I don't think there should be any LVM devices (to be honest, I only used the installation defaults and selected encryption AFAIR, but it has been a while) - I think those should just be the btrfs subvolumes:
(Neither …) I was thinking that it might be enough to check if there is a symlink inside /dev/mapper and, if so, follow it and mount the target as well?
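That check could look roughly like this (a sketch of the idea, not kind's actual code; the loop and paths are illustrative):

```bash
#!/usr/bin/env bash
# Sketch: find symlinks under /dev/mapper and resolve their targets,
# which would then be added as extra mounts on the node.
for link in /dev/mapper/*; do
  [ -L "$link" ] || continue           # skip /dev/mapper/control (not a symlink)
  target="$(readlink -f "$link")"      # e.g. /dev/dm-0
  echo "would mount $link and $target into the node"
done
```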
I hit the same issue on btrfs without LUKS. Using the workaround described by @bergmannf worked for me as well.
We can only do this if we take care to ensure that podman/docker is not running on another host (which people unfortunately do depend on for e.g. CI and so on); otherwise we're inspecting the wrong machine/filesystem, which would be breaking if they differ (we'll try to mount the wrong things). (The discussion about mounting LV devices above applies to following symlinks instead.)
I dug a little deeper, since in my case there is no symlink missing; my btrfs device is just not mounted into the node automatically. I use btrfs without LUKS, therefore there are no `/dev/mapper` devices. My `/etc/fstab`:

```
# /dev/nvme0n1p2
UUID=3e04c83b-1d81-4159-9411-b4ad5bdef790 / btrfs rw,relatime,discard=async,ssd,space_cache,subvolid=256,subvol=/@,subvol=@ 0 0
# /dev/nvme0n1p2
UUID=3e04c83b-1d81-4159-9411-b4ad5bdef790 /home btrfs rw,relatime,discard=async,ssd,space_cache,subvolid=257,subvol=/@home,subvol=@home 0 0
```

Therefore the solution worked out in #1416 does not work in that setup. I'm using …
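For reference, the device backing the root filesystem can be read with `findmnt` (part of util-linux); the output below is illustrative, matching the fstab above:

```console
$ findmnt -no SOURCE /
/dev/nvme0n1p2[/@]
```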
I don't think there's a good way to discover these paths, and docker already is responsible for mounting … If we had very high confidence that the cluster was running against a local runtime and not a remote node, we could have the kind binary attempt to inspect /dev for this, but right now we do not have that confidence and we'd risk breaking remote users by trying to add mounts to the nodes based on inspecting the wrong filesystem.

It's also worth noting that Kubernetes only tests on ext4/overlayfs, and Kubernetes itself has had bugs with other filesystems.
Seeing the same thing as @dahrens ... a stock Fedora installation with BTRFS everywhere. Using the following config file seems to have worked.
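Presumably the config was something along these lines (a sketch, not the commenter's exact file; device paths vary per machine):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # map the host's device-mapper directory into the node so kubelet
  # can resolve the root filesystem's backing device
  - hostPath: /dev/mapper
    containerPath: /dev/mapper
```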
I appreciate that this may be hard to resolve automatically, but it would be good to document it. What would it take to get this added to the "known issues" page? And can someone perhaps explain the nature of the problem? I get that it's failing because something inside the control plane wants access to the host filesystem, but I don't understand why it cares what's happening at the device layer?
Just a PR to this file https://github.com/kubernetes-sigs/kind/blob/main/site/content/docs/user/known-issues.md, contributions are welcome 😁
It fails because kubelet (Kubernetes' node agent) is trying to determine filesystem stats (free space) and can't find the underlying disk. Since last looking at this, someone brought up that it appears to be possible to disable the entire disk isolation system with a feature gate. I'm not sure this is a great answer either, though ...
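The comment doesn't name the gate; assuming it is Kubernetes' `LocalStorageCapacityIsolation`, turning it off in a kind config would look roughly like this:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  # assumption: this is the gate referred to above; it controls kubelet's
  # local ephemeral-storage isolation, which needs the backing-device stats
  LocalStorageCapacityIsolation: false
```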
Ok, so the essential points seem to be:

- kubelet needs to find the block device backing the node's root filesystem in order to gather filesystem stats (free space);
- on btrfs and/or LUKS setups that device lives under `/dev/mapper` / `/dev/dm-*`, and those paths aren't mapped into the kind node container;
- adding the missing paths as `extraMounts` in the cluster config works around it.

If someone can confirm that those basic facts are correct, I'd be happy to put something together.
Following discussions under issue kubernetes-sigs#2411, this documents the problem with finding the rootfs device with BTRFS (and maybe other unrecognised filesystems), along with the workaround of adding the devices as extra mounts. Also threw in a quick reminder at the top of the page about how to obtain logs if cluster creation fails.
I think #2584 is the best we can do for now
Happy to close it - I just retested this on Fedora 37 (with kind 0.17.0), and even with LUKS-encrypted volumes I can't reproduce it.
@bergmannf I can confirm that. Something somewhere done by somebody fixed this. I did …
What happened:
When starting a kind cluster on an encrypted `btrfs` root partition, the `control-plane` won't start up because of an error in the `kubelet`:

On the host the `luks` path is a symlink:

As this path is not available in the container, it fails.
What you expected to happen:
All paths required inside kind should be mapped into the node.
How to reproduce it (as minimally and precisely as possible):
Attempt to create a cluster on an encrypted root partition - in my case I simply installed Fedora and chose to encrypt the system in the installer.
Anything else we need to know?:
The issue is quite simple to fix by also mounting the missing path into the container.
With the following configuration it will work:
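A minimal sketch of such a configuration (the original snippet assumes the symlink's target; here it is taken to be `/dev/dm-0`):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  # target of the host's /dev/mapper LUKS symlink; the dm number varies
  - hostPath: /dev/dm-0
    containerPath: /dev/dm-0
```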
Environment:

- kind version: (use `kind version`): kind v0.11.1 go1.16.4 linux/amd64
- Kubernetes version: (use `kubectl version`):
- Docker version: (use `docker info`): not running docker, but rootless podman
- OS (e.g. from `/etc/os-release`):