Volume mounting changed in podman 3.0.0 #9354
ping @mheon
Reverting 9de1a68 makes it work again, also with
As I was just trying out kubernetes/minikube#10458 for it (it is also possible to use CRI-O as the Container Runtime).
Self-assigning. I'll take this one on Monday.
I'm getting deeper into this and I'm starting to think it's a copier bug. The offending line seems to be:
Mountpoint is the container's mountpoint from c/storage. @nalind @vrothberg Thoughts?
Based on a quick read of 9de1a68, it looks like the destination path isn't being given to Put(). Get() archives directories and names the contents in the archive with paths relative to the directory being archived, omitting the directory itself, so if that archive needs to be extracted into the same location relative to the destination rootfs that it's in relative to the source rootfs, you might want to add a call to copier.Mkdir() to ensure that the destination directory is going to be there, and switch to passing filepath.Join(volMount, v.Dest) to Put() as either the destination or the root.
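The relative-path behavior described above can be sketched with plain tar (not the copier package itself; all directory names below are invented): archiving a directory's contents records entries relative to that directory, so the extraction target has to exist first, which is the analogue of the suggested copier.Mkdir() call before Put().

```shell
# Sketch with plain tar, not the copier package; paths are invented.
src=$(mktemp -d); dst=$(mktemp -d)
mkdir -p "$src/var/lib"
echo data > "$src/var/lib/file"

# Entries in the archive are ./lib, ./lib/file -- no leading "var/".
tar -C "$src/var" -cf "$src/vol.tar" .

# Without this mkdir, the extraction below has no target directory,
# just as Put() needs the destination directory to already exist.
mkdir -p "$dst/var"
tar -C "$dst/var" -xf "$src/vol.tar"

cat "$dst/var/lib/file"   # -> data
```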
#9415 should fix |
I think you can use
$ sudo ls /var/lib/containers/storage/volumes/old/_data
backups cache lib local lock log mail opt run spool tmp
$ sudo ls /var/lib/containers/storage/volumes/new/_data
bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var

Confirming that b79e1c6a66eaaa530482a8afdcc2e8b4f4c442ea seems to fix the issue as well.
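For a quick view of the difference, the two listings above can be compared directly (reproduced here as plain data): one looks like the contents of /var, the other like a full root filesystem, which is the regression's symptom.

```shell
# The two volume listings above as data.
var_listing="backups cache lib local lock log mail opt run spool tmp"
root_listing="bin boot dev etc home lib lib32 lib64 libx32 media mnt opt proc root run sbin srv sys tmp usr var"

# Entries in the rootfs-like listing that a /var copy-up would not have:
extra=""
for e in $root_listing; do
  case " $var_listing " in
    *" $e "*) ;;                 # present in both listings
    *) extra="$extra $e" ;;      # only in the rootfs-like listing
  esac
done
echo "$extra"
```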
And for the record, I'm not sure that mounting a volume on … We inherited it from KIND; the original version on the VM was to mount under … Also, my "clever" test failed, since it was matching against … Should have tested …
@afbjorklund It looks like any image with a populated |
Instead of using the container's mountpoint as the base of the chroot and indexing from there by the volume directory, instead use the full path of what we want to copy as the base of the chroot and copy everything in it. This resolves the bug, ends up being a bit simpler code-wise (no string concatenation, as we already have the full path calculated for other checks), and seems more understandable than trying to resolve things on the destination side of the copy-up. Fixes containers#9354 Signed-off-by: Matthew Heon <[email protected]>
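The before/after shapes of the copy-up described in this commit message can be mimicked with plain tar (temporary directories stand in for the container mountpoint and the volume's _data directory; this is an illustration, not podman's actual code):

```shell
mnt=$(mktemp -d)   # stands in for the container's mountpoint
vol=$(mktemp -d)   # stands in for the volume's _data directory
mkdir -p "$mnt/var/lib"
echo x > "$mnt/var/lib/dpkg"

# Buggy shape: base the copy at the mountpoint and name the volume path,
# so the archive carries a leading "var/" and the volume gains an extra
# directory level.
tar -C "$mnt" -cf - var | tar -C "$vol" -xf -
ls "$vol"          # -> var (contents nested one level too deep)

rm -rf "$vol"/*

# Fixed shape: base the copy at the full source path and take its
# contents, so they land at the top of the volume.
tar -C "$mnt/var" -cf - . | tar -C "$vol" -xf -
ls "$vol"          # -> lib
```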
@mheon Any idea when podman 3.0.1 will show up in https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ ? GitHub Actions seems to have upgraded the Ubuntu 20.04 image to podman 3.0.0, so there is currently no way to run podman-based tests.
@lsm5 Poke - is 3.0.1 building?
Will build now... sorry, I've disabled autobuilds until I'm certain the recent batch of packaging changes has stabilized :|
Building now, will land soon: https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/podman
@adelton Just curious, is GitHub Actions using the kubic repo by default, or did you add the kubic repo to your GitHub Actions, or something?
In https://github.com/adelton/freeipa-container/runs/1932753092?check_suite_focus=true we did not install any specific podman version. That previously gave us podman 2, and today the job failed with this issue (copier: get: globs [/log/journal] matched nothing (0 filtered out): no such file or directory), indicating that the image now defaults to podman 3. I have no idea where GitHub Actions gets the packages from.
Thanks @adelton!! So it looks like the 3.0.1 package should be ready for 20.04. Let me know if there are any issues.
This reverts commit 54d464f ("… and https://bugzilla.redhat.com/show_bug.cgi?id=1928643."). Package dependencies got fixed. Podman 3.0.1 fixes 1928643, a.k.a. containers/podman#9354.
I can confirm that minikube now works again, with podman 3.0.1.

$ apt list podman
Listing... Done
podman/unknown,now 100:3.0.1-1 amd64 [installed]
podman/unknown 100:3.0.1-1 arm64
podman/unknown 100:3.0.1-1 armhf
podman/unknown 100:3.0.1-1 s390x
$ podman version
Version: 3.0.1
API Version: 3.0.0
Go Version: go1.15.2
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64
$ minikube start --driver=podman
😄 minikube v1.17.1 on Ubuntu 20.04
✨ Using the podman (experimental) driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🔥 Creating podman container (CPUs=2, Memory=7900MB) ...
🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
▪ Generating certificates and keys ...
▪ Booting up control plane ...
▪ Configuring RBAC rules ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass
💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Not sure yet if we need a special warning for podman 3.0.0 or not. We will check that the user is running podman 2.1.0 (or later), though...
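The version gate mentioned above can be sketched with a sort -V comparison (a generic snippet, not minikube's actual check; the 2.1.0 floor is taken from the comment above, and the helper name version_ge is invented here):

```shell
# version_ge A B: succeeds when version A >= version B.
# Generic sketch relying on GNU sort -V; not minikube's real check.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required=2.1.0
if version_ge "3.0.1" "$required"; then
  echo "podman version is new enough"
else
  echo "podman older than $required" >&2
fi
```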
Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
There was a regression between v3.0.0-rc3 and v3.0.0, causing minikube start to not work with the podman driver. Possibly related to 9de1a68: there are now important files missing from the volume that were supposed to be copied.
Steps to reproduce the issue:
minikube start --driver=podman
Basically, what happens under the hood is something like:
sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17 -d /var/lib
sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/abjorkl5/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17 -I lz4 -xf /preloaded.tar -C /extractDir &
sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17 &
The first step (1) was supposed to copy the files, before (2) and (3).
However, the contents of the created volume now look very different, as if it had copied all of / rather than just /var. It seems like the contents are actually the same, but since they ended up in a subdirectory, it is not able to find them now.
Describe the results you received:
The node container now fails to boot, script is failing.
stat: cannot stat '/var/lib/dpkg/alternatives/iptables': No such file or directory
Describe the results you expected:
The node container started successfully, like before.
Additional information you deem important (e.g. issue happens only occasionally):
Happens every time, with podman 3.0.0.
Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
No
Additional environment details (AWS, VirtualBox, physical, etc.):
Ubuntu 20.04
Upgrading the deb package is broken, as reported elsewhere. (#9345)