
Volume mounting changed in podman 3.0.0 #9354

Closed
afbjorklund opened this issue Feb 13, 2021 · 17 comments · Fixed by #9415
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments


afbjorklund commented Feb 13, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

There was a regression between v3.0.0-rc3 and v3.0.0, causing minikube start to not work with podman driver.

Possibly related to 9de1a68, there are now important files missing from the volume that were supposed to be copied.

Steps to reproduce the issue:

  1. minikube start --driver=podman

Basically, what happens under the hood is something like:

sudo -n podman run --rm --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.17 -d /var/lib

sudo -n podman run --rm --entrypoint /usr/bin/tar --security-opt label=disable -v /home/abjorkl5/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v8-v1.20.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.17 -I lz4 -xf /preloaded.tar -C /extractDir &

sudo -n podman run --cgroup-manager cgroupfs -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var:exec --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 gcr.io/k8s-minikube/kicbase:v0.0.17 &

The first command was supposed to copy the files into the volume, before the second and third ran.

However, the contents of the created volume now look very different, as if all of / had been copied rather than just /var:

sudo podman2 run --rm --entrypoint /usr/bin/test -v old:/var gcr.io/k8s-minikube/kicbase:v0.0.17 -d /var/lib
sudo podman3 run --rm --entrypoint /usr/bin/test -v new:/var gcr.io/k8s-minikube/kicbase:v0.0.17 -d /var/lib
8,4M	/var/lib/containers/storage/volumes/old/_data
953M	/var/lib/containers/storage/volumes/new/_data

It seems the expected contents are actually there, but since they ended up in a subdirectory the container can no longer find them.

Describe the results you received:

The node container now fails to boot, script is failing.

stat: cannot stat '/var/lib/dpkg/alternatives/iptables': No such file or directory

Describe the results you expected:

The node container started successfully, like before.

Additional information you deem important (e.g. issue happens only occasionally):

Happens every time, with podman 3.0.0.

Output of podman version:

Version:      3.0.0
API Version:  3.0.0
Go Version:   go1.15.5
Git Commit:   5b2585f5e91ca148f068cefa647c23f8b1ade622
Built:        Fri Feb 12 16:38:44 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

(paste your output here)

Package info (e.g. output of rpm -q podman or apt list podman):

podman/unknown 100:3.0.0-1 amd64 [upgradable from: 2.2.1~4]

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

No

Additional environment details (AWS, VirtualBox, physical, etc.):

Ubuntu 20.04


Upgrading the deb package is broken, as reported elsewhere. (#9345)

dpkg: error processing archive /tmp/apt-dpkg-install-DB68iM/1-containers-common_100%3a1-7_all.deb (--unpack):
 trying to overwrite '/usr/share/man/man5/containers-auth.json.5.gz', which is also in package containers-image 5.8.1~1
@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Feb 13, 2021
@afbjorklund
Contributor Author

ping @mheon


afbjorklund commented Feb 13, 2021

Reverting 9de1a68 makes it work again, also with v3.0.0.

😄  minikube v1.17.1 on Ubuntu 20.04
✨  Using the podman driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

As I was just trying out kubernetes/minikube#10458 for it

(it is also possible to use CRI-O, as the Container Runtime)


mheon commented Feb 14, 2021

Self-assigning. I'll take this one on Monday.


mheon commented Feb 16, 2021

I'm getting deeper into this and I'm starting to think it's a copier bug. The offending line seems to be:

copier.Get(mountpoint, "", getOptions, []string{v.Dest + "/."}, writer)

Mountpoint is the container's mountpoint from c/storage (the merged dir), and v.Dest in this case is /var, so the final source string is /var/. (no extra slashes or similar); both of these are confirmed by looking at what Podman is passing in. This all seems reasonable. However, instead of copying /var within the container, we're grabbing all of the rootfs. It looks like the source string is being ignored?

@nalind @vrothberg Thoughts?


nalind commented Feb 16, 2021

Based on a quick read of 9de1a68, it looks like the destination path isn't being given to Put(). Get() archives directories and names the archive contents with paths relative to the directory being archived, omitting the directory itself. So if that archive needs to be extracted into the same location relative to the destination rootfs that it occupies relative to the source rootfs, you might want to add a call to copier.Mkdir() to ensure the destination directory exists, and switch to passing filepath.Join(volMount, v.Dest) to Put() as either the destination or the root.


mheon commented Feb 17, 2021

#9415 should fix this.


afbjorklund commented Feb 17, 2021

I think you can use ubuntu:20.04 rather than gcr.io/k8s-minikube/kicbase:v0.0.17 if you want a smaller image.

sudo podman2 run --rm --entrypoint /usr/bin/test -v old:/var ubuntu:20.04 -d /var/lib
sudo podman3 run --rm --entrypoint /usr/bin/test -v new:/var ubuntu:20.04 -d /var/lib
4,5M	/var/lib/containers/storage/volumes/old/_data
78M	/var/lib/containers/storage/volumes/new/_data
$ sudo ls /var/lib/containers/storage/volumes/old/_data
backups  cache	lib  local  lock  log  mail  opt  run  spool  tmp
$ sudo ls /var/lib/containers/storage/volumes/new/_data
bin  boot  dev	etc  home  lib	lib32  lib64  libx32  media  mnt  opt  proc  root  run	sbin  srv  sys	tmp  usr  var

I can confirm that b79e1c6a66eaaa530482a8afdcc2e8b4f4c442ea seems to fix the issue as well.


afbjorklund commented Feb 17, 2021

And for the record, I'm not sure that mounting a volume on /var is a great idea with all the races and everything...

We inherited this from KIND; the original VM-based version mounted under /mnt and used symlinks or bind mounts.

Also, my "clever" test failed, since it was matching against /lib.

I should have tested /var/lib/dpkg instead, if I had known...


mheon commented Feb 17, 2021

@afbjorklund It looks like any image with a populated /var works, so I'm just going to go with fedora-minimal since we cache it for use in CI.

mheon added a commit to mheon/libpod that referenced this issue Feb 17, 2021
Instead of using the container's mountpoint as the base of the
chroot and indexing from there by the volume directory, instead
use the full path of what we want to copy as the base of the
chroot and copy everything in it. This resolves the bug, ends up
being a bit simpler code-wise (no string concatenation, as we
already have the full path calculated for other checks), and
seems more understandable than trying to resolve things on the
destination side of the copy-up.

Fixes containers#9354

Signed-off-by: Matthew Heon <[email protected]>
mheon added a commit to mheon/libpod that referenced this issue Feb 18, 2021

adelton commented Feb 19, 2021

@mheon Any idea when podman 3.0.1 will show up in https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ ? The GitHub Actions seems to have upgraded the Ubuntu 20.04 image to podman 3.0.0 so there is currently no way to run podman-based tests.


mheon commented Feb 19, 2021

@lsm5 Poke - is 3.0.1 building?


lsm5 commented Feb 19, 2021

Will build now ... sorry, I've disabled autobuilds until I'm certain the recent bunch of packaging changes has stabilized :|


lsm5 commented Feb 19, 2021

> @mheon Any idea when podman 3.0.1 will show up in https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_20.04/ ? The GitHub Actions seems to have upgraded the Ubuntu 20.04 image to podman 3.0.0 so there is currently no way to run podman-based tests.

@adelton just curious, is GitHub actions using the kubic repo by default, or did you add the kubic repo to your github actions or something?


adelton commented Feb 19, 2021

In https://github.com/adelton/freeipa-container/runs/1932753092?check_suite_focus=true we did not install any specific podman version; that previously gave us podman 2, but today the job failed with this issue (copier: get: globs [/log/journal] matched nothing (0 filtered out): no such file or directory), indicating that the image now ships podman 3 by default. I have no idea where GitHub Actions gets the packages from.


lsm5 commented Feb 19, 2021

Thanks @adelton!! So it looks like the 3.0.1 package should be ready for 20.04. Let me know if there are any issues.

adelton added a commit to adelton/freeipa-container that referenced this issue Feb 20, 2021
and https://bugzilla.redhat.com/show_bug.cgi?id=1928643."

This reverts commit 54d464f.

Package dependencies got fixed.
Podman 3.0.1 fixes 1928643, a.k.a. containers/podman#9354.
@afbjorklund
Contributor Author

I can confirm that minikube now works again, with podman 3.0.1.

$ apt list podman
Listing... Done
podman/unknown,now 100:3.0.1-1 amd64 [installed]
podman/unknown 100:3.0.1-1 arm64
podman/unknown 100:3.0.1-1 armhf
podman/unknown 100:3.0.1-1 s390x
$ podman version
Version:      3.0.1
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
$ minikube start --driver=podman
😄  minikube v1.17.1 on Ubuntu 20.04
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating podman container (CPUs=2, Memory=7900MB) ...
🐳  Preparing Kubernetes v1.20.2 on Docker 20.10.2 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

Not sure yet if we need a special warning for podman 3.0.0 or not.

We will check that the user is running podman 2.1.0 (or later), though...

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023
6 participants