"fill out specgen: conflict at mount destination /.../: duplicate mount destination" #18454
Comments
Please try this on the latest release, 4.5.
Yeah, same issue on:
@Luap99 - is there any way to work around this (hacks accepted...) for older versions of podman?
The old issue should be fixed in 4.4. I tried your reproducer against main and it still fails; likely a different cause, so I will reopen.
podman debug logs show this:
If you remove the final slash, it works.
The logic which checks for duplicated volumes here did not work correctly because it used filepath.Clean(). However, the writes to the volDestinations map did not, so the strings no longer matched when a destination included a final slash, for example. We can either call Clean() on all paths or on none; I decided to call it on none, because that is what we already do everywhere else. Only the check used Clean(). Fixes containers#18454. Signed-off-by: Paul Holzinger <[email protected]>
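To make the mismatch concrete, here is a minimal sketch of the pre-fix comparison. The actual code is Go and uses filepath.Clean(); the sketch below substitutes Python's posixpath.normpath, which strips a trailing slash the same way, and the set contents are illustrative rather than podman's real data:

```python
import posixpath

# Illustrative stand-in for podman's volDestinations map: destinations
# were written to it raw, trailing slash included.
vol_destinations = {"/config/"}

dest = "/config/"

# The pre-fix check normalized the candidate path before the lookup
# (filepath.Clean in the Go code), so "/config/" became "/config" and
# never matched the raw key written above.
print(posixpath.normpath(dest))                      # /config
print(posixpath.normpath(dest) in vol_destinations)  # False: the check misses
print(dest in vol_destinations)                      # True: a raw comparison matches
```

The fix simply drops Clean() from the check, so the writes and the lookup both use the raw path and compare consistently.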
Podman recently changed the handling of volumes at the container create stage [1] to fix a bug [2], so when the bootstrap image was updated, this started failing in the check-provision lanes [3]. [1] containers/podman#18458 [2] containers/podman#18454 [3] https://prow.ci.kubevirt.io/view/gs/kubevirt-prow/pr-logs/pull/kubevirt_kubevirtci/1024/check-provision-k8s-1.26-centos9/1670717915193675776#1:build-log.txt%3A2275 Signed-off-by: Brian Carey <[email protected]>
Issue Description
When trying to create a container with a mounted config directory using docker-py, I get the following error:
Steps to reproduce the issue
I've put together a reproducer here; the crux of the code that fails is:
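(The original snippet is not reproduced here; below is a hypothetical docker-py sketch in the spirit of the reproducer. The image name and host path are placeholders of mine; the trailing slash on the bind destination is the detail that triggers the error.)

```python
import docker

client = docker.from_env()

# Placeholder image and host path; note the trailing slash on the
# bind destination, which affected podman versions reject.
client.containers.create(
    "alpine",
    volumes={"/tmp/config": {"bind": "/config/", "mode": "rw"}},
)
```

Per the comment above, dropping the trailing slash (using "/config" instead of "/config/") works around the error on affected versions.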
If you want to use the full reproducer, clone https://github.com/cjw296/podman-issues and follow the README.
Describe the results you received
Describe the results you expected
podman info output
Podman in a container
No
Privileged Or Rootless
None
Upstream Latest Release
Yes
Additional environment details
I've reproduced on a relatively new podman v4 as well, and haven't seen this reported before...
Additional information