podman import writes the image 3 times #4019
Comments
@rhatdan Yep, root. BTW: Docker does not need any temporary file for import, and Docker needs only one temporary file for load.
@nalind Any ideas what is going on?
@mtrmac ?
*shrug*
The second copy could be avoidable, with some pretty tricky code, though containers/image#611 is doing a lot and it may not be easy to extricate from the rest. But really, my first recommendation is to use a real image distribution protocol that is random-access and can avoid pulling redundant data, like the docker/distribution one, or even …

Even if we fix the second temporary copy, [X] makes it necessary to create a copy anyway, just to compute its digest to see whether the layer already exists locally. That's always one extra copy, just because the transport method does not contain a manifest. (Or maybe c/storage could handle optimistically creating a layer, computing its digest on the fly, and then deleting it if it turns out to be redundant… note that that is much more expensive than the single large temporary file if the layer ends up being redundant; and it would require …)
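To illustrate the manifest point, a hedged sketch (the registry address and image name here are invented, not from the thread): with a manifest-carrying transport such as docker://, the layer digests are known before any data is transferred, so the destination can skip blobs it already stores instead of making a copy just to learn its digest.

# Sketch only: copy over a transport that carries a manifest.
# skopeo reads the layer digests from the manifest first and fetches
# only the blobs that are missing from local containers/storage.
skopeo copy docker://registry.example.com/myimg:latest containers-storage:myimg:latest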
@mtrmac The thing is that I have the container in Docker on a builder VM and I want to deploy it on a machine I can ssh to only through a jumpserver. So I do …

@mtrmac How can I create the …
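The command in the comment above is cut off in the thread; one plausible shape for such a pipeline (the container and host names are invented) is:

# Hypothetical reconstruction of the truncated pipeline: export the
# container's filesystem on the builder VM and stream it straight into
# podman on the target, hopping through the jump host with ssh -J.
docker export mycontainer | ssh -J jumpuser@jumpserver root@target podman import - myimg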
(To make things easy: reconsider the jumpserver, or tunnel an HTTP proxy through it? I suppose that's not an option.)
Yes, that’s quite a bit more work… but it uses the optimized/optimizable path, naturally avoids redundant copies of the same layers for multiple images, and it should be possible to automate. (There’s still one extra copy without containers/image#611, I’m afraid.)
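A hedged sketch of the registry route being discussed (ports, host names, and image names are assumptions, not from the thread): run a local docker/distribution registry on the builder VM, push the image to it, and let the target pull through the ssh connection by reverse-forwarding the registry port.

# On the builder VM: serve a local docker/distribution registry and push to it.
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimg localhost:5000/myimg:latest
docker push localhost:5000/myimg:latest

# Still on the builder VM: reach the target through the jump host, making
# the local registry visible on the target as localhost:5000.
ssh -J jumpuser@jumpserver -R 5000:localhost:5000 root@target

# On the target, inside that ssh session: pull layer by layer over the tunnel.
podman pull --tls-verify=false localhost:5000/myimg:latest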
Hum, …
(None of this is to say that we are happy to have the extra copies, and it should be fixed as time/priorities allow; it's just to be clear that at least one of them will probably need to remain anyway, and the only way to avoid that one is to avoid the tar-stream approach.)
You are right. I ran out of disk space on the build host because …
@jfilak, does the provided workaround work for you?
Well, I would have to configure Docker and podman on the destination system, and while that is technically possible, I didn't want to mess with that setup. Fortunately, I realized that I can point container storage at the system partition (it can hold 1 image), point tmp at the additional disk space (it can hold 2 images), and pipe the original image from the source system via ssh; that way I managed to import the image.
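A hedged reconstruction of that layout (the mount point, host name, and container name are invented): move podman's temporary files onto the larger scratch disk via TMPDIR, keep the image store on the system partition, and stream the tarball in over ssh so it never needs its own copy on the destination disk.

# Sketch of the described workaround, not the user's exact commands.
# TMPDIR places podman's temporary copies on the bigger scratch disk,
# --root keeps the final image store on the system partition, and the
# tarball is piped over ssh instead of being written out locally first.
ssh builder-vm docker export mycontainer \
  | TMPDIR=/mnt/scratch/tmp podman --root /var/lib/containers/storage import - myimg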
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
No, bot, this is still a fairly serious bug |
@vrothberg So it's closing bugs despite user activity? That seems wrong?
I was surprised as well. Need to check if we can tweak it.
Closing as the provided workaround does the trick. |
/kind bug
Description
I have an exported container (48 GB). I run

podman import ctr.tar myimg

and podman fails because it writes the data 3 times but my disk has only 120 GB.

Steps to reproduce the issue:
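The steps section was left empty; a minimal reproduction sketch (the payload generation is made up, sized to match the report) could look like this:

# Hypothetical repro: a ~48 GB rootfs tarball on a ~120 GB disk.
# Per the discussion above, podman import spools the stream to a
# temporary file, copies it again while computing digests, and then
# commits it to containers/storage, so peak disk usage approaches
# three times the tarball size.
mkdir rootfs
dd if=/dev/zero of=rootfs/big.bin bs=1M count=49152   # ~48 GB payload
tar -C rootfs -cf ctr.tar .
podman import ctr.tar myimg    # fails with "no space left on device"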
Describe the results you received:

Describe the results you expected:
IMPORTED

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Output of podman info --debug:

Package info (e.g. output of rpm -q podman or apt list podman):

Additional environment details (AWS, VirtualBox, physical, etc.):
OpenStack VM, VMWare