Podman Pull Images always error when docker not #4251
I also just moved to version 1.6.2-dev, but it still can't run. What's the problem anyway? I'm on CentOS 7.7.
I did something stupid. But podman pull is still a problem: it doesn't work when the connection is really bad, whereas Docker will retry until the image is pulled. Rootless is working now: https://www.reddit.com/r/Fedora/comments/bl100f/problem_with_podman_and_lamp_server/
It seems you are hitting two different problems. @mtrmac, could you please take a look at the pull failures? About the slirp4netns error: how are you creating the container? Are you trying to use a port that is already used on the host?
I already fixed it; it was my own mistake. I forgot that with
podman pod create --name=local -p 80:8080
the pod binds port 80 on the host and 8080 in the containers of that pod, while
podman pod create --name=local -p 8080:80
binds 8080 on the host and 80 in the container. I also forgot that in rootless mode I can't bind to a root-only port, but I got an answer about that in rootless-containers/slirp4netns#154 (comment). Now the other problem is that the container can't connect to the outside world, so I can't use xdebug in my container. podman network ls shows the podman network is there, but when I do
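The `-p` mix-up above is easy to make: the flag is always `HOST:CONTAINER`. A tiny helper makes the direction explicit (the helper name and everything in it is illustrative, not podman code):

```shell
# Read a -p style mapping the way podman does: HOST:CONTAINER.
explain_port_mapping() {
    host=${1%%:*}
    container=${1##*:}
    echo "host port $host -> container port $container"
}

explain_port_mapping 80:8080   # the first command above: host 80 (privileged, needs root)
explain_port_mapping 8080:80   # the corrected command: host 8080 (fine for rootless)
```

Rootless podman cannot bind host ports below 1024 by default, which is why only the second mapping works without root.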
It happened again, and this time it's even worse: my image got mixed up with the mysql image, causing my image to run mysql even though it doesn't contain it. So I tried to re-pull the image, and the problem occurred again.
On the second try, the old problem occurred again.
And after a few more tries it ended up like this:
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
(Without actually understanding the specific root cause, given the available information), the fix basically needs to be to improve the reliability of the network connection. It might, perhaps, be reasonable for the clients to retry on handshake or transfer timeouts (although even in that case it's unclear whether that is justified, or whether the caller should be informed immediately, so that it can e.g. decide to fall back to a different registry quickly without long, pointless retries), but things like
are indistinguishable from deliberate rejections (because servers are down or because network policy prohibits the connection), and it does not seem warranted to me for the code to retry on such failures.
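The distinction drawn above can be sketched as a classifier over the error text. Matching error strings like this is a fragile heuristic for illustration only, not how c/image actually decides; the patterns are common Go networking error messages, not an exhaustive list:

```shell
# Retry only on errors that look transient (timeouts); fail fast on
# deliberate rejections or anything unknown.
is_transient() {
    case "$1" in
        *"i/o timeout"*|*"TLS handshake timeout"*) return 0 ;;              # plausibly worth a retry
        *"connection refused"*|*"no route to host"*|*"no such host"*) return 1 ;;  # deliberate / permanent
        *) return 1 ;;                                                      # unknown: fail fast, don't retry blindly
    esac
}
```

The awkward cases are exactly the ones named in the comment above: "connection refused" from a dead localhost resolver looks permanent, even when the user's real problem is a flaky link.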
Um, the problem is that Docker keeps trying and succeeds, but Podman tries and fails every time if the image is big, especially on a poor connection.
@mtrmac Would adding a --retry flag or something to podman make sense? Or, since we cannot distinguish the cases, we can just fail. On Docker, how does it work if there are multiple registries?
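Before any such flag existed, a caller-side workaround was to wrap the pull in a loop. A minimal sketch; the `retry` helper and its backoff policy are made up here, not Podman behavior:

```shell
# Run a command up to $1 times, sleeping a growing number of seconds
# between attempts; succeeds as soon as the command does.
retry() {
    max=$1; shift
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1
        sleep "$n"          # crude linear backoff between attempts
        n=$((n + 1))
    done
}

# Usage with the image from this report:
# retry 5 podman pull docker.io/benyaminl/lap7-dev:lumen
```

Note this retries on every failure, including the "deliberate rejection" cases discussed above, which is exactly the behavior the maintainers were reluctant to build in.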
Well, we are just failing. Actually, looking at the error message more closely,
These are both localhost addresses. So, it's not just that the network is so bad; it's that (presumably) the network has already brought down the localhost DNS server to the point of complete inoperability. How does anything work in such an environment? (Given the limited information so far), it's just not remotely practical; this must be fixed in the environment, external to c/image and Podman. (BTW, the way pulls work now, we download all blobs to a temporary directory, and only apply them to c/storage when all are successfully downloaded; on failure, the blobs are deleted, and when retrying, the download starts anew from the beginning. Completely incidentally, containers/image#611, among many other things, changes the process to create a c/storage layer immediately after the necessary blob is downloaded; on failure, the layers stay around, and when retrying, only the remaining layers need to be downloaded. So, c/image might eventually behave a bit better when externally induced to retry. OTOH, arguably, leaving orphaned layers around is a bug and we should clean them up on failure. So, this is not a promise that just re-running
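The all-or-nothing staging described above can be illustrated with stubs (the `fetch` function and blob names are made up for illustration; this is not the actual c/image code):

```shell
# Stage every blob into a temp dir; apply to storage only if ALL succeed.
fetch() { echo "data-for-$1"; }   # stub standing in for the registry download

tmp=$(mktemp -d)
pulled=0
for blob in layer1 layer2 layer3; do
    # current behavior: one failure throws away the whole temp dir,
    # so a retry has to re-download everything from scratch
    if ! fetch "$blob" > "$tmp/$blob"; then
        rm -rf "$tmp"
        exit 1
    fi
    pulled=$((pulled + 1))
done
echo "applying $pulled staged blobs to storage"   # the commit-to-c/storage step, stubbed
rm -rf "$tmp"
```

The containers/image#611 design mentioned above would instead commit each layer as it completes, so a retry only re-fetches the layers that never made it.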
From some of the conversation on the #podman IRC: many said that on RH-based labs or enterprise systems they will always use a local or corporate DNS, so I think it's practical to retry a certain number of times before failing, like Docker does.
@mtrmac, how shall we proceed? I suggest we open an issue over at c/image to clean up the temp files in case of an error. We could move this issue over to c/image to discuss if (and in which cases) a retry might be justified. WDYT?
I think we do clean up the temporary files in case of an error. (We won't remove correctly-created intermediate layers in c/storage if creating the rest of the image fails — but, well, that would actually help avoid redundant downloads in this case.) As for discussing automatic retries in c/image — sure, that would be more accurate than Podman, I guess. Still, I can't (yet?) see that it makes sense to retry on localhost DNS failures (#4251 (comment)), so I'm not sure we can make the reporter happy. I'd prefer to just close this "won't fix", but if someone figures out a clean way to retry in the right cases, sure, I guess… It should be easier with containers/image#703 (but that could help with pulls, not pushes).
I trust your guts and agree to close the issue. If someone comes up with a clean way forward, we'd be more than happy and welcome contributions! |
Hey, I use CentOS 8 and have problems with Podman as well. Via WLAN I can pull everything with Podman, but not via LAN (which is important for a server). Smaller pulls such as hello-world, nginx, alpine, etc. work. mysql, Nextcloud, and ownCloud, for example, do not; I always get "connection reset by peer". I am on Cloudflare and have heard that there are problems in this regard; is that true? Docker doesn't cause me any problems at all, but I'd like to use Podman. Any ideas?
This looks like we need the retry support that @QiWang19 was working on in Skopeo?
It might be helpful to have default retry behavior like Buildah's.
SGTM |
The patch doesn't fix the issue for me. The problem is that all chunks are redownloaded, even if just one of them fails. The same error occurs again and again on each try, so this isn't helping at all, unfortunately. |
@ngdio can you provide more information about the errors you get? |
Basically the same thing.
Some chunks don't complete, making the whole pull fail. This is probably due to my internet connection (I've had the same issue on multiple machines), but I experience zero issues anywhere else (downloads, streaming, Docker are all fine). It might be caused by the parallel downloads. The version is 2.0.5 with the retry patch applied on top.
Yeah, if only some layer blobs are downloaded, they are not extracted into c/storage (currently that happens only after all blobs have been downloaded). (Valentin was working on c/image changes that would apply layers as they arrive; that would indirectly help here.)
If that change effectively means the process works just as in Docker (individual layers are redownloaded only if necessary), then it would indeed help. Is there an issue or pull request where I can track progress, or do you have an ETA? I can't use Podman on my network at all right now, so that would be helpful.
I have the same problem with Fedora CoreOS. |
Same issue for me:
Hello, it seems a lot of people still face this problem. I think this needs to be reopened, and maybe Podman could look at what Docker does to work around this kind of broken connection. Thanks.
Opened a new issue: https://github.com/containers/podman/issues/8503 |
Can someone help me with this? TCP 443/80 are currently open.
@Nurlan199206 Please open a separate issue; don't pile onto this one unless there's a clear connection. It's always easier to merge two conversations than to tease them apart. And in the new issue, please explain what exactly is failing; this looks like the image was pulled correctly, and we are only showing incorrect progress bars (containers/image#1013).
@mtrmac, please lock this conversation so no more people bump it.
Hi all, one of our developers pushed code to an Azure Blob Storage account through git, but unfortunately it is missing on GitHub. He was able to recover 2 images, but we don't know the root cause of why it happened. Can anyone help me with this?
Please create a new issue about it and post the output, logs, etc., so people can see what's wrong. This issue was already solved via another patch, so please don't bump it here. Thanks.
/kind bug
Hello. I'm quite frustrated with Podman when trying to pull images. It takes hours and hours to pull images, and it always fails. This is the podman version:
I tried to pull docker.io/benyaminl/lap7-dev:lumen
With Docker it runs okay, but with Podman there is always an error.
Can anyone enlighten me? Not a single person replied to my messages in #podman on Freenode. I'm really confused. Any help is appreciated. Thanks.
Second, when I try to add or run a container inside a pod, an error like this is generated:
This seems connected to #2964.
slirp4netns version 1.41; it still can't run in rootless mode.