[v1.8] /kind bug Unable to access application publicly/outside after exposing port with podman #5167
@AkihiroSuda PTAL - potential issue with the port forwarder?
Yep.
Any chance you can provide the Podman commands in use? Can you verify that the pod has ports attached to it?
Also, could you try …
Note that inside the container everything works as expected and the containers can communicate with each other. In the given example I connected to my Django container from within the Elasticsearch container.
This one runs properly (I can connect to it from outside the container).
So, the issue is specific to podman-compose?
It might be, but it used to work fine before the Podman update (from 1.6.0 to 1.8.0). Do you have any grasp on what could be the cause?
Can't reproduce with:

```yaml
version: '3'
services:
  nginx:
    image: "docker.io/library/nginx:alpine"
    ports:
      - "8080:80"
```
My impression (experiencing the same issue under podman-compose) is that it varies from container to container and is sometimes intermittent. See containers/podman-compose#107 (comment)
Perhaps test with multiple services?
This is what I get. Does this look correct to you, @AkihiroSuda?
OK, the issue is solved locally, though with tricks that were not necessary in Podman 1.6.0: I need to forcibly stop all containers first.
Then it works.
What? That's... rather bizarre.
As above, I don't think this is necessarily a solution - the behaviour appears to be intermittent. I assume it worked in this case whereas in other cases it hasn't, and stopping the containers this way is simply irrelevant.
I'm not saying the issue is globally solved or no longer relevant. I reported symptoms and a workaround to be picked up by someone who has a deep(er) understanding of Podman internals.
The point is that I'm unconvinced that it is a workaround, as opposed to something in the middle of a set of intermittent failures. I have also managed to reproduce the problem with a fresh container:
So in this case the port mapping has again been created inside the container - rather than exposed outside.
So you're seeing an open port inside the container, but not on the host, for 7080? That sounds like a bug with the port forwarder.
Yep - though it's intermittent - if I restart a few times I'll get the port forwarder on the host some of the time.
That definitely sounds like a RootlessKit bug. Can you provide more details about your environment - OS, Podman version? @AkihiroSuda is there any additional info we can get for debugging port forwarding?
FYI, that workaround worked well for me; at least for a couple of hours.
Since upgrading to Podman v1.8.0 I've also started having this issue on two different machines (both running Ubuntu 19.10), so I had to downgrade to v1.7.0. I can consistently reproduce the issue like this:
However, if I remove either the …
Thanks for the report - looks like a LockOSThread issue: https://github.com/containers/libpod/blob/5ea6cad20c9659da9bae38a660da584ee2b58aec/pkg/rootlessport/rootlessport_linux.go#L157
#5167 (comment) is reproducible for me; the exit FD seems to be somehow closed immediately.
@giuseppe Do you have an idea?
The issue seems to happen only with `-d`. Do we inject the other end of the exitFD inside conmon? That would be the way to keep it alive.
PR here: #5245
when using -d and port mapping, make sure the correct fd is injected into conmon. Move the pipe creation earlier as the fd must be known at the time we create the container through conmon. Closes: containers#5167 Signed-off-by: Giuseppe Scrivano <[email protected]>
The NetNS race seems to be another issue; opened a new issue: #5249
That solved my problem.
BUG REPORT
Description
Actually, this is an exact copy of #4715.
I have 6 containers running. `podman ps` tells me they are. `netstat -ntlp` does not include the ports allocated by the containers. However, each of them (internally) has access to all the others, but not from outside the container.
Thus, if my API runs on port 8000, I can't access it, but if I go into any of the containers, I do.
Steps to reproduce the issue:
`podman-compose up`
Describe the results you received:
Containers running, but ports are not accessible from outside containers.
Describe the results you expected:
I expect ports to be accessible from outside containers.
Additional information you deem important (e.g. issue happens only occasionally):
Output of `podman version`:

Output of `podman info --debug`:

Package info (e.g. output of `rpm -q podman` or `apt list podman`):

Additional environment details (AWS, VirtualBox, physical, etc.):
Fedora 31