
Podman DNS resolver IP causes nginx name resolution to randomly fail in Docker #9

Closed
mattijv opened this issue Apr 15, 2023 · 4 comments


mattijv commented Apr 15, 2023

(If this issue should go to the main Warden repo instead, I'm happy to move it over there.)

After the Podman default resolver was added to the nginx image in 70be181, the DNS resolution seems broken when running in Docker.

When browsing pages in the Warden environment, requests would randomly (not consistently, but often) fail with a 502 Bad Gateway error from nginx.

Checking the nginx container logs with

# show just the stderr log
docker logs environmentname-nginx-1 -f 1> /dev/null

would reveal errors like this:

[error] 57#57: *1690 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."
[error] 60#60: *1695 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."
[error] 59#59: *1698 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."

Removing the default Podman DNS resolver IP from /etc/nginx/conf.d/default.conf and reloading the nginx configs with nginx -s reload would completely remove the issue (or at least I could not reproduce it anymore). Adding the Podman IP back to the resolver list caused the problem to appear again.
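For reference, the change described above amounts to trimming the resolver directive in the nginx config. A minimal sketch (the Podman resolver IP of 10.89.0.1 and the valid=30s parameter are assumptions for illustration; the actual values are set in the wardenenv/nginx image):

```nginx
# /etc/nginx/conf.d/default.conf (sketch)

# Broken under Docker: the second, Podman-only resolver IP is
# unreachable from a Docker network, so lookups against it time out.
# resolver 127.0.0.11 10.89.0.1 valid=30s;

# Working under Docker: only Docker's embedded DNS at 127.0.0.11.
resolver 127.0.0.11 valid=30s;
```

After editing, the config can be applied in the running container with nginx -s reload as noted above.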

System Information:

  • OS: Ubuntu 22.04.2 LTS / kernel 5.15.0-69.76-generic 5.15.87
  • Container image: latest wardenenv/nginx:1.16 at the time of writing
  • Docker: Docker version 20.10.3, build 48d30b5 (EDIT: also Docker version 23.0.3, build 3e7cbfd)
  • docker-compose: Docker Compose version v2.17.2

EDIT: Noticed my Docker was woefully out of date, but the same problem does happen on the newest version too (Docker version 23.0.3, build 3e7cbfd).


mattijv commented Apr 17, 2023

As additional information: sometimes the environment works fine for a while (I assume due to internal DNS caching in nginx) but switching to the php-debug container (with the XDEBUG_SESSION cookie) usually surfaces the problem again.


navarr commented Apr 17, 2023

Reverted in #10

sprankhub commented

Thanks for finding this, @mattijv, and thanks for reacting so fast, @navarr! I had already wondered about the new 502 issues, but had not found the reason for them yet.


mattijv commented Apr 19, 2023

Thank you for the quick reaction, @navarr! Chiming in to confirm that after the revert and a warden env pull, the problem seems to have been resolved.

@navarr navarr closed this as completed Apr 19, 2023