(If this issue should go to the main Warden repo instead, I'm happy to move it over there.)
After the Podman default resolver was added to the nginx image in 70be181, DNS resolution appears to be broken when running under Docker.
When browsing pages in the Warden environment, requests would intermittently (but often) receive a 502 Bad Gateway error from nginx.
Checking the nginx container logs with
# show just the stderr log
docker logs environmentname-nginx-1 -f 1> /dev/null
would reveal errors like these:
[error] 57#57: *1690 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."
[error] 60#60: *1695 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."
[error] 59#59: *1698 php-fpm could not be resolved (110: Operation timed out), client: 172.25.0.14, server: , request: "GET /foo/ HTTP/1.1", host: "environmentname.test", referrer: "..."
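The redirect works because the official nginx image sends its access log to stdout and its error log to stderr, so discarding stdout leaves only the errors. The same trick in miniature, using a made-up `demo` function as a stand-in for the container:

```shell
# Hypothetical stand-in for a process that, like nginx, writes
# normal output to stdout and errors to stderr
demo() {
  echo "access log line"               # goes to stdout
  echo "[error] something broke" >&2   # goes to stderr
}

demo 1> /dev/null   # stdout discarded; only the error line remains
```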
Removing the default Podman DNS resolver IP from /etc/nginx/conf.d/default.conf and reloading the nginx configuration with nginx -s reload completely resolved the issue (or at least I could no longer reproduce it). Adding the Podman IP back to the resolver list made the problem reappear.
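For context, the fix amounts to dropping the Podman address from nginx's resolver directive. A minimal sketch of that edit, assuming the directive lists Docker's embedded DNS (127.0.0.11) followed by a Podman gateway address; the 10.89.0.1 address and the valid=30s parameter are placeholders, and the real file lives at /etc/nginx/conf.d/default.conf inside the container:

```shell
# Recreate an assumed form of the resolver line in a local scratch file
echo 'resolver 127.0.0.11 10.89.0.1 valid=30s;' > default.conf

# Strip the Podman resolver IP, which is unreachable under Docker
sed -i 's/ 10\.89\.0\.1//' default.conf

cat default.conf
# resolver 127.0.0.11 valid=30s;
```

Inside the running container, the equivalent edit would be followed by nginx -s reload, as described above.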
As additional information: sometimes the environment works fine for a while (I assume due to nginx's internal DNS caching), but switching to the php-debug container (with the XDEBUG_SESSION cookie) usually surfaces the problem again.
Thanks for finding this, @mattijv, and thanks for reacting so fast, @navarr! I had already been wondering about the new 502 issues but had not yet found the cause.
System Information:
EDIT: I noticed my Docker was woefully out of date, but the same problem occurs on the newest version too (Docker version 23.0.3, build 3e7cbfd).