Bad Gateway at Admin Page: production.json cannot be read #333
Hi, I have the same problem on my RPi 3, and I use the same docker-compose.yaml with NPM version 2.2.0.
In addition, my error log shows this multiple times: `2020/03/21 22:06:24 [error] 223#223: *82 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: nginxproxymanager, request: "GET /api/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "127.0.0.1:81"`
`docker-compose up -d` created a directory `config.json/` instead of a file.
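This is standard Docker bind-mount behavior: if the host-side source path of a volume mount does not exist, Docker creates it as a directory. A hedged workaround sketch, assuming the compose file bind-mounts `./config.json` into the container:

```shell
# If docker-compose already created a directory at this path, remove it,
# then create config.json as a regular file BEFORE starting the stack
# (otherwise Docker will create a directory there again).
rm -rf config.json
echo '{}' > config.json   # placeholder content only; fill in real settings
# then: docker-compose up -d
```

The `{}` content is a placeholder, not a working NPM configuration; the point is only that the path must exist as a file before the bind mount is created.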
I'm getting this too :'( On tag `:2`
You're getting "Bad Gateway" because the upstream server (the node app listening on port 3000) is not running, or errored out and died.
But I'm pretty sure you just didn't create config.json, which is stated clearly in the docs. If you didn't, do what @mspencerl87 said and you're good to go.
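For reference, the config.json referred to here followed roughly this shape in the full-setup docs of that era (a sketch from memory, not copied from the docs; every value is a placeholder and must match the environment of your db container):

```json
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
```

The `host` value is the compose service name of the database container, which is why renaming that service without updating this file also produces a Bad Gateway.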
Interesting. I'm not sure how I missed that, if that was the case. I don't know why, but I tried again yesterday and it all of a sudden decided to work. Both the login and SSL certs are working. I will leave the issue open for the other commenter who is having it.
I don't see anything about it in the Quick Setup, which is what I used because I'm new to Docker, self-hosting, and all that.
Seconding @bookandrelease's sentiments here -- @donmccoy is right. Additionally, it seems like there's a bug where, instead of creating a default config.json, a directory gets created.
I did read the docs for the Full Setup Instructions and still get Bad Gateway.
I can't find any instructions for config.json in the Quick or Full Setup. (Google can't find it either when I use it to search https://nginxproxymanager.com.) The only reference is in the compose yaml file in the Full Setup, and it says to look somewhere that does not exist. I deleted the config.json line in the compose file and it comes up. I'll see what it looks like inside the container and try to recreate it myself... It looks like it has jwt keys in it? That's hard to create, since I don't know what these keys are...
It works nicely for me now, after three changes.
I'm getting the same issue. Changing to another database did not solve the problem. I have been running Nginx-Proxy-Manager for about a year now, and recently, after a restart of the container, it started doing this. Since then, I've completely wiped everything (even the OS, due to another reason) and pulled the latest image using the full installation guide. The problem persists. Additionally, the health check reports an error. Does anybody have a fix for this?
I think I may have a solution. I had a similar issue with a bad gateway. It happened when my Pi 4 lost power and I powered it back on. Here is my original docker-compose file, with the actual user and password information replaced (replace with your own):
So, how I fixed the gateway issue: I had to run
I am adding this because lots of my other containers have that same piece, and they all came up fine and were accessible via their local IP and port even when I couldn't log into Nginx Proxy Manager. Hopefully this will resolve the issue if I lose power again.
Fixed mine by changing the db host from "db" to the local IP.
I'm sure there are many reasons for these errors, but I was getting this same one: `parse error: Invalid numeric literal at line 1, column 7 NOT OK`
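A guess at where this message comes from: the container's health check parses the API's JSON response with jq, and when the backend node app is down the endpoint serves an HTML error page instead, which jq cannot parse. A minimal sketch of that failure mode (the exact health-check command inside the image is an assumption, not something taken from it):

```shell
# Simulate the health check receiving nginx's HTML error page instead of
# the expected JSON. jq rejects HTML, so the check falls through to "NOT OK".
if echo '<html>502 Bad Gateway</html>' | jq -e '.status' >/dev/null 2>&1; then
  echo "OK"
else
  echo "NOT OK"   # the branch you see when the node app on port 3000 is down
fi
```

So the parse error is a symptom, not the cause: it just means the API did not answer with JSON.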
I solved it by changing the docker-compose.yml file:
instead of
I've been getting the same; none of the above fixed it. Even with the new docker compose setup where environment variables are used instead of a config.json: https://nginxproxymanager.com/setup/
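For anyone landing here later: the setup page linked above no longer requires a config.json at all; as far as I can tell it defaults to an embedded SQLite database, so a minimal compose file (sketched from that page, details may have changed since) is roughly:

```yaml
version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'    # public HTTP
      - '443:443'  # public HTTPS
      - '81:81'    # admin UI
    volumes:
      - ./data:/data                    # the SQLite db lives under ./data
      - ./letsencrypt:/etc/letsencrypt
```

With no `DB_MYSQL_*` variables set, there is no separate database container to misconfigure, which sidesteps most of the causes discussed in this thread.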
@harveydobson Just brought up a stack using the exact docker-compose file you posted and it's working fine. Admittedly I'm not using any pre-existing data though.
Hi @jc21, can I say firstly, NPM is absolutely amazing, thank you so much for bringing this into existence! I should have added more details to the above comment...

With regards to this issue: I think the docker-compose set-up that I copied from the set-up guide would work fine using real docker-compose. Just like the OP, I am using Portainer. I simply shared the above docker-compose content to illustrate the use of the environment variables instead of a config.json, as that was the reason some had the issue. (It's different from the one I used originally, so I thought posting it for reference could be useful to someone else in the future.)

The core issue is that Portainer doesn't share the hostnames of the containers, so certain modifications have to be made for it to work. The configuration that actually wasn't working was something like this (originally I was using a completely separate MariaDB set-up which was far more similar to the below, but that didn't work either):
Even this generates the same error. In the end, I gave up and set the database host to the docker network IP address of the database server. This is not ideal, as it could change... but I figure NPM will work fine without the database; it's only needed to edit the config, which is not done very frequently, so I can simply update it each time. My biggest issue here is that I am using a Terramaster NAS that doesn't support docker-compose, and Portainer doesn't support docker-compose fully, so I'm between a rock and a hard place :-D
So yeah, in summary: Portainer can't seem to communicate using docker hostnames, which I already knew. Oddly, I can't seem to get NPM to communicate outside of the docker network. Not sure if this is a side effect of the 'docker stack' concept? A bit frustrating, to say the least :-D Edit: I think this could be a networking limitation of my TNAS.
To confirm the above: please disregard my issue, it does seem to be something strange with the TNAS networking. I have installed a new OS on there and it's all working without a problem.
For me on an RPi 3 running Hypriot,
does not work, but this does:
@wimmme That did not work on my RPi 2.
Hey folks. JSON file or not, and whichever DB variant I try, I'm getting the same result: `connect ECONNREFUSED 172.18.0.2:3306`
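`ECONNREFUSED` on 3306 often just means the app container started before MariaDB finished initializing: plain `depends_on` only orders startup, it does not wait for the database to be ready. A hedged sketch (compose file format 2.1+; images and service names assumed from the other examples in this thread) that gates the app on a passing database healthcheck:

```yaml
version: '2.1'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    depends_on:
      db:
        condition: service_healthy   # start only after the healthcheck passes
  db:
    image: yobasystems/alpine-mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
    healthcheck:
      # $$ escapes the variable so compose passes it through to the container
      test: ["CMD-SHELL", "mysqladmin ping -h localhost -u root -p$$MYSQL_ROOT_PASSWORD"]
      interval: 10s
      timeout: 5s
      retries: 5
```

Note that `condition: service_healthy` is not supported in compose file format 3.x (it returned later in the Compose Specification), which may be part of why the version-2 file posted below works where version-3 files did not.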
I'm on a Raspberry Pi 3. I got this from `docker ps`, under ports: Then I changed the database to mariadb:latest, but the rest of the docker-compose file is untouched. Then I was able to log in using port 83: http://ip_addr:83 Hope it clears things up.
@peterweissdk Already tried that; it did not work for me.
Also getting this (Raspberry Pi 4B, 4 GB). Tried all the solutions; still get `parse error: Invalid numeric literal at line 1, column 7 NOT OK`
After a lot of trouble, someone suggested the following config. The difference is that it uses another container for the database that supports ARM, and I think the important thing is that it uses version 2 instead of 3.

```yaml
version: "2"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      # Public HTTP Port:
      - '80:80'
      # Public HTTPS Port:
      - '443:443'
      # Admin Web Port:
      - '81:81'
    environment:
      # These are the settings to access your db
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
      DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: yobasystems/alpine-mariadb:latest
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
```
@anselal Thank you so much! After a few attempts to modify this, it worked. Thank you, and everybody, for figuring this out!
@jc21 maybe add this docker-compose file to your documentation?
@anselal You are a life saver
Thanks go to MichaIng/DietPi#1622 (comment)
I wanted to leave a quick comment. I had a client that called me needing help, and when I checked, they were also receiving the Bad Gateway error. The first thing I did (it was over Zoom) was a quick Google search, and this was the first page I found. Someone commented earlier that there are multiple reasons for this error. The client was on a slow connection, but I finally got to their logs and realized what had happened: they already had a MariaDB container running on port 3306 for WordPress. Once I stopped that container and rebuilt Nginx-Proxy-Manager, everything worked fine. I thought I'd add this in case someone in the future comes across the same problem I did.
Maybe this is trivial, but if you start renaming service names, double-check that the UI container references the db container's hostname properly. I realized this gotcha after checking the container logs:
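In other words, the `DB_MYSQL_HOST` value must match the db service's name exactly, since compose uses service names as DNS hostnames on the default network. A small sketch with hypothetical names:

```yaml
services:
  npm-db:                        # renamed from "db"
    image: yobasystems/alpine-mariadb:latest
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    environment:
      DB_MYSQL_HOST: "npm-db"    # must match the service name above
```

If the two don't match, the app container gets a DNS failure or connection refused, and the admin UI shows Bad Gateway.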
Bro, you have ended my suffering! I was trying to set up my home server for a month; as soon as I used your example, it worked. Thank you so much! Ευχαριστώ πολύ (thank you very much)!
Be well, brother! This freed my hands too, although in the end I do it with custom settings in nginx.
What I needed to work this out (bad gateway on the login page):
YOOOO THANKS SO MUCH DUDE, THIS HELPED ME SO MUCH
@DonSYS91 Where is this documented? Certainly not in the Quick Setup: https://nginxproxymanager.com/guide/#quick-setup
As this is a fairly old issue and contains some outdated information, like the config.json setup,
I have been trying for days to get this to function with SSL. After a couple of days it finally worked, but I eventually got "internal error" when trying to obtain an SSL certificate for one of the proxies. I then tried to reinstall, but now I get "Bad Gateway" when trying to log in to the admin portal.
Portainer shows the desktop_app_1 container is unhealthy, with an error of `parse error: Invalid numeric literal at line 1, column 7 NOT OK`.
Docker version 19.03.8, build afacb8b7f0
Ubuntu 18.04 LTS
docker-compose version 1.25.4, build 8d51620a
Please help. I have a domain I paid for that also has an SSL certificate through DNSimple.
docker-compose.yaml
desktop_db_1 output:
desktop_app_1 output: