[GH-ISSUE #3475] Error: Can't login to NPM-Dashboard "Bad Gateway" - v2.11.0 (latest) #2319
Originally created by @Gh0stExp10it on GitHub (Jan 19, 2024).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/3475
Checklist
jc21/nginx-proxy-manager:latest docker image?
Describe the bug
After I upgraded to the latest version, v2.11.0, logging in to the NPM dashboard no longer works (npm-ip:81). As a side note, the upgrade was performed automatically on my server because I always use the "latest" version of the Docker image.
The log shows that no further entries follow the block below:
Rolling back to the previous version, v2.10.4, "fixed" the problem; logging in to the dashboard was possible again.
Nginx Proxy Manager Version
v2.11.0 (latest as of 19 Jan 2024)
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Login should be possible, as it was in the previous version.
Screenshots
Operating System
Ubuntu Server 22.04.3 LTS
Additional context
/None/
@Gh0stExp10it commented on GitHub (Jan 19, 2024):
After finishing writing this issue I saw the similar issue (unfortunately I did not notice it before) - for reference/duplication: #3473
@erikreimann commented on GitHub (Jan 19, 2024):
I don't get the "Bad Gateway" message - simply nothing happens when I click "Sign in" after the update to the new Docker image. Portainer reports the container as "Unhealthy" with the last output:
parse error: Invalid numeric literal at line 1, column 7 NOT OK
The logs look fine. Docker host runs on Ubuntu 22.04
Downgrading NPM to v2.10.4 solved the problem for now
@TheUntouchable commented on GitHub (Jan 19, 2024):
I get also the "Bad Gateway" message and additionally the Docker health status says:
parse error: Invalid numeric literal at line 1, column 7 NOT OK
Also, I had this in the log after the first start with the new container:
❯ Configuring npm user ... useradd warning: npm's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.
The second start seems to be OK, but the container is still unhealthy and shows the "Bad Gateway" error.
❯ Configuring npm user ... 0 usermod: no changes
@Gh0stExp10it commented on GitHub (Jan 19, 2024):
If you want to verify it in another way, you could try sending a request via the API. Just change the IP and credentials in the body.
e.g. get a new token via cURL (or via Postman):
The response should then be something like this:
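A minimal sketch of such a request against NPM's token endpoint (`/api/tokens`); the host and credentials below are placeholders, not values from this thread:

```shell
# Hypothetical host and credentials -- replace with your NPM IP and admin login.
curl -s -X POST "http://192.168.1.10:81/api/tokens" \
     -H "Content-Type: application/json" \
     -d '{"identity": "admin@example.com", "secret": "changeme"}'
# A working backend replies with JSON (a "token" field plus an expiry);
# an HTML "Bad Gateway" page instead means the API itself is down,
# not just the dashboard UI.
```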
@spupuz commented on GitHub (Jan 19, 2024):
Same problem as OP for me on 2 different installations.
@LittleNewton commented on GitHub (Jan 19, 2024):
I have the same problem.
@wforumw commented on GitHub (Jan 19, 2024):
I have the same problem.
@LucifersCircle commented on GitHub (Jan 19, 2024):
I have this problem as well. If you downgrade to the last version it works again with your normal login/configurations still working.
Just change the image line in your docker compose from
image: 'jc21/nginx-proxy-manager:latest'
to
image: 'jc21/nginx-proxy-manager:2.10.4'
and re-deploy. Hopefully it's possible to find a real fix to this issue so we can update nginx proxy manager.
@Xeroxxx commented on GitHub (Jan 19, 2024):
Same problem. Beside that database version shows none, but not sure if this was always the case with sqlite.
[1/19/2024] [9:26:27 PM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite
[1/19/2024] [9:26:28 PM] [Migrate ] › ℹ info Current database version: none
@erikreimann commented on GitHub (Jan 19, 2024):
"Current database version: none" is normal when using the internal Sqlite
@danielpmorris87 commented on GitHub (Jan 19, 2024):
Hello! I have 2 instances using the github-develop image for both. I swapped to github-develop a while back to fix some weird issue and I forgot to switch back to latest. Anyway, both containers were updated at the same time early this morning. My intranet container is working fine, login and all. My DMZ instance is getting the "Bad Gateway" error upon logging in, but the reverse proxy is still working fine. Both containers are virtually identical, except for being on two different VLANs.
I tried to roll back the DMZ container to 2.10.4 but I'm still having the same issue.
@ignacio82 commented on GitHub (Jan 19, 2024):
Downgrading to :2.10.4 solved this issue for me too.
@nemsys54 commented on GitHub (Jan 20, 2024):
Same problem using several different builds from the unraid app store.
If I access the terminal and delete /data/database.sqlite I am able to log in, but of course all configuration is reset.
From fresh install:
Changed the username/password.
Created a simple single host forwarded to an internal IP address, and pulled a new SSL cert.
Everything works fine. I can logout and login just fine. I can access the host properly internally and externally.
If I restart the container I get the same error on the login screen: "Bad Gateway".
The host is still accessible even after this error.
The error only occurs after restarting the container.
I replicated with 2 separate servers. (Both Unraid)
I replicated with a portainer compose file.
I replicated with the Nginx Official build from the unraid store.
I replicated with the binhex build from the unraid store.
I cannot find any log errors (but maybe I don't know where to look).
-Nemsys
@danielpmorris87 commented on GitHub (Jan 20, 2024):
An update on my situation:
I downgraded to 2.10.3 and removed the NODE_OPTIONS=--openssl-legacy-provider environment variable. The container came up fine and I was able to log in. I restarted the container multiple times and was able to log in after each restart.
I upgraded to 2.10.4. As before, container came up good and was able to restart/login with no issues.
I upgraded to 2.11.0 (this time using the latest tag) and saw the following in the console:
Configuring npm user ... useradd warning: npm's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.
It appears that TheUntouchable commented on this earlier. Now I am unable to log in.
From here, I downgraded back to 2.10.4 and removed the NODE_OPTIONS=--openssl-legacy-provider environment variable again. BOOM. I'm able to log back in.
I still have no clue why my intranet NPM container on 2.11.0 is working fine, even after a restart.
@gVes commented on GitHub (Jan 20, 2024):
I have the same issue since 2.11.0.
Moving back to 2.10.4 solved the issue for me too.
@cmdflow commented on GitHub (Jan 20, 2024):
Confirmed bug since update to v2.11.0
@marvingerstner commented on GitHub (Jan 20, 2024):
I have the same issue since 2.11.0.
@johnWeak1192 commented on GitHub (Jan 20, 2024):
Same issue since 2.11.0 (with latest tag)
Moving back to 2.10.4 solved the issue for me too.
@masterwishx commented on GitHub (Jan 20, 2024):
Same issue with latest on Ubuntu (Oracle Cloud).
In Portainer it is also shown as unhealthy.
@ale82x commented on GitHub (Jan 20, 2024):
Same here.
Changing the image line in docker compose to
image: 'jc21/nginx-proxy-manager:2.10.4'
solves the issue, but uses the old version (of course).
Thank you.
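In compose terms, the pin the commenters describe is just the following (a sketch; the service name, ports, and volume paths are assumptions, not from this thread):

```yaml
services:
  npm:
    # Pin the known-good release instead of the moving "latest" tag,
    # so an automated pull cannot move you onto a broken build.
    image: 'jc21/nginx-proxy-manager:2.10.4'
    restart: unless-stopped
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

After editing, re-deploying (e.g. `docker compose up -d`) re-creates the container on the pinned tag.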
@masterwishx commented on GitHub (Jan 20, 2024):
Yes, I saw. Thanks.
@popallo commented on GitHub (Jan 20, 2024):
Hi all, same for me.
As far as I can see, the startup logs with 2.10.4 are:
vs 2.11.0:
Obviously services are not launching on the new version of the image.
@trapassati commented on GitHub (Jan 20, 2024):
I have the same issue since 2.11.0
@popallo commented on GitHub (Jan 20, 2024):
It works with github-pr-3395, but there are Let's Encrypt errors like these:
And it has not worked since the github-bookworm-base version for me.
@tschaerni commented on GitHub (Jan 20, 2024):
I would like to remind people that "same issue" posts aren't really helpful, they just spam the inbox of people who are subscribed to this issue.
I think it's pretty clear by now that upgrading from 2.10.4 to 2.11.0 introduces a bug.
@ToxicToxster commented on GitHub (Jan 21, 2024):
I found the solution. The problem occurs when no certificates have a DNS challenge.
setup.js would need this small piece of code:
if (!plugins.length) return;
added before:
return certbot.installPlugins(plugins);
@jc21 commented on GitHub (Jan 21, 2024):
As @ToxicToxster has identified, yes this only affects startup where there are certs but none of them have DNS challenges.
I've pushed a fix to the develop branch; just waiting for a build and I can run a full test of the circumstance.
@WoBBeLnl commented on GitHub (Jan 21, 2024):
Thanks, I just upgraded to the github-develop image and it works again. Great work!
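For reference, the guard @ToxicToxster described can be sketched as follows (a simplified illustration, not the actual setup.js; the function name is invented):

```javascript
// Simplified sketch of the startup fix: skip plugin installation entirely
// when no certificate uses a DNS challenge, instead of calling certbot
// with an empty plugin list (the case that broke startup in v2.11.0).
function installDnsPlugins(plugins, certbot) {
    // Guard added before the certbot call:
    if (!plugins.length) return Promise.resolve();
    return certbot.installPlugins(plugins);
}
```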
@erikreimann commented on GitHub (Jan 21, 2024):
Login again possible with Docker image 2.11.1 - thank you jc21!
Status also "Healthy" now
@Gh0stExp10it commented on GitHub (Jan 21, 2024):
New version v2.11.1 has been released. Tests were successful after update.
Description in PR #3482
@jicho commented on GitHub (Jan 21, 2024):
Version 2.11.1 works as wanted (I can log in!)
@harry8326 commented on GitHub (Jan 21, 2024):
Update to latest version worked, thank you for the fast update!
@ToxicToxster commented on GitHub (Jan 21, 2024):
Docker updated via Watchtower and works. Container healthy.
@MoaMoa90 commented on GitHub (Jan 27, 2024):
I got the same problem in 2.11.1; downgrading to 2.10.4 is also not working.
What shall I do? T.T
@ToxicToxster commented on GitHub (Jan 28, 2024):
What kind of certificates do you use? Do you use DNS-challenge?
@TicTac9k1 commented on GitHub (Feb 2, 2024):
I also upgraded to :latest yesterday and use DNS-challenge.
At first no apparent problems. But then started noticing my domains were loading extremely slow.
Went out on a search, thought it was a Cloudflare/ISP issue. Sometimes it would connect fast and fine but only when loading from cache. Majority of the time it would give a 522 error.
Then wanted to log in on admin panel, got the bad gateway error. Pulled the instance down, opened it up and the admin panel was inaccessible. Downgrading to previous version didn't work as the update probably altered files, so had to restore from a backup and upgraded that to 2.10.4.
Edit:
Checked my docker images to be sure. This was due to v2.11.0.
Upgrading 2.10.4 to 2.11.1 works.
@haldi4803 commented on GitHub (Feb 4, 2024):
stdout: ❯ Starting nginx ... stderr: nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-13/fullchain.pem": BIO_new_file() failed (SSL: error:80000002:system library::No such file or directory:calling fopen(/etc/letsencrypt/live/npm-13/fullchain.pem, r) error:10000080:BIO routines::no such file)
Copying npm-12 to the non-existent npm-13 solved the issue for me -.-
Edit: FML... after a reboot of the docker container it deleted npm-13 again...
@JamesDAdams commented on GitHub (Feb 27, 2024):
I have the same error today impossible to use NPM :
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-28/fullchain.pem": BIO_new_file() failed (SSL: error:80000002:system library::No such file or directory:calling fopen(/etc/letsencrypt/live/npm-28/fullchain.pem, r) error:10000080:BIO routines::no such file)
@JamesDAdams commented on GitHub (Feb 27, 2024):
Doesn't work for me
@JamesDAdams commented on GitHub (Feb 27, 2024):
Solution for now: cp -r npm-16 npm-28 in the live directory.
@vblues commented on GitHub (Mar 24, 2024):
I had this error whenever I used 2.11.x images:
node[468]: std::unique_ptr node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start() at ../src/node_platform.cc:68
Nothing worked for me until I upgraded my Ubuntu VM from version 20 to 22. Hope this helps someone.
@Ville1ero commented on GitHub (Jun 12, 2024):
Resolving "Bad Gateway" Error in Podman Containers
Hi Team,
I encountered and resolved a recurring "Bad Gateway" error on the Admin Web interface (v2.11.1 © 2024 jc21.com, Theme by Tabler). Despite the error, the sites were operational. Here’s a summary of the steps taken:
Initial Issue:
The Admin Web interface displayed "Bad Gateway," and I was unable to log in, although the sites were up and running.
Attempted Solution:
I created a new container using the same docker-compose.yml file to address the issue. The configuration was as follows:
version: '3.8'
services:
  superapp:
    image: 'docker.io/jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '2080:80' # Public HTTP Port
      - '2443:443' # Public HTTPS Port
      - '2081:81' # Admin Web Port
    environment:
      DB_MYSQL_HOST: "HOSTDB"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "SUPERUSER"
      DB_MYSQL_PASSWORD: "SUPERPASSWORD"
      DB_MYSQL_NAME: "SUPERDB"
      DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - superdb
  superdb:
    image: 'docker.io/jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'SUPERPASSWORD'
      MYSQL_DATABASE: 'SUPERDB'
      MYSQL_USER: 'SUPERUSER'
      MYSQL_PASSWORD: 'SUPERPASSWORD'
      MARIADB_AUTO_UPGRADE: '1'
    volumes:
      - ./mysql:/var/lib/mysql
Observations:
The new container operated without issues for a couple of weeks but eventually failed with the same "Bad Gateway" error. Checking the console, I found the following error message:
[superapp2] | [6/12/2024] [7:43:45 PM] [Global ] › ✖ error Migration table is already locked
[superapp2] | [6/12/2024] [7:43:46 PM] [Migrate ] › ℹ info Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
I attempted to run knex migrate:unlock, but it was unsuccessful.
Final Solution:
I created a third container and compared the database tables with the previous containers:
New (3rd) Container Tables:
MariaDB [npm]> show tables;
+--------------------+
| Tables_in_npm |
+--------------------+
| access_list |
| access_list_auth |
| access_list_client |
| audit_log |
| auth |
| certificate |
| dead_host |
| migrations |
| migrations_lock |
| proxy_host |
| redirection_host |
| setting |
| stream |
| user |
| user_permission |
+--------------------+
15 rows in set (0.001 sec)
Old (1st and 2nd) Container Tables:
MariaDB [npm]> show tables;
+----------------------+
| Tables_in_npm |
+----------------------+
| access_list |
| access_list_auth |
| access_list_client |
| audit_log |
| auth |
| certificate |
| dead_host |
| knex_migrations |
| knex_migrations_lock |
| migrations |
| migrations_lock |
| proxy_host |
| redirection_host |
| setting |
| stream |
| user |
| user_permission |
+----------------------+
17 rows in set (0.001 sec)
To resolve the issue, I deleted the extra knex_migrations and knex_migrations_lock tables from the 2nd container, which restored its functionality.
Important Note: This action was performed in a test environment with full data backup. If you choose to replicate this process, ensure you have backups to prevent data loss. Proceed with caution.
I hope this information is helpful.
Best regards,
@ccbadd commented on GitHub (Jul 27, 2024):
Ville1ero, can you explain how to do this? I don't know mysql at all, I'm just trying to get nginx installed. I am assuming this is done through a shell on the mysql container but again, I'm not familiar.
@Ville1ero commented on GitHub (Jul 27, 2024):
Hi ccbadd, I recommend against proceeding with this process without a solid understanding of MySQL. It involves data backup and requires familiarity with the steps to avoid data loss.
@ccbadd commented on GitHub (Jul 27, 2024):
Actually it is MariaDB (sorry) and I only installed it as part of the nginx install. There was no other data to risk. I did find another poster who posted their docker-compose.yml file with a couple of differences, so I tried it just now and that got things working!
Thanks for the quick reply.
@Ville1ero commented on GitHub (Jul 27, 2024):
Before proceeding, can you please confirm that you have the same error:
[Global ] › ✖ error Migration table is already locked
[Migrate ] › ℹ info Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
@ccbadd commented on GitHub (Jul 27, 2024):
Sorry, I already purged the original containers and reinstalled.
@Ville1ero commented on GitHub (Jul 27, 2024):
Ok, if it happens again, and is the same error:
[Global ] › ✖ error Migration table is already locked
[Migrate ] › ℹ info Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
These are the detailed steps:
podman exec --user=root -it [DATABASE_CONTAINER] /bin/sh
mysqldump -uroot -p [DATABASENAME] > 20240727_[DATABASENAME]_backup.sql
mysql -uroot -p [DATABASENAME]
MariaDB [DATABASENAME]> DROP TABLE knex_migrations, knex_migrations_lock;
For me it's still working.
@D10n1x commented on GitHub (Oct 2, 2024):
The same problem on 2.11.3. Solved by sequentially restarting the containers, first mariadb then npm.
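The restart-order workaround can also be encoded in compose, so the app only starts once the database is actually ready. A sketch using the service names from the compose file posted earlier; the healthcheck command is an assumption and depends on your mariadb image:

```yaml
services:
  superdb:
    healthcheck:
      # Assumed probe: the official mariadb image ships healthcheck.sh;
      # other images may need e.g. a mysqladmin ping instead.
      test: ["CMD", "healthcheck.sh", "--connect"]
      interval: 10s
      retries: 5
  superapp:
    depends_on:
      superdb:
        condition: service_healthy
```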
@harroxelas commented on GitHub (Oct 21, 2024):
I'm on 2.11.3 and I'm having the same problem. Tried using older versions but they present the same error. Funny that it was working properly last month. Anyone still having this problem on 2.11.3?
@Ville1ero commented on GitHub (Oct 21, 2024):
My solution is still working for me; no issues after more than 3 months.
@gundw commented on GitHub (Jul 13, 2025):
This error is back on 2.12.4, forcing the version 2.12.3 fixes it.
@wforumw commented on GitHub (Jul 13, 2025):
v2.12.6 works fine for me
@dominikbargiel commented on GitHub (Nov 17, 2025):
Hi ya, has anyone sorted this, as the ticket seems to be closed? I have the same issue with 2.12.3; no changes have been made since the last successful login, just a reboot of the host where NPM sits in docker:
@Ville1ero commented on GitHub (Nov 17, 2025):
My solution is still working for me; no issues after more than 5 months.
@dominikbargiel commented on GitHub (Nov 18, 2025):
Hi Ville1ero, thank you for the reply. I will try your workaround. The main reason I posted this, however, is that the issue seems to be NOT resolved, or it came back with this version. But thank you anyway...