[GH-ISSUE #3475] Error: Can't login to NPM-Dashboard "Bad Gateway" - v2.11.0 (latest) #2319

Closed
opened 2026-02-26 07:34:59 +03:00 by kerem · 56 comments
Owner

Originally created by @Gh0stExp10it on GitHub (Jan 19, 2024).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/3475

Checklist

  • Have you pulled and found the error with jc21/nginx-proxy-manager:latest docker image?
    • Yes
  • Are you sure you're not using someone else's docker image?
    • Yes
  • Have you searched for similar issues (both open and closed)?
    • Yes

Describe the bug

After upgrading to the latest version, v2.11.0, logging in to the NPM dashboard (npm-ip:81) no longer works. As a side note, the upgrade was performed automatically on my server because I always use the "latest" tag of the Docker image.

The log shows that no further entries follow the block below:

-------------------------------------
 _   _ ____  __  __
| \ | |  _ \|  \/  |
|  \| | |_) | |\/| |
| |\  |  __/| |  | |
|_| \_|_|   |_|  |_|
-------------------------------------
User:  npm PUID:0 ID:0 GROUP:0
Group: npm PGID:0 ID:0
-------------------------------------

❯ Starting nginx ...
❯ Starting backend ...
[1/19/2024] [12:07:04 PM] [Global   ] › ℹ  info      Using MySQL configuration
[1/19/2024] [12:07:05 PM] [Migrate  ] › ℹ  info      Current database version: 20211108145214

Rolling back to the previous version, v2.10.4, "fixed" the problem: logging in to the dashboard was possible again.

Nginx Proxy Manager Version
v2.11.0 (latest as of 19.01.2024)

To Reproduce
Steps to reproduce the behavior:

  1. Go to npm-ip:81
  2. Insert credentials
  3. Click on "Sign in"
  4. Error message "Bad Gateway" appears

Expected behavior

Login should be possible, as it was in the previous version.

Screenshots

NPM-Dashboard-Bad-Gateway

Operating System

Ubuntu Server 22.04.3 LTS

Additional context

/None/

kerem 2026-02-26 07:34:59 +03:00
  • closed this issue
  • added the
    bug
    label

@Gh0stExp10it commented on GitHub (Jan 19, 2024):

After finishing writing this issue I saw a similar one (unfortunately I did not notice it before) - for reference/duplication: #3473

@erikreimann commented on GitHub (Jan 19, 2024):

I don't get the "Bad Gateway" message - simply nothing happens when I click "Sign in" after the update to the new Docker image. Portainer reports the container as "Unhealthy" with the last output:
parse error: Invalid numeric literal at line 1, column 7 NOT OK

The logs look fine. The Docker host runs Ubuntu 22.04.
Downgrading NPM to v2.10.4 solved the problem for now.

@TheUntouchable commented on GitHub (Jan 19, 2024):

I also get the "Bad Gateway" message, and additionally the Docker health status says:
parse error: Invalid numeric literal at line 1, column 7 NOT OK

Also, I had this in the log after the first start with the new container:
❯ Configuring npm user ... useradd warning: npm's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.

The second start seems to be OK, but the container is still unhealthy and the "Bad Gateway" error remains.
❯ Configuring npm user ... 0 usermod: no changes

@Gh0stExp10it commented on GitHub (Jan 19, 2024):

> I don't get the "Bad Gateway" message - simply nothing happens when I click "Sign in" after the update to the new Docker image. Portainer claims the container to be "Unhealthy" with last output: parse error: Invalid numeric literal at line 1, column 7 NOT OK

If you want to verify it in another way, you could try sending a request via the API. Just change the IP and credentials in the body.
e.g. get a new token via cURL (or via Postman):

curl -X POST --location 'yourIPv4:81/api/tokens' \
--header 'Content-Type: application/json' \
--data-raw '{
    "identity": "yourEmail@mail.com",
    "secret": "yourPassword"
}'

The response should then be something like this:

<html>
<head>
    <title>502 Bad Gateway</title>
</head>
<body>
    <center>
        <h1>502 Bad Gateway</h1>
    </center>
    <hr>
    <center>openresty</center>
</body>
</html>

@spupuz commented on GitHub (Jan 19, 2024):

Same problem as OP for me on 2 different installations.

@LittleNewton commented on GitHub (Jan 19, 2024):

I have the same problem.

@wforumw commented on GitHub (Jan 19, 2024):

I have the same problem.

@LucifersCircle commented on GitHub (Jan 19, 2024):

I have this problem as well. If you downgrade to the previous version it works again, with your normal login/configuration still intact.
Just change the image line in your docker compose from image: 'jc21/nginx-proxy-manager:latest' to image: 'jc21/nginx-proxy-manager:2.10.4' and re-deploy. Hopefully a real fix for this issue will be found so we can update Nginx Proxy Manager.
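For reference, a minimal docker-compose sketch of that pin. The service name, ports, and volume paths here are illustrative assumptions, not taken from this thread; only the image tag comes from the comment above.

```yaml
services:
  app:
    # Pin to the last known-good release instead of the moving "latest" tag
    image: 'jc21/nginx-proxy-manager:2.10.4'
    restart: unless-stopped
    ports:
      - '80:80'    # public HTTP
      - '443:443'  # public HTTPS
      - '81:81'    # admin dashboard
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

After editing, `docker compose pull` followed by `docker compose up -d` re-deploys the pinned version.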

@Xeroxxx commented on GitHub (Jan 19, 2024):

Same problem. Besides that, the database version shows "none", but I'm not sure if this was always the case with SQLite.

[1/19/2024] [9:26:27 PM] [Global   ] › ℹ  info      Using Sqlite: /data/database.sqlite
[1/19/2024] [9:26:28 PM] [Migrate  ] › ℹ  info      Current database version: none

@erikreimann commented on GitHub (Jan 19, 2024):

> Same problem. Beside that database version shows none. [1/19/2024] [9:26:27 PM] [Global ] › ℹ info Using Sqlite: /data/database.sqlite [1/19/2024] [9:26:28 PM] [Migrate ] › ℹ info Current database version: none

"Current database version: none" is normal when using the internal SQLite database.

@danielpmorris87 commented on GitHub (Jan 19, 2024):

Hello! I have 2 instances, both using the github-develop image. I swapped to github-develop a while back to fix some weird issue and forgot to switch back to latest. Anyway, both containers were updated at the same time early this morning. My intranet container is working fine, login and all. My DMZ instance gets the "Bad Gateway" error upon logging in, but the reverse proxy is still working fine. Both containers are virtually identical, except for being on two different VLANs.

I tried to roll back the DMZ container to 2.10.4 but I'm still having the same issue.

@ignacio82 commented on GitHub (Jan 19, 2024):

Downgrading to :2.10.4 solved this issue for me too.

@nemsys54 commented on GitHub (Jan 20, 2024):

Same problem using several different builds from the Unraid app store.
If I access the terminal and delete /data/database.sqlite I am able to log in, but of course all configuration is reset.

From a fresh install:

  1. Change the username and password.
  2. Create a simple single host forwarded to an internal IP address, and pull a new SSL cert.
  3. Everything works fine. I can log out and log in just fine. I can access the host properly internally and externally.
  4. If I restart the container, I get the "Bad Gateway" error on the login screen.

The host is still accessible even after this error.
The error only occurs after restarting the container.
I replicated it on 2 separate servers (both Unraid).
I replicated it with a Portainer compose file.
I replicated it with the NPM official build from the Unraid store.
I replicated it with the binhex build from the Unraid store.
I cannot find any log errors (but maybe I don't know where to look).

nginx

-Nemsys

@danielpmorris87 commented on GitHub (Jan 20, 2024):

An update on my situation:

  1. I downgraded to 2.10.3 and removed the NODE_OPTIONS=--openssl-legacy-provider environment variable. The container came up fine and I was able to log in. I restarted the container multiple times and was able to log in after each restart.

  2. I upgraded to 2.10.4. As before, the container came up fine and I was able to restart and log in with no issues.

  3. I upgraded to 2.11.0 (this time using the latest tag) and saw the following in the console:
    Configuring npm user ... useradd warning: npm's uid 0 outside of the UID_MIN 1000 and UID_MAX 60000 range.
    It appears that TheUntouchable commented on this earlier. Now I am unable to log in.

  4. From here, I downgraded back to 2.10.4 and removed the NODE_OPTIONS=--openssl-legacy-provider environment variable again. BOOM. I'm able to log back in.

I still have no clue why my intranet NPM container on 2.11.0 is working fine, even after a restart.

@gVes commented on GitHub (Jan 20, 2024):

I have the same issue since 2.11.0.
Moving back to 2.10.4 solved the issue for me too.

@cmdflow commented on GitHub (Jan 20, 2024):

Confirmed bug since update to v2.11.0

@marvingerstner commented on GitHub (Jan 20, 2024):

I have the same issue since 2.11.0.

@johnWeak1192 commented on GitHub (Jan 20, 2024):

Same issue since 2.11.0 (with latest tag)
Moving back to 2.10.4 solved the issue for me too.

@masterwishx commented on GitHub (Jan 20, 2024):

Same issue with latest on Ubuntu (Oracle Cloud).
Portainer also shows the container as unhealthy.

@ale82x commented on GitHub (Jan 20, 2024):

Same here.

Changing the following in docker compose

image: 'jc21/nginx-proxy-manager:2.10.4'

solves the issue, but of course uses the old version.

Thank you.

@masterwishx commented on GitHub (Jan 20, 2024):

> change in docker compose

Yes, I saw it. Thanks!

@popallo commented on GitHub (Jan 20, 2024):

Hi all, same for me.

As far as I can see, the startup logs with 2.10.4 are:

[Global   ] › ℹ  info      Using MySQL configuration
[Migrate  ] › ℹ  info      Current database version: 20211108145214
[Setup    ] › ℹ  info      Logrotate Timer initialized
[Setup    ] › ℹ  info      Logrotate completed.
[IP Ranges] › ℹ  info      Fetching IP Ranges from online services...
[IP Ranges] › ℹ  info      Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v4
[IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v6
[SSL      ] › ℹ  info      Let's Encrypt Renewal Timer initialized
[SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
[IP Ranges] › ℹ  info      IP Ranges Renewal Timer initialized
[Global   ] › ℹ  info      Backend PID 168 listening on port 3000 ...

vs 2.11.0:

[Global   ] › ℹ  info      Using MySQL configuration
[Migrate  ] › ℹ  info      Current database version: 20211108145214

Obviously the services are not launching on the new version of the image.

@trapassati commented on GitHub (Jan 20, 2024):

I have the same issue since 2.11.0

@popallo commented on GitHub (Jan 20, 2024):

It works with github-pr-3395, but there are Let's Encrypt errors like these:

[1/20/2024] [8:57:38 PM] [SSL      ] › ✖  error     Error: Command failed: certbot renew --non-interactive --quiet --config "/etc/letsencrypt.ini" --work-dir "/tmp/letsencrypt-lib" --logs-dir "/tmp/letsencrypt-log" --preferred-challenges "dns,http" --disable-hook-validation
Failed to renew certificate npm-10 with error: Some challenges have failed.
Failed to renew certificate npm-11 with error: Some challenges have failed.
Failed to renew certificate npm-12 with error: Some challenges have failed.
Failed to renew certificate npm-13 with error: Some challenges have failed.
Failed to renew certificate npm-14 with error: Some challenges have failed.
Failed to renew certificate npm-15 with error: Some challenges have failed.
Failed to renew certificate npm-8 with error: Some challenges have failed.
Failed to renew certificate npm-9 with error: Some challenges have failed.
All renewals failed. The following certificates could not be renewed:
  /etc/letsencrypt/live/npm-10/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-11/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-12/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-13/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-14/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-15/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-8/fullchain.pem (failure)
  /etc/letsencrypt/live/npm-9/fullchain.pem (failure)
8 renew failure(s), 0 parse failure(s)

And it does not work with the github-bookworm-base version for me.

@tschaerni commented on GitHub (Jan 20, 2024):

I would like to remind people that "same issue" posts aren't really helpful, they just spam the inbox of people who are subscribed to this issue.
I think it's pretty clear by now that upgrading from 2.10.4 to 2.11.0 introduces a bug.

@ToxicToxster commented on GitHub (Jan 21, 2024):

I found the solution. The problem occurs when no certificates have a DNS challenge.
setup.js needs this small addition:
if (!plugins.length) return;
added before:
return certbot.installPlugins(plugins);
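A self-contained sketch of that guard. The function shape, the certificate fields (meta.dns_challenge, meta.dns_provider), and the stubbed certbot wrapper are assumptions for illustration; only the if (!plugins.length) return; line comes from the comment above.

```javascript
// Stub standing in for the real certbot wrapper (assumption: the real
// module exposes an installPlugins(plugins) call, per the comment above).
const certbot = {
	installPlugins: (plugins) => `installing: ${plugins.join(', ')}`,
};

// Hypothetical shape of the startup helper in setup.js.
function setupCertbotPlugins(certificates) {
	// Collect the DNS provider of every cert that uses a DNS challenge.
	const plugins = certificates
		.filter((cert) => cert.meta && cert.meta.dns_challenge)
		.map((cert) => cert.meta.dns_provider);

	// The one-line fix: with no DNS-challenge certs there is nothing to
	// install, so return early instead of invoking certbot at startup.
	if (!plugins.length) return;

	return certbot.installPlugins(plugins);
}
```

Without the guard, a setup with certificates but no DNS challenges would still invoke the plugin installation path on every start, which matches the hang-after-migration logs reported in this thread.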

@jc21 commented on GitHub (Jan 21, 2024):

As @ToxicToxster has identified, yes this only affects startup where there are certs but none of them have DNS challenges.

I've pushed a fix to develop branch, just waiting for a build and I can run a full test of the circumstance.

@WoBBeLnl commented on GitHub (Jan 21, 2024):

Thanks, I just upgraded to the github-develop image and it works again. Great work!

@erikreimann commented on GitHub (Jan 21, 2024):

> As @ToxicToxster has identified, yes this only affects startup where there are certs but none of them have DNS challenges.
>
> I've pushed a fix to develop branch, just waiting for a build and I can run a full test of the circumstance.

Login is possible again with Docker image 2.11.1 - thank you jc21!
Status is also "Healthy" now.

@Gh0stExp10it commented on GitHub (Jan 21, 2024):

The new version, v2.11.1, has been released. Tests after the update were successful.

Description in PR #3482

@jicho commented on GitHub (Jan 21, 2024):

Version 2.11.1 works as wanted (I can log in!)

@harry8326 commented on GitHub (Jan 21, 2024):

Update to latest version worked, thank you for the fast update!

@ToxicToxster commented on GitHub (Jan 21, 2024):

Docker image updated via Watchtower and it works. Container is healthy.

@MoaMoa90 commented on GitHub (Jan 27, 2024):

I got the same problem on 2.11.1, and downgrading to 2.10.4 is also not working.
What shall I do? T.T

@ToxicToxster commented on GitHub (Jan 28, 2024):

> i got same problem in 2.11.1 , downgrade to 2.10.4 is also not working what shall i do? T.T

What kind of certificates do you use? Do you use a DNS challenge?

@TicTac9k1 commented on GitHub (Feb 2, 2024):

> > i got same problem in 2.11.1 , downgrade to 2.10.4 is also not working what shall i do? T.T
>
> What kind of certificates do you use? Do you use DNS-challenge?

I also upgraded to :latest yesterday and use a DNS challenge.
At first there were no apparent problems, but then I started noticing my domains were loading extremely slowly.

I went searching and thought it was a Cloudflare/ISP issue. Sometimes it would connect fast and fine, but only when loading from cache. The majority of the time it would give a 522 error.

Then I wanted to log in to the admin panel and got the "Bad Gateway" error. I pulled the instance down, brought it back up, and the admin panel was inaccessible. Downgrading to the previous version didn't work, as the update probably altered files, so I had to restore from a backup and upgrade that to 2.10.4.

Edit:

Checked my Docker images to be sure. This was due to v2.11.0.
Upgrading from 2.10.4 to 2.11.1 works.

@haldi4803 commented on GitHub (Feb 4, 2024):

stdout: [1;34m❯ [1;36mStarting nginx ...[0m stderr: nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-13/fullchain.pem": BIO_new_file() failed (SSL: error:80000002:system library::No such file or directory:calling fopen(/etc/letsencrypt/live/npm-13/fullchain.pem, r) error:10000080:BIO routines::no such file)

Copying npm-12 to the non-existent npm-13 solved the issue for me -.-

Edit: FML... after a reboot of the Docker container it deleted npm-13 again...
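In container terms, this workaround amounts to recreating the missing certificate directory from an existing one. A rough, untested sketch, assuming a container named `npm` and the `npm-12`/`npm-13` paths from the error above (both are placeholders for your own setup):

```shell
# Stopgap sketch (placeholder names): recreate the cert directory nginx
# expects by copying an existing one inside the container.
docker exec npm cp -r /etc/letsencrypt/live/npm-12 /etc/letsencrypt/live/npm-13

# Reload nginx so it picks up the copied certificate.
docker exec npm nginx -s reload
```

As noted above, a container restart may delete the copied directory again, so this only buys time until the underlying certificate record is fixed.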

@JamesDAdams commented on GitHub (Feb 27, 2024):

I have the same error today; impossible to use NPM:

```sh
nginx: [emerg] cannot load certificate "/etc/letsencrypt/live/npm-28/fullchain.pem": BIO_new_file() failed (SSL: error:80000002:system library::No such file or directory:calling fopen(/etc/letsencrypt/live/npm-28/fullchain.pem, r) error:10000080:BIO routines::no such file)
```

@JamesDAdams commented on GitHub (Feb 27, 2024):

> As @ToxicToxster has identified, yes this only affects startup where there are certs but none of them have DNS challenges.
>
> I've pushed a fix to `develop` branch, just waiting for a build and I can run a full test of the circumstance.

Doesn't work for me

@JamesDAdams commented on GitHub (Feb 27, 2024):

Solution for now: `cp -r npm-16 npm-28` in the live directory.

@vblues commented on GitHub (Mar 24, 2024):

I had this error whenever I used 2.11.x images:

```sh
node[468]: std::unique_ptr node::WorkerThreadsTaskRunner::DelayedTaskScheduler::Start() at ../src/node_platform.cc:68
```

Nothing worked for me until I upgraded my Ubuntu VM from version 20 to 22. Hope this helps someone.

@Ville1ero commented on GitHub (Jun 12, 2024):

Resolving "Bad Gateway" Error in Podman Containers

Hi Team,

I encountered and resolved a recurring "Bad Gateway" error on the Admin Web interface (v2.11.1 © 2024 jc21.com, Theme by Tabler). Despite the error, the sites were operational. Here’s a summary of the steps taken:

Initial Issue:
The Admin Web interface displayed "Bad Gateway," and I was unable to log in, although the sites were up and running.

Attempted Solution:
I created a new container using the same docker-compose.yml file to address the issue. The configuration was as follows:

```yml
version: '3.8'
services:
  superapp:
    image: 'docker.io/jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '2080:80'   # Public HTTP Port
      - '2443:443'  # Public HTTPS Port
      - '2081:81'   # Admin Web Port
    environment:
      DB_MYSQL_HOST: "HOSTDB"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "SUPERUSER"
      DB_MYSQL_PASSWORD: "SUPERPASSWORD"
      DB_MYSQL_NAME: "SUPERDB"
      DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - superdb

  superdb:
    image: 'docker.io/jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'SUPERPASSWORD'
      MYSQL_DATABASE: 'SUPERDB'
      MYSQL_USER: 'SUPERUSER'
      MYSQL_PASSWORD: 'SUPERPASSWORD'
      MARIADB_AUTO_UPGRADE: '1'
    volumes:
      - ./mysql:/var/lib/mysql
```

Observations:
The new container operated without issues for a couple of weeks but eventually failed with the same "Bad Gateway" error. Checking the console, I found the following error message:

```sh
[superapp2] | [6/12/2024] [7:43:45 PM] [Global   ] › ✖  error     Migration table is already locked
[superapp2] | [6/12/2024] [7:43:46 PM] [Migrate  ] › ℹ  info      Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
```

I attempted to run `knex migrate:unlock`, but it was unsuccessful.

Final Solution:
I created a third container and compared the database tables with the previous containers:

New (3rd) Container Tables:
```sh
MariaDB [npm]> show tables;
+--------------------+
| Tables_in_npm      |
+--------------------+
| access_list        |
| access_list_auth   |
| access_list_client |
| audit_log          |
| auth               |
| certificate        |
| dead_host          |
| migrations         |
| migrations_lock    |
| proxy_host         |
| redirection_host   |
| setting            |
| stream             |
| user               |
| user_permission    |
+--------------------+
15 rows in set (0.001 sec)
```

Old (1st and 2nd) Container Tables:
```sh
MariaDB [npm]> show tables;
+----------------------+
| Tables_in_npm        |
+----------------------+
| access_list          |
| access_list_auth     |
| access_list_client   |
| audit_log            |
| auth                 |
| certificate          |
| dead_host            |
| knex_migrations      |
| knex_migrations_lock |
| migrations           |
| migrations_lock      |
| proxy_host           |
| redirection_host     |
| setting              |
| stream               |
| user                 |
| user_permission      |
+----------------------+
17 rows in set (0.001 sec)
```

To resolve the issue, I deleted the extra knex_migrations and knex_migrations_lock tables from the 2nd container, which restored its functionality.

Important Note: This action was performed in a test environment with full data backup. If you choose to replicate this process, ensure you have backups to prevent data loss. Proceed with caution.

I hope this information is helpful.

Best regards,

@ccbadd commented on GitHub (Jul 27, 2024):

> To resolve the issue, I deleted the extra knex_migrations and knex_migrations_lock tables from the 2nd container, which restored its functionality.

Ville1ero, can you explain how to do this? I don't know mysql at all, I'm just trying to get nginx installed. I am assuming this is done through a shell on the mysql container but again, I'm not familiar.

@Ville1ero commented on GitHub (Jul 27, 2024):

> Ville1ero, can you explain how to do this? I don't know mysql at all, I'm just trying to get nginx installed. I am assuming this is done through a shell on the mysql container but again, I'm not familiar.

Hi ccbadd, I recommend against proceeding with this process without a solid understanding of MySQL. It involves data backup and requires familiarity with the steps to avoid data loss.

@ccbadd commented on GitHub (Jul 27, 2024):

> > Ville1ero, can you explain how to do this? I don't know mysql at all, I'm just trying to get nginx installed. I am assuming this is done through a shell on the mysql container but again, I'm not familiar.
>
> Hi ccbadd, I recommend against proceeding with this process without a solid understanding of MySQL. It involves data backup and requires familiarity with the steps to avoid data loss.

Actually it is MariaDB (sorry), and I only installed it as part of the nginx install, so there was no other data to risk. I did find another poster who posted their docker-compose.yml file with a couple of differences, so I tried it just now and that got things working!
Thanks for the quick reply.

@Ville1ero commented on GitHub (Jul 27, 2024):

> Actually it is MariaDB (sorry) and I only installed it as part of the nginx install. There was no other data to risk. I did find another poster who posted their docker-conpose.yml file with a couple of differences so I tried it just now and that got things working! Thanks for the quick reply.

Before proceeding, can you please confirm that you have the same error:

```sh
[Global   ] › ✖  error     Migration table is already locked
[Migrate  ] › ℹ  info      Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
```

@ccbadd commented on GitHub (Jul 27, 2024):

Sorry, I already purged the original containers and reinstalled.

@Ville1ero commented on GitHub (Jul 27, 2024):

OK, if it happens again and it is the same error:

```sh
[Global   ] › ✖  error     Migration table is already locked
[Migrate  ] › ℹ  info      Current database version: 20211108145214
[superapp2] | Can't take lock to run migrations: Migration table is already locked
[superapp2] | If you are sure migrations are not running you can release the lock manually by running 'knex migrate:unlock'
```

These are the detailed steps:

```sh
podman exec --user=root -it [DATABASE_CONTAINER] /bin/sh
mysqldump -uroot -p [DATABASENAME] > 20240727_[DATABASENAME]_backup.sql
mysql -uroot -p [DATABASENAME]
MariaDB [[DATABASENAME]]> DROP TABLE knex_migrations, knex_migrations_lock;
```

It's still working for me.

@D10n1x commented on GitHub (Oct 2, 2024):

Same problem on 2.11.3. Solved by sequentially restarting the containers: first mariadb, then npm.
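The restart order likely matters because the backend takes the migration lock at startup; if the database isn't ready yet, the backend can fail with the lock errors shown earlier in this thread. A minimal sketch, assuming Docker Compose service names `db` and `app` (placeholders for your own compose file):

```shell
# Placeholder service names: restart the database first, give it time to
# come up, then restart NPM so its startup migration finds a ready database.
docker compose restart db
sleep 10   # crude wait; a healthcheck/depends_on condition would be more robust
docker compose restart app
```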

@harroxelas commented on GitHub (Oct 21, 2024):

I'm on 2.11.3 and I'm having the same problem. I tried older versions, but they present the same error. Funny that it was working properly last month. Is anyone else still having this problem on 2.11.3?

@Ville1ero commented on GitHub (Oct 21, 2024):

My solution is still working for me; no issues after more than 3 months.

@gundw commented on GitHub (Jul 13, 2025):

This error is back in 2.12.4; forcing version 2.12.3 fixes it.

@wforumw commented on GitHub (Jul 13, 2025):

> This error is back on 2.12.4, forcing the version 2.12.3 fixes it.

v2.12.6 works fine for me

@dominikbargiel commented on GitHub (Nov 17, 2025):

Hiya, has anyone sorted this? The ticket seems to be closed. I have the same issues with 2.12.3; no changes have been made since the last successful login, just a reboot of the host where NPM sits in Docker:

![Image](https://github.com/user-attachments/assets/2b836eda-400e-42f3-b143-a20809672cc3)

@Ville1ero commented on GitHub (Nov 17, 2025):

> Hi ya, has anyone sorted this as the ticket seems to be closed. I have the same issues with 2.12.3, no changes has been made since last successful login, just reboot of the host where npm sits in docker:
>
> ![Image](https://github.com/user-attachments/assets/2b836eda-400e-42f3-b143-a20809672cc3)

My solution is still working for me; no issues after more than 5 months.

@dominikbargiel commented on GitHub (Nov 18, 2025):

> > Hi ya, has anyone sorted this as the ticket seems to be closed. I have the same issues with 2.12.3, no changes has been made since last successful login, just reboot of the host where npm sits in docker:
> >
> > ![Image](https://github.com/user-attachments/assets/2b836eda-400e-42f3-b143-a20809672cc3)
>
> My solution still working for me, no issues after more than 5 months.

Hi Ville1ero, thank you for the reply. I will try your workaround. The main reason I posted this, however, is that the issue seems to be NOT resolved, or it came back in this version. But thank you anyway...
