[GH-ISSUE #2706] Bad Gateway on Frontend after Upgrade to 2.9.21 #1865

Closed
opened 2026-02-26 07:32:47 +03:00 by kerem · 22 comments
Owner

Originally created by @phybersplice on GitHub (Mar 18, 2023).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2706

Checklist

  • Have you pulled and found the error with jc21/nginx-proxy-manager:latest docker image?
    • Yes
  • Are you sure you're not using someone else's docker image?
    • Yes
  • Have you searched for similar issues (both open and closed)?
    • Yes

Describe the bug
Since Watchtower upgraded the container to the latest version, the container restarts and runs; however, I am unable to log in to the frontend, and the logs show errors.

Nginx Proxy Manager Version
2.9.21

Logs
2023-03-18T07:30:44.462695207Z s6-rc: info: service s6rc-oneshot-runner: starting
2023-03-18T07:30:44.465995072Z s6-rc: info: service s6rc-oneshot-runner successfully started
2023-03-18T07:30:44.466098352Z s6-rc: info: service fix-attrs: starting
2023-03-18T07:30:44.470635343Z s6-rc: info: service fix-attrs successfully started
2023-03-18T07:30:44.470747743Z s6-rc: info: service legacy-cont-init: starting
2023-03-18T07:30:44.475032623Z cont-init: info: running /etc/cont-init.d/01_perms.sh
2023-03-18T07:30:44.475160074Z /package/admin/s6-overlay-3.1.4.1/etc/s6-rc/scripts/cont-init: 20: /package/admin/s6-overlay-3.1.4.1/etc/s6-rc/scripts/cont-init: /etc/cont-init.d/01_perms.sh: not found
2023-03-18T07:30:44.475223404Z cont-init: info: /etc/cont-init.d/01_perms.sh exited 127
2023-03-18T07:30:44.475261704Z cont-init: info: running /etc/cont-init.d/01_s6-secret-init.sh
2023-03-18T07:30:44.475348905Z /package/admin/s6-overlay-3.1.4.1/etc/s6-rc/scripts/cont-init: 20: /package/admin/s6-overlay-3.1.4.1/etc/s6-rc/scripts/cont-init: /etc/cont-init.d/01_s6-secret-init.sh: Permission denied
2023-03-18T07:30:44.475406135Z cont-init: info: /etc/cont-init.d/01_s6-secret-init.sh exited 126
2023-03-18T07:30:44.476746421Z cont-init: warning: some scripts exited nonzero
2023-03-18T07:30:44.477097442Z s6-rc: info: service legacy-cont-init successfully started
2023-03-18T07:30:44.477199123Z s6-rc: info: service prepare: starting
2023-03-18T07:30:44.482177186Z ❯ Checking folder structure ...
2023-03-18T07:30:44.492934735Z ❯ Enabling IPV6 in hosts: /etc/nginx/conf.d
2023-03-18T07:30:44.493071006Z ❯ /etc/nginx/conf.d/include/assets.conf
2023-03-18T07:30:44.494690023Z ❯ /etc/nginx/conf.d/include/block-exploits.conf
2023-03-18T07:30:44.496363781Z ❯ /etc/nginx/conf.d/include/force-ssl.conf
2023-03-18T07:30:44.498005978Z ❯ /etc/nginx/conf.d/include/ip_ranges.conf
2023-03-18T07:30:44.499598465Z ❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
2023-03-18T07:30:44.501253803Z ❯ /etc/nginx/conf.d/include/proxy.conf
2023-03-18T07:30:44.502925121Z ❯ /etc/nginx/conf.d/include/ssl-ciphers.conf
2023-03-18T07:30:44.504510958Z ❯ /etc/nginx/conf.d/include/resolvers.conf
2023-03-18T07:30:44.506164905Z ❯ /etc/nginx/conf.d/default.conf
2023-03-18T07:30:44.507717663Z ❯ /etc/nginx/conf.d/production.conf
2023-03-18T07:30:44.514301083Z ❯ Enabling IPV6 in hosts: /data/nginx
2023-03-18T07:30:44.514413493Z ❯ /data/nginx/default_host/site.conf
2023-03-18T07:30:44.516041551Z ❯ /data/nginx/proxy_host/1.conf
2023-03-18T07:30:44.517753429Z ❯ /data/nginx/proxy_host/2.conf
2023-03-18T07:30:44.519437616Z ❯ /data/nginx/proxy_host/3.conf
2023-03-18T07:30:44.521189684Z ❯ /data/nginx/proxy_host/4.conf
2023-03-18T07:30:44.522984612Z ❯ /data/nginx/proxy_host/5.conf
2023-03-18T07:30:44.524717690Z ❯ /data/nginx/proxy_host/6.conf
2023-03-18T07:30:44.526496498Z ❯ /data/nginx/proxy_host/7.conf
2023-03-18T07:30:44.528184116Z ❯ /data/nginx/proxy_host/8.conf
2023-03-18T07:30:44.529967204Z ❯ /data/nginx/proxy_host/9.conf
2023-03-18T07:30:44.531864333Z
2023-03-18T07:30:44.531936283Z -------------------------------------
2023-03-18T07:30:44.531962583Z  _   _ ____  __  __
2023-03-18T07:30:44.531994284Z | \ | |  _ \|  \/  |
2023-03-18T07:30:44.532017744Z |  \| | |_) | |\/| |
2023-03-18T07:30:44.532039104Z | |\  |  __/| |  | |
2023-03-18T07:30:44.532059204Z |_| \_|_|   |_|  |_|
2023-03-18T07:30:44.532080334Z -------------------------------------
2023-03-18T07:30:44.532113064Z
2023-03-18T07:30:44.532281925Z s6-rc: info: service prepare successfully started
2023-03-18T07:30:44.532390335Z s6-rc: info: service nginx: starting
2023-03-18T07:30:44.532472086Z s6-rc: info: service frontend: starting
2023-03-18T07:30:44.532672087Z s6-rc: info: service backend: starting
2023-03-18T07:30:44.536224543Z s6-rc: info: service frontend successfully started
2023-03-18T07:30:44.536397654Z s6-rc: info: service nginx successfully started
2023-03-18T07:30:44.536433794Z s6-rc: info: service backend successfully started
2023-03-18T07:30:44.536643575Z s6-rc: info: service legacy-services: starting
2023-03-18T07:30:44.539354547Z ❯ Starting nginx ...
2023-03-18T07:30:44.539493008Z ❯ Starting backend ...
2023-03-18T07:30:44.543829498Z s6-rc: info: service legacy-services successfully started
2023-03-18T07:30:45.554489212Z [3/18/2023] [7:30:45 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:46.560462785Z [3/18/2023] [7:30:46 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:47.565025631Z [3/18/2023] [7:30:47 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:48.570174530Z [3/18/2023] [7:30:48 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:49.575289338Z [3/18/2023] [7:30:49 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:50.579833474Z [3/18/2023] [7:30:50 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:51.583636767Z [3/18/2023] [7:30:51 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:52.588066903Z [3/18/2023] [7:30:52 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:30:53.593748814Z [3/18/2023] [7:30:53 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
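The repeated `getaddrinfo ENOTFOUND db` errors mean that name resolution for the hostname `db` fails before any TCP connection to MySQL is even attempted. A minimal sketch of the same failure mode (using the reserved `.invalid` TLD, which never resolves, to stand in for the unreachable `db` service name):

```python
import socket

def can_resolve(hostname: str) -> bool:
    """Return True if the hostname resolves; the backend's equivalent
    call is what produces the 'ENOTFOUND db' log lines when it fails."""
    try:
        socket.getaddrinfo(hostname, 3306)
        return True
    except socket.gaierror:
        return False

# '.invalid' is reserved (RFC 2606) and never resolves, standing in
# for the 'db' service name that the broken container cannot see.
print(can_resolve("db.invalid"))  # → False
```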

Operating System
Synology DS1821+ with Docker Containers.

Docker Compose via Portainer
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '8765:80'
      - '81:81'
      - '8907:443'
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
    network_mode: synobridge
    volumes:
      - /volume1/docker/nginx_proxy_manager/data/:/data
      - /volume1/docker/nginx_proxy_manager/letsencrypt:/etc/letsencrypt
    restart: unless-stopped
  db:
    image: 'jc21/mariadb-aria:latest'
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    network_mode: synobridge
    volumes:
      - /volume1/docker/nginx_proxy_manager/mysql:/var/lib/mysql
    restart: unless-stopped
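A likely culprit in the compose file above is `network_mode: synobridge`: it attaches both containers to an externally managed bridge, and Docker's embedded DNS, which is what resolves service names like `db`, is only available on user-defined networks, not on the default or external bridges. A hedged sketch of an alternative (the network name `npm-net` is illustrative, not from the thread) that keeps both services on a user-defined network where `db` resolves automatically:

```yaml
# Sketch: replace 'network_mode: synobridge' on both services with a
# shared user-defined network so the 'db' service name resolves via
# Docker's embedded DNS. The name 'npm-net' is made up for this example.
services:
  app:
    networks:
      - npm-net
  db:
    networks:
      - npm-net
networks:
  npm-net:
    driver: bridge
```

Note that `network_mode` and `networks` are mutually exclusive in compose, so the `network_mode: synobridge` lines would have to be removed; published ports (`8765:80` etc.) continue to work on a user-defined bridge.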

kerem 2026-02-26 07:32:47 +03:00
  • closed this issue
  • added the bug label

@jc21 commented on GitHub (Mar 18, 2023):

This appears to be very strange. I'm also using the MySQL storage and don't have any problems connecting via the internal docker DNS.

Are you able to confirm that your configuration works with 2.9.19 tag?


@phybersplice commented on GitHub (Mar 18, 2023):

Strange indeed. I'll test the tag and report back in a few.


@phybersplice commented on GitHub (Mar 18, 2023):

2.9.19:

2023-03-18T07:53:02.918727808Z [3/18/2023] [7:53:02 AM] [Global ] › ℹ info Generating MySQL knex configuration from environment variables
2023-03-18T07:53:02.922927898Z [3/18/2023] [7:53:02 AM] [Global ] › ⬤ debug Wrote db configuration to config file: ./config/production.json
2023-03-18T07:53:04.696809687Z [3/18/2023] [7:53:04 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db
2023-03-18T07:53:05.702202083Z [3/18/2023] [7:53:05 AM] [Global ] › ✖ error getaddrinfo ENOTFOUND db

Looks like something is severely broken.


@jc21 commented on GitHub (Mar 18, 2023):

Sometimes a docker-compose down and docker-compose up -d can fix a few things in the background.


@phybersplice commented on GitHub (Mar 18, 2023):

I'm using Portainer - I re-deployed the entire stack and I'm still not able to log in.
Strangely, though, some of my proxied sites are working.
I'm going to reboot the NAS and see...


@phybersplice commented on GitHub (Mar 18, 2023):

No go after reboot.


@topa-LE commented on GitHub (Mar 18, 2023):

The latest tag fails with this error:

s6-supervise s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted

With tag 2.9.19 it is OK!


@phybersplice commented on GitHub (Mar 18, 2023):

I blasted away all the docker folders and was able to run :latest, which now loads.
I'll need to reconfigure everything from scratch.


@zolero commented on GitHub (Mar 18, 2023):

I've got the exact same problem. It seems that with a MySQL database the problem goes away? I was already losing my mind a bit yesterday - couldn't figure it out, despite many reinstalls.


@topa-LE commented on GitHub (Mar 18, 2023):

But reconfiguring everything is not a solution. Unfortunately, that is not an option with 50-100 active subdomains. For now I'm staying on 2.9.19.


@jc21 commented on GitHub (Mar 18, 2023):

Sounds like there are multiple issues getting mixed up here. The original issue looks to be internal docker DNS related and, as @phybersplice says, he's seeing it on 2.9.19 as well.

I agree, reconfiguring everything should not be required. I myself have been able to use the latest image without any problems, using a MySQL database. The only difference between what I have and what @phybersplice has is that I'm not using environment variables to configure NPM, so my configuration file does not get altered at startup.

@topa-LE The s6 shutdown stuff you've pasted looks like it has problems stopping a container, not starting the backend.

Perhaps you could all paste some more of your startup logs, and try adding the DEBUG=1 environment variable to your docker configurations to see more detailed output. My startup looks like this; note that I'm using the github-develop (2.9.22 candidate) docker image tag:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Checking folder structure ...
Changing ownership of /data/logs to 0:0
❯ Enabling IPV6 in hosts: /etc/nginx/conf.d
  ❯ /etc/nginx/conf.d/default.conf
  ❯ /etc/nginx/conf.d/include/assets.conf
  ❯ /etc/nginx/conf.d/include/block-exploits.conf
  ❯ /etc/nginx/conf.d/include/force-ssl.conf
  ❯ /etc/nginx/conf.d/include/ip_ranges.conf
  ❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
  ❯ /etc/nginx/conf.d/include/proxy.conf
  ❯ /etc/nginx/conf.d/include/ssl-ciphers.conf
  ❯ /etc/nginx/conf.d/include/resolvers.conf
  ❯ /etc/nginx/conf.d/production.conf
❯ Enabling IPV6 in hosts: /data/nginx
  ❯ /data/nginx/proxy_host/9.conf
  ❯ /data/nginx/proxy_host/106.conf
...
❯ Secrets-init ...

-------------------------------------
 _   _ ____  __  __
| \ | |  _ \|  \/  |
|  \| | |_) | |\/| |
| |\  |  __/| |  | |
|_| \_|_|   |_|  |_|
-------------------------------------

s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service nginx successfully started
s6-rc: info: service frontend successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
❯ Starting nginx ...
❯ Starting backend ...
s6-rc: info: service legacy-services successfully started
[3/18/2023] [11:47:52 AM] [Global   ] › ℹ  info      Manual db configuration already exists, skipping config creation from environment variables
[3/18/2023] [11:47:53 AM] [Migrate  ] › ℹ  info      Current database version: 20211108145214
[3/18/2023] [11:47:53 AM] [Setup    ] › ⬤  debug     JWT Keypair already exists
[3/18/2023] [11:47:53 AM] [Setup    ] › ⬤  debug     Admin user setup not required
[3/18/2023] [11:47:53 AM] [Setup    ] › ⬤  debug     Default setting setup not required
[3/18/2023] [11:48:17 AM] [Setup    ] › ℹ  info      Added Certbot plugins certbot-dns-route53==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+')
[3/18/2023] [11:48:17 AM] [Setup    ] › ℹ  info      Logrotate Timer initialized
[3/18/2023] [11:48:17 AM] [Setup    ] › ℹ  info      Logrotate completed.
[3/18/2023] [11:48:17 AM] [IP Ranges] › ℹ  info      Fetching IP Ranges from online services...
[3/18/2023] [11:48:17 AM] [IP Ranges] › ℹ  info      Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[3/18/2023] [11:48:19 AM] [IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v4
[3/18/2023] [11:48:19 AM] [IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v6
[3/18/2023] [11:48:19 AM] [SSL      ] › ℹ  info      Let's Encrypt Renewal Timer initialized
[3/18/2023] [11:48:19 AM] [SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
[3/18/2023] [11:48:19 AM] [IP Ranges] › ℹ  info      IP Ranges Renewal Timer initialized
[3/18/2023] [11:48:19 AM] [Global   ] › ℹ  info      Backend PID 147 listening on port 3000 ...
[3/18/2023] [11:48:20 AM] [Nginx    ] › ℹ  info      Testing Nginx configuration
[3/18/2023] [11:48:20 AM] [Nginx    ] › ℹ  info      Reloading Nginx
[3/18/2023] [11:48:20 AM] [SSL      ] › ℹ  info      Renew Complete
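The DEBUG=1 variable suggested above goes in the service's environment block of the compose file; a sketch against the compose file posted earlier in this issue:

```yaml
services:
  app:
    environment:
      DEBUG: '1'   # enables the more verbose backend output shown above
```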

@topa-LE commented on GitHub (Mar 18, 2023):

@jc21 Unfortunately, the same problem occurs with github-develop, with ENV DEBUG=1:

s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-supervise s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
s6-svscan: warning: unable to iopause: Operation not permitted
s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted

I'm staying with 2.9.19 for now.


@labodj commented on GitHub (Mar 18, 2023):

@jc21
I experienced problems with .21; it was unable to connect to my external mariadb.
I've just tried github-develop and it seems to be fixed.

Anyway, I saw this error in the log with DEBUG=1:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service prepare: starting
❯ Checking folder structure ...
Changing ownership of /data/logs to 0:0
Disabling IPV6 in hosts
❯ Disabling IPV6 in hosts: /etc/nginx/conf.d
  ❯ /etc/nginx/conf.d/include/letsencrypt-acme-challenge.conf
  ❯ /etc/nginx/conf.d/include/proxy.conf
  ❯ /etc/nginx/conf.d/include/force-ssl.conf
  ❯ /etc/nginx/conf.d/include/ip_ranges.conf
  ❯ /etc/nginx/conf.d/include/block-exploits.conf
  ❯ /etc/nginx/conf.d/include/ssl-ciphers.conf
  ❯ /etc/nginx/conf.d/include/assets.conf
  ❯ /etc/nginx/conf.d/include/resolvers.conf
  ❯ /etc/nginx/conf.d/production.conf
  ❯ /etc/nginx/conf.d/default.conf
Disabling IPV6 in hosts
❯ Disabling IPV6 in hosts: /data/nginx
  ❯ /data/nginx/proxy_host/3.conf
  ❯ /data/nginx/proxy_host/1.conf
  ❯ /data/nginx/default_host/site.conf
❯ Secrets-init ...
[secret-init] Evaluating DB_MYSQL_PASSWORD__FILE ...
[secret-init] Success! DB_MYSQL_PASSWORD set from DB_MYSQL_PASSWORD__FILE
-------------------------------------
 _   _ ____  __  __
| \ | |  _ \|  \/  |
|  \| | |_) | |\/| |
| |\  |  __/| |  | |
|_| \_|_|   |_|  |_|
-------------------------------------
s6-rc: info: service prepare successfully started
s6-rc: info: service nginx: starting
s6-rc: info: service frontend: starting
s6-rc: info: service backend: starting
s6-rc: info: service nginx successfully started
s6-rc: info: service frontend successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
❯ Starting nginx ...
❯ Starting backend ...
s6-rc: info: service legacy-services successfully started
[3/18/2023] [6:54:59 PM] [Global   ] › ℹ  info      Generating MySQL knex configuration from environment variables
[3/18/2023] [6:54:59 PM] [Global   ] › ⬤  debug     Wrote db configuration to config file: ./config/production.json
[3/18/2023] [6:55:01 PM] [Migrate  ] › ℹ  info      Current database version: 20211108145214
[3/18/2023] [6:55:01 PM] [Setup    ] › ℹ  info      Creating a new JWT key pair...
[3/18/2023] [6:55:14 PM] [Setup    ] › ℹ  info      Wrote JWT key pair to config file: /app/config/production.json
[3/18/2023] [6:55:14 PM] [Setup    ] › ⬤  debug     Admin user setup not required
[3/18/2023] [6:55:14 PM] [Setup    ] › ⬤  debug     Default setting setup not required
[3/18/2023] [6:55:14 PM] [Setup    ] › ℹ  info      Logrotate Timer initialized
[3/18/2023] [6:55:14 PM] [Setup    ] › ℹ  info      Logrotate completed.
[3/18/2023] [6:55:14 PM] [IP Ranges] › ℹ  info      Fetching IP Ranges from online services...
[3/18/2023] [6:55:14 PM] [IP Ranges] › ℹ  info      Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
[3/18/2023] [6:55:15 PM] [IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v4
[3/18/2023] [6:55:15 PM] [IP Ranges] › ℹ  info      Fetching https://www.cloudflare.com/ips-v6
[3/18/2023] [6:55:15 PM] [SSL      ] › ℹ  info      Let's Encrypt Renewal Timer initialized
[3/18/2023] [6:55:15 PM] [SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
[3/18/2023] [6:55:15 PM] [IP Ranges] › ℹ  info      IP Ranges Renewal Timer initialized
[3/18/2023] [6:55:15 PM] [Global   ] › ℹ  info      Backend PID 117 listening on port 3000 ...
[3/18/2023] [6:55:17 PM] [Nginx    ] › ℹ  info      Testing Nginx configuration
[3/18/2023] [6:55:17 PM] [Nginx    ] › ℹ  info      Reloading Nginx
[3/18/2023] [6:55:17 PM] [SSL      ] › ℹ  info      Renew Complete
[3/18/2023] [6:55:29 PM] [Express  ] › ⬤  debug     JsonWebTokenError: invalid signature
    at /app/node_modules/jsonwebtoken/verify.js:171:19
    at getSecret (/app/node_modules/jsonwebtoken/verify.js:97:14)
    at Object.module.exports [as verify] (/app/node_modules/jsonwebtoken/verify.js:101:10)
    at /app/models/token.js:71:11
    at new Promise (<anonymous>)
    at Object.load (/app/models/token.js:65:11)
    at /app/lib/access.js:228:20
    at new Promise (<anonymous>)
    at Object.load (/app/lib/access.js:226:11)
    at /app/lib/express/jwt-decode.js:7:10

@tcarlsen commented on GitHub (Mar 18, 2023):

Having the reboot error as well after the update

s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted


@dmalanij commented on GitHub (Mar 19, 2023):

> Perhaps you could all paste some more of your startup logs. And try adding the DEBUG=1 environment variable to your docker configurations to see some more detailed outputs. My startup looks like this; note that I'm using the github-develop (2.9.22 candidate) docker image tag:

@jc21 I just posted some analysis of the logs for 2.9.21 in my reply (https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2707#issuecomment-1475271328) to the probable duplicate issue #2707. The root cause seems to be related to the s6 initialization. I hadn't replied to this thread because the behaviour described originally is slightly different, and the other issue was more like what I saw (as you correctly mentioned about the mixed behaviours).


@jc21 commented on GitHub (Mar 20, 2023):

@labodj the JsonWebTokenError isn't an issue and can be safely ignored, but here's specifically why it happens:

  • You bring up NPM; it creates new keys and saves them to the config file
  • You log in and are given a JWT that is signed with those keys
  • When you restart NPM, since the config file is not mounted and not persistent, it generates new keys again
  • The UI tries to use the JWT from localStorage against the API, and it fails with this error
  • Then you're redirected to login again and you'll receive a new JWT that works
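The sequence above can be sketched with plain openssl as a hedged stand-in for the jsonwebtoken library (the file names and payload here are illustrative, not NPM's actual files): a token signed under the first boot's key pair stops verifying once a restart regenerates the keys.

```shell
# Illustrative sketch: a signature made with the first boot's key pair
# fails verification against the second boot's freshly generated keys.
printf '%s' '{"sub":"admin"}' > payload.json

# First boot: generate a key pair and "log in" (sign the token).
openssl genrsa -out boot1.pem 2048 2>/dev/null
openssl rsa -in boot1.pem -pubout -out boot1.pub 2>/dev/null
openssl dgst -sha256 -sign boot1.pem -out token.sig payload.json
openssl dgst -sha256 -verify boot1.pub -signature token.sig payload.json
# prints: Verified OK

# Restart without a persisted config file: a new key pair is generated,
# so the JWT still held by the browser no longer verifies.
openssl genrsa -out boot2.pem 2048 2>/dev/null
openssl rsa -in boot2.pem -pubout -out boot2.pub 2>/dev/null
openssl dgst -sha256 -verify boot2.pub -signature token.sig payload.json \
  || echo 'invalid signature -> user is redirected to log in again'
```

Mounting `/app/config` (or the whole `/data` volume, depending on your setup) persists the key pair across restarts and avoids the spurious error.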

@phybersplice commented on GitHub (Mar 21, 2023):

@jc21 can this be closed off? I couldn't figure out what was causing the issue with the upgrade, but it is fixed now.
Additionally, is there a function to back up the config to a .tgz file (or something like that) that could be imported into an NPM instance in case something breaks again?


@jc21 commented on GitHub (Mar 23, 2023):

2.9.22 has been released, so I'd recommend using that docker tag instead of github-develop, as I'm about to merge some new stuff to develop.

No, there's no backup mechanism. I do my best not to introduce changes that can't be undone by downgrading to a previous revision version.
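In the absence of a built-in export, a common manual approach is simply to archive the container's bind mounts while NPM is stopped. A hedged sketch, assuming the standard docker-compose layout with `data` and `letsencrypt` mounted into the container (the docker commands are commented out, and the `mkdir` only creates stand-in directories so the archiving step is self-contained):

```shell
# Hedged sketch, not an official NPM feature: back up the bind-mounted
# state directories ("data" and "letsencrypt" in the standard
# docker-compose example; adjust names and paths to your own mounts).
# docker compose stop app          # stop NPM so nothing is written mid-archive
mkdir -p data letsencrypt          # stand-in dirs so this sketch runs anywhere
tar -czf "npm-backup-$(date +%F).tgz" data letsencrypt
# docker compose start app         # bring NPM back up
# Restore by extracting the archive into place before starting the container:
# tar -xzf npm-backup-YYYY-MM-DD.tgz
```

Since all of NPM's state (database, certs, generated nginx configs) lives in those volumes, restoring the archive onto a fresh host and starting the same image tag should reproduce the instance.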


@topa-LE commented on GitHub (Mar 24, 2023):

NPM runs on a Pi4 (arm). Unfortunately, with the 2.9.22 tag I still get the error:

s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted

I have no choice but to stay on version 2.9.19. Does anyone else have this error?


@lbureau-billettiqueservices commented on GitHub (Mar 24, 2023):

> NPM runs on a Pi4 (arm). Unfortunately, with the 2.9.22 tag I still get the error:
>
> s6-svscan: warning: executing into .s6-svscan/crash
> s6-svscan crashed. Killing everything and exiting.
> s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
>
> I have no choice but to stay on version 2.9.19. Does anyone else have this error?

I had the same problem on a Pi3 (arm). Staying on 2.9.19 is a sufficient workaround so far, too.


@nullndr commented on GitHub (Mar 29, 2023):

I have the same error:

s6-svscan: warning: unable to iopause: Operation not permitted
s6-svscan: warning: executing into .s6-svscan/crash
s6-svscan crashed. Killing everything and exiting.
s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted

Both on 2.9.21 and 2.9.22, running on pi3b+ arm


@topa-LE commented on GitHub (Mar 29, 2023):

> I have the same error:
>
> s6-svscan: warning: unable to iopause: Operation not permitted
> s6-svscan: warning: executing into .s6-svscan/crash
> s6-svscan crashed. Killing everything and exiting.
> s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
>
> Both on 2.9.21 and 2.9.22, running on pi3b+ arm

Staying with 2.9.19 for now until the developer fixes the bug.
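For what it's worth, the repeated `Operation not permitted` failures on 32-bit Raspberry Pi hosts match a known pattern from outside this thread: an outdated host libseccomp (or Docker engine) denying newer syscalls that recent s6-overlay base images use. A hedged check, assuming a Debian-based host such as Raspberry Pi OS; the version threshold is approximate:

```shell
# Hedged diagnostic, assuming a Debian-based host (e.g. Raspberry Pi OS):
# old libseccomp builds return EPERM for newer syscalls used by recent
# s6-overlay images, which surfaces as "Operation not permitted" inside
# the container. Roughly, libseccomp2 >= 2.4.4 (plus an up-to-date
# Docker engine) is wanted on armhf.
dpkg-query -W -f='libseccomp2 ${Version}\n' libseccomp2 2>/dev/null \
  || echo 'libseccomp2 not found (non-Debian host?)'
```

If the reported version is old, upgrading `libseccomp2` and Docker on the host, rather than downgrading NPM, may be the more durable fix.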
