[GH-ISSUE #310] Bad Gateway on login attempt #274

Closed
opened 2026-02-26 06:31:53 +03:00 by kerem · 114 comments
Owner

Originally created by @adman120 on GitHub (Feb 28, 2020).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/310

On container launch via the recommended compose file, I am unable to get past the login screen as it tells me I have a bad gateway every time I try and log in.

kerem closed this issue 2026-02-26 06:31:54 +03:00

@ghost commented on GitHub (Feb 29, 2020):

Hello guys, any news regarding this topic?


@Xantios commented on GitHub (Feb 29, 2020):

I know this isn't much of a solution, but I had this very same issue and just let the container run for a couple of hours (as I went to bed) and now I can log in.

Maybe there is a bug in the initial startup?


@ghost commented on GitHub (Feb 29, 2020):

I still can't log in. I'm afraid I've configured it wrong, because I'm new to Docker.


@theniwo commented on GitHub (Mar 4, 2020):

Having the same issue here, running on a Pi 4 with Buster. When starting docker-compose without `-d` I see:

app_1 | [3/4/2020] [12:48:41 PM] [Global ] › ✖ error create table `auth` (`id` int unsigned not null auto_increment primary key, `created_on` datetime not null, `modified_on` datetime not null, `user_id` int unsigned not null, `type` varchar(30) not null, `secret` varchar(255) not null, `meta` json not null, `is_deleted` int unsigned not null default '0') - ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'json not null, `is_deleted` int unsigned not null default '0')' at line 1
app_1 | [3/4/2020] [12:48:42 PM] [Migrate ] › ℹ info Current database version: none
app_1 | [3/4/2020] [12:48:42 PM] [Migrate ] › ℹ info [initial-schema] Migrating Up...
app_1 | migration file "20180618015850_initial.js" failed
app_1 | migration failed with error: create table `auth` (`id` int unsigned not null auto_increment primary key, `created_on` datetime not null, `modified_on` datetime not null, `user_id` int unsigned not null, `type` varchar(30) not null, `secret` varchar(255) not null, `meta` json not null, `is_deleted` int unsigned not null default '0') - ER_PARSE_ERROR: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'json not null, `is_deleted` int unsigned not null default '0')' at line 1


@jc21 commented on GitHub (Mar 5, 2020):

To debug everyone's problems I would need to see the docker-compose file you're using and the output of both the NPM container and the DB container, if you're using one rather than a direct connection to a mysql/maria host.

The JSON field error suggests that the MariaDB version is old. What version are you running?
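A quick way to act on this (my suggestion, not part of the original comment): grab the server's version string with `SELECT VERSION();` and compare it against 10.2, the first MariaDB release with JSON column support.

```shell
# Minimal sketch: decide whether a MariaDB version string is new enough for
# JSON columns (>= 10.2). Substitute the output of `SELECT VERSION();` for
# the example value below.
ver="10.0.38-MariaDB"            # example value standing in for your server
major=${ver%%.*}
rest=${ver#*.}
minor=${rest%%.*}
if [ "$major" -gt 10 ] || { [ "$major" -eq 10 ] && [ "$minor" -ge 2 ]; }; then
  echo "JSON columns supported"
else
  echo "too old for JSON columns"
fi
```

With a 10.0 server like the one that produced the ER_PARSE_ERROR above, this prints "too old for JSON columns".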


@theniwo commented on GitHub (Mar 8, 2020):

[Config](https://pastebin.com/7XPhEmjw)
[Errors](https://pastebin.com/mhQvrufG)
[Errors with arm64v8/mariadb](https://pastebin.com/60DWKixa)
[Docker Version](https://pastebin.com/LZrXbpzR)


@jc21 commented on GitHub (Mar 9, 2020):

@theniwo so from the first error logs, you're using the [jsurf/rpi-mariadb](https://hub.docker.com/r/jsurf/rpi-mariadb) maria image, which is v10.0. JSON field support was added in Maria 10.2.

The second error log has `standard_init_linux.go:211: exec user process caused "exec format error"`, which is a clear indicator that you're using the wrong architecture image for your system. When using Raspbian, no matter if it's an RPi 3 or 4, you need armv7. I can see that the [official mariadb account](https://hub.docker.com/_/mariadb?tab=tags) doesn't make v7 versions. I'll have a stab at making [my version](https://hub.docker.com/repository/docker/jc21/mariadb-aria) multiarch.
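To check for this class of mismatch yourself (a suggested diagnostic, not from the original comment), compare the host's CPU architecture with the architecture of the pulled image; an arm64 image on an armv7 kernel is exactly what produces the "exec format error".

```shell
# `uname -m` reports armv7l on 32-bit Raspbian and aarch64 on a 64-bit OS.
uname -m

# If docker is installed and the image has been pulled, show the platform
# the image was built for (guarded, since docker may be absent here).
if command -v docker >/dev/null 2>&1; then
  docker image inspect --format '{{.Os}}/{{.Architecture}}' jc21/mariadb-aria:10.4
fi
```

If the two don't agree (e.g. `armv7l` host versus `linux/arm64` image), the container will die on start with the error above.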


@jc21 commented on GitHub (Mar 9, 2020):

Try using `jc21/mariadb-aria:10.4` [here](https://hub.docker.com/repository/docker/jc21/mariadb-aria). Should work for Raspbian as well. Let me know how you go!


@mark-130 commented on GitHub (Mar 9, 2020):

I'm having the same issue and getting the following error in Portainer (container showing as unhealthy). I'm using Linux as a base:

> Last output | parse error: Invalid numeric literal at line 1, column 7 NOT OK


@jc21 commented on GitHub (Mar 9, 2020):

A very cryptic message. Is that from the db container or the npm container? If it were a SQL error there would be an error message in the npm container. But if it's an error that happens in the db container on start, it might be an issue of incompatible database versions.
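For what it's worth, "Invalid numeric literal" is the wording a JSON parser emits when it is handed non-JSON, such as an HTML error page. A rough way to reproduce that class of failure (using python3 as a stand-in parser; the container's actual healthcheck tooling may differ):

```shell
# Feed a JSON parser the kind of body a broken backend returns; the parse
# fails, so we fall through to NOT OK, mirroring the healthcheck output.
echo '<html>502 Bad Gateway</html>' | python3 -m json.tool 2>/dev/null || echo "NOT OK"
```

So an unhealthy container with that message usually means the health probe got something other than JSON back from the API, not that the database itself is corrupt.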


@theniwo commented on GitHub (Mar 10, 2020):

> Try using `jc21/mariadb-aria:10.4` [here](https://hub.docker.com/repository/docker/jc21/mariadb-aria). Should work for Raspbian as well. Let me know how you go!

Thanks, but I get the

db_1 | standard_init_linux.go:211: exec user process caused "exec format error"

error message on this too.

[Here's](https://pastebin.com/JaG2ZKvv) my new config.


@jc21 commented on GitHub (Mar 10, 2020):

@theniwo ok, it's time to debug your OS. Can you paste the output of this command:

`cat /proc/cpuinfo`


@theniwo commented on GitHub (Mar 10, 2020):

> @theniwo ok, it's time to debug your OS. Can you paste the output of this command:
>
> `cat /proc/cpuinfo`

[/proc/cpuinfo](https://pastebin.com/YP1D8qbT)

EDIT:
Having the same result on a Raspberry Pi 3:
[/proc/cpuinfo](https://pastebin.com/GCgHeLrn)


@Delvien commented on GitHub (Mar 11, 2020):

I'm having the same issue, same config as the OP, same Raspberry Pi CPU.


@theniwo commented on GitHub (Mar 11, 2020):

Versions:

Raspbian 10 buster - 4.19.102-v7l+

docker-ce          5:19.03.7~3-0~raspbian-buster  armhf
docker-ce-cli      5:19.03.7~3-0~raspbian-buster  armhf
docker-compose     1.21.0-3                       all

Client: Docker Engine - Community
 Version:           19.03.7
 API version:       1.40
 Go version:        go1.12.17
 Git commit:        7141c19
 Built:             Wed Mar  4 01:55:10 2020
 OS/Arch:           linux/arm
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.7
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.12.17
  Git commit:       7141c19
  Built:            Wed Mar  4 01:49:01 2020
  OS/Arch:          linux/arm
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683


@drewbeer commented on GitHub (Mar 11, 2020):

This is because you are all missing the config; the instructions don't include it.

`rm -rf config.json` (it's a directory and it shouldn't be), then create a new config.json with:

{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}

and that will fix it.
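A likely reason the directory appears in the first place (my inference from the EISDIR symptoms later in the thread, not an official statement): if a bind-mount source doesn't exist when the container is first started, Docker creates it as a directory. Creating the file before the first `docker-compose up` avoids that:

```shell
# Create config.json as a regular file *before* the first start, so the
# ./config.json:/app/config/production.json bind mount sees a file rather
# than a Docker-created directory.
cat > config.json <<'EOF'
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}
EOF
```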


@theniwo commented on GitHub (Mar 11, 2020):

I have that config.json as a file in the same directory as the docker-compose.yml. I did not clone from git; I created the files manually.


@zerpex commented on GitHub (Mar 14, 2020):

Hi,
Got the same issue:
docker logs tool-nginx_db
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"
standard_init_linux.go:211: exec user process caused "exec format error"

cat /proc/cpuinfo
processor       : 0
model name      : ARMv7 Processor rev 10 (v7l)
BogoMIPS        : 3.00
Features        : half thumb fastmult vfp edsp neon vfpv3 tls vfpd32
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x2
CPU part        : 0xc09
CPU revision    : 10

I don't know if it can help, but it is working with the following image: https://hub.docker.com/r/yobasystems/alpine-mariadb/tags

Regards


@theniwo commented on GitHub (Mar 15, 2020):

I get an ER_ACCESS_DENIED_NO_PASSWORD_ERROR error on this image


@jc21 commented on GitHub (Mar 15, 2020):

Funny, because I am using that same image as a base for mine:
https://github.com/jc21/docker-mariadb-aria/blob/master/Dockerfile


@jdkruzr commented on GitHub (Mar 23, 2020):

This issue still exists for anyone who starts from the instructions. I tried @drewbeer's suggestion, but that didn't fix it either. The issue appears to be that it expects something on the host to be running on port 3000 (node, I guess?) but it isn't.


@miguelwill commented on GitHub (Mar 24, 2020):

> This issue still exists for anyone who starts from the instructions. I tried @drewbeer's suggestion, but that didn't fix it either. The issue appears to be that it expects something on the host to be running on port 3000 (node, I guess?) but it isn't.

Check the log of the nginx-proxy-manager container to see if the npm process loads the configuration or shows an error.


@adman120 commented on GitHub (Mar 24, 2020):

Tfw you start the thread but contribute 0 to it.



@theniwo commented on GitHub (Mar 24, 2020):

I managed to get it running with [yobasystems/alpine-mariadb](https://hub.docker.com/r/yobasystems/alpine-mariadb).


@drewbeer commented on GitHub (Mar 24, 2020):

OK, so I tested this again and yes, it doesn't create the config properly.

fresh install, on startup...

app_1 | [3/24/2020] [11:10:02 PM] [Global ] › ✖ error Config file /app/config/production.json cannot be read. Error code is: EISDIR. Error message is: EISDIR: illegal operation on a directory, read Error: Config file /app/config/production.json cannot be read. Error code is: EISDIR. Error message is: EISDIR:
illegal operation on a directory, read

stop the containers and list the files

drwxr-xr-x 2 root root 4096 Mar 25 00:09 config.json
drwxr-xr-x 8 root root 4096 Mar 25 00:10 data
-rw-r--r-- 1 root root 480 Mar 25 00:08 docker-compose.yaml
drwxr-xr-x 2 root root 4096 Mar 25 00:09 letsencrypt

all dirs are created correctly, however config.json is now a dir also... which is wrong...

so `rm -rf config.json`, then create it as a file and put in

{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "npm",
    "password": "npm",
    "port": 3306
  }
}

save
docker-compose down
docker-compose up -d

fixed.

here is my compose pulled straight from the site

version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
  db:
    image: 'jc21/mariadb-aria:10.4'
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
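The recovery steps above can be made defensive (my addition, following the same fix): detect the directory case before restarting the stack, so the EISDIR loop can't recur.

```shell
# If an earlier `docker-compose up` auto-created config.json as a directory,
# remove it; it must exist as a regular file before the next start.
if [ -d config.json ]; then
  rm -rf config.json
  echo "config.json was a directory; recreate it as a file before starting"
fi
```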

@jdkruzr commented on GitHub (Mar 25, 2020):

I’d like to add: if you, like me, thought this would be fine to run in an lxc container with nesting turned on and the appropriate AppArmor flags set:

lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop:

then I regret to inform you that we were both wrong, because nginx-proxy-manager exhibited the same symptoms in this configuration when I correctly set up the config.json file. I had to run it in a VM instead. Can’t speak to why; I don’t grok Docker networking.


@lopugit commented on GitHub (Mar 25, 2020):

I get this in my logs of the app container

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[3/25/2020] [5:29:15 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:29:16 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:29:17 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
(the same "Packets out of order. Got: 1 Expected: 0" line repeats every second through 5:30:16 AM)
[3/25/2020] [5:30:17 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:18 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:19 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:20 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:21 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:22 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:23 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:24 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:25 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:26 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:27 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:28 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:29 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:30 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:31 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:32 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:33 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:34 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:35 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:36 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:37 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:38 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:39 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:40 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:41 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:42 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:43 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:44 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:45 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:46 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:47 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:48 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:49 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:50 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:51 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:52 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:53 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:54 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:55 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:56 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:57 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:58 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:30:59 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:00 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:01 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:02 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:03 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:04 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:05 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:06 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:07 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/25/2020] [5:31:08 AM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0

And this from my sql container

[i] mysqld not found, creating....
[i] MySQL directory already present, skipping creation
2020-03-25  5:28:43 0 [Note] /usr/bin/mysqld (mysqld 10.4.10-MariaDB) starting as process 1 ...
2020-03-25  5:28:43 0 [Note] InnoDB: Using Linux native AIO
2020-03-25  5:28:43 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-03-25  5:28:43 0 [Note] InnoDB: Uses event mutexes
2020-03-25  5:28:43 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-03-25  5:28:43 0 [Note] InnoDB: Number of pools: 1
2020-03-25  5:28:43 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-03-25  5:28:43 0 [Note] mysqld: O_TMPFILE is not supported on /var/tmp (disabling future attempts)
2020-03-25  5:28:43 0 [Note] InnoDB: Initializing buffer pool, total size = 128M, instances = 1, chunk size = 128M
2020-03-25  5:28:43 0 [Note] InnoDB: Completed initialization of buffer pool
2020-03-25  5:28:43 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-03-25  5:28:45 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-03-25  5:28:45 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-03-25  5:28:45 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-03-25  5:28:45 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-03-25  5:28:45 0 [Note] InnoDB: Waiting for purge to start
2020-03-25  5:28:45 0 [Note] InnoDB: 10.4.10 started; log sequence number 126551; transaction id 15
2020-03-25  5:28:45 0 [Note] InnoDB: Loading buffer pool(s) from /var/lib/mysql/ib_buffer_pool
2020-03-25  5:28:45 0 [Note] InnoDB: Buffer pool(s) load completed at 200325  5:28:45
2020-03-25  5:28:45 0 [Note] Plugin 'FEEDBACK' is disabled.
2020-03-25  5:28:45 0 [Note] Server socket created on IP: '::'.
2020-03-25  5:28:45 0 [ERROR] Missing system table mysql.proxies_priv; please run mysql_upgrade to create it
2020-03-25  5:28:45 6 [Warning] Failed to load slave replication state from table mysql.gtid_slave_pos: 1146: Table 'mysql.gtid_slave_pos' doesn't exist
2020-03-25  5:28:45 0 [Note] Reading of all Master_info entries succeeded
2020-03-25  5:28:45 0 [Note] Added new Master_info '' to hash table
2020-03-25  5:28:45 0 [Note] /usr/bin/mysqld: ready for connections.
Version: '10.4.10-MariaDB'  socket: '/run/mysqld/mysqld.sock'  port: 3306  MariaDB Server
2020-03-25  5:29:15 8 [Warning] Aborted connection 8 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2020-03-25  5:29:16 9 [Warning] Aborted connection 9 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2020-03-25  5:29:17 10 [Warning] Aborted connection 10 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
2020-03-25  5:29:18 11 [Warning] Aborted connection 11 to db: 'unconnected' user: 'unauthenticated' host: '172.18.0.3' (This connection closed normally without authentication)
etc.: the aborted connections continue (12, 13, ...) up to around 400
Author
Owner

@lopugit commented on GitHub (Mar 25, 2020):

Ok, so I can confirm what I've read elsewhere: let it sit for some time (I went to a friend's for a few hours) and come back, and it'll be working, magically... I still wonder what the problem is, though.

My docker has been a little slow to react because I moved it onto an HDD on AWS, so maybe it's just the slow HDD messing with something.

But yeah, if you leave it for a few hours it'll work.

Actually, when I built it on my local machine (SSD), it had the same problem, and I'm guessing it would have fixed itself too. I really wonder what causes this delay.
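Several reports in this thread describe the Bad Gateway clearing on its own after a delay, which looks like a startup race: the app starts hitting the database before MariaDB has finished initializing. Rather than waiting blindly, a small retry loop can gate the app on the DB becoming reachable. This is only a sketch: the `wait_for` helper name and the probe commands in the comments are illustrative, not part of nginx-proxy-manager.

```shell
#!/bin/sh
# Sketch of a "wait for the database" gate. The probe command is injectable;
# in a real compose setup it would be something like:
#   nc -z db 3306            # is the port open?
#   mysqladmin ping -h db    # is the server answering?
wait_for() {
  probe="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then
      echo "ready after $i retries"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "gave up after $tries retries" >&2
  return 1
}
```

For example, a container entrypoint wrapper along the lines of `wait_for 'nc -z db 3306' 60 && exec node index.js` would hold the app back until the DB port answers.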

Author
Owner

@lopugit commented on GitHub (Mar 25, 2020):

Ok so now I only had to wait about 10 minutes, interesting!

Author
Owner

@satanahell commented on GitHub (Mar 26, 2020):

Hello everyone,

I got a similar issue I guess on CentOS 8, but with a different origin:

- Issue: Bad gateway for every logon try on http://localhost:81/login or http://172.2.0.3:81/login
- Possible origin: app_1 logs show cannot connect (EHOSTUNREACH 172.23.0.2:3306), yet mariadb seems to be running fine on port 3306 on db_1!

After running the setup tutorial, this is where I am:

- My two files: [config.json](https://pastebin.com/s9Nb6DzV) and [docker-compose.yml](https://pastebin.com/JK95uXGr)
- An [ls -la and a pwd](https://pastebin.com/FkgBYhc1)
- The [docker-compose up result](https://pastebin.com/dVV6xviw)
- The config.json file remains unaltered (not encrypted) after starting the application
- The [logs output](https://pastebin.com/P8aaUFu5) of both docker containers (app, db)
- Docker [version info](https://pastebin.com/9rMJUNUZ)

I'm available if you need further information.
Thanks for your help,
Author
Owner

@wwboynton commented on GitHub (Mar 27, 2020):

I could be mistaken, but it seems like the first run of the container is creating a directory called config.json instead of a file. It's possible that it's supposed to create a file and then input the default config contents, or perhaps the user is supposed to do it themselves, but either way it seems like completing a successful setup right now means:

```bash
docker-compose up -d
# Wait for it to come up and scaffold the directory/file structure in the mount...
docker-compose stop
rmdir config.json
vi config.json
# input the default config from https://nginxproxymanager.com/setup/, modifying if necessary
docker-compose up -d
```

(pardon any typos in that sample, I hope you get the idea)
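The rmdir/recreate dance above can also be wrapped in a small guard that detects the broken state first. A sketch, assuming the usual cause (Docker creates the bind-mount source as a directory when the host path doesn't exist yet); `fix_config` is an illustrative helper name, and the path is whatever your compose file mounts:

```shell
#!/bin/sh
# Replace a config path that Docker mistakenly created as a directory
# with an empty regular file, so the app can read (and populate) it.
fix_config() {
  cfg="$1"
  if [ -d "$cfg" ]; then
    rmdir "$cfg" || return 1   # fails if anything was written inside
    : > "$cfg"                 # create an empty file in its place
    echo "replaced directory with empty file"
  elif [ -f "$cfg" ]; then
    echo "already a regular file"
  else
    echo "does not exist yet"
  fi
}
```

Run it with the containers stopped, then paste in the default config and bring the stack back up.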

EDIT: For what it's worth, on my Raspi4 running Ubuntu 18.04.4 LTS, I had issues with the suggested DB image. I switched to the official MariaDB image of the same version, changing my root user name to `root` to avoid duplication. I changed that value in the `docker-compose.yml` and again in the `config.json` to use `root` as the user, and ran `docker-compose down` before bringing it back up to purge the old containers. I'm all up and running now. The previous MariaDB container was dying immediately with:

```
db_1   | standard_init_linux.go:211: exec user process caused "exec format error"
```

and I'd prefer to just use the official DB images anyway wherever I can.
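An `exec format error` at container start usually means the image was built for a different CPU architecture than the host (e.g. an amd64-only DB image on a Raspberry Pi). A quick way to check is sketched below; the image name is just an example, and `norm_arch` is a hypothetical helper that reconciles the two naming schemes (`uname -m` says `aarch64`/`x86_64`, Docker says `arm64`/`amd64`):

```shell
#!/bin/sh
# Compare the host architecture with an image's architecture, e.g.:
#   docker image inspect mariadb --format '{{.Architecture}}'   # image arch
#   uname -m                                                    # host arch
# The two report different names for the same thing, so normalise first.
norm_arch() {
  case "$1" in
    x86_64|amd64)   echo amd64 ;;
    aarch64|arm64)  echo arm64 ;;
    armv7l|armhf)   echo arm   ;;
    *)              echo "$1"  ;;
  esac
}
# Example check (requires docker, so shown only as a comment):
#   [ "$(norm_arch "$(uname -m)")" = \
#     "$(norm_arch "$(docker image inspect mariadb --format '{{.Architecture}}')")" ] \
#     || echo "architecture mismatch: expect 'exec format error'"
```

If the two normalized values differ, pull a multi-arch or architecture-specific tag of the image instead.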

Author
Owner

@satanahell commented on GitHub (Mar 27, 2020):

@wwboynton thanks for the reply.

I also saw the config.json directory being created on the first run, but only because I had made a mistake in the docker-compose file. I have repeated the same operation many times without that mistake and everything went like a charm, except for this database connection issue. But to be sure, I tried exactly what you suggested, and I got the same error.

As you mentioned, I will try switching to the official database container, but I'm not very familiar with docker.

Thanks for the advice!

Author
Owner

@lopugit commented on GitHub (Mar 27, 2020):

Wait 10 minutes or 3 hours; it should magically start working.

Author
Owner

@satanahell commented on GitHub (Mar 27, 2020):

@lopugit I already tried this one all night long! ^_^
But with no success so far.

<!-- gh-comment-id:605050046 -->

@satanahell commented on GitHub (Mar 29, 2020):

Hello everyone,

I have what I guess is a similar issue on CentOS 8, but with a different origin:

* Issue: Bad Gateway on every login attempt at http://localhost:81/login or http://172.2.0.3:81/login
* Possible origin: the app_1 logs show "cannot connect (EHOSTUNREACH 172.23.0.2:3306)" while mariadb seems to be running fine on port 3306 in db_1!

After running the setup tutorial, this is where I am:

* My two files: config.json (https://pastebin.com/s9Nb6DzV) and docker-compose.yml (https://pastebin.com/JK95uXGr)
* An ls -la and a pwd (https://pastebin.com/FkgBYhc1)
* The docker-compose up result (https://pastebin.com/dVV6xviw)
* The config.json file remains unaltered (not encrypted) after starting the application
* The logs output of both docker containers, app and db (https://pastebin.com/P8aaUFu5)
* Docker version info (https://pastebin.com/9rMJUNUZ)

Since my last post, I figured out how to log in on each container and run some pings; all is fine there, obviously, but in docker-compose events I get a container exec-die for rproxy_app_1.

I also tried switching the db container to the official image, but I'm not used to Docker and I would like to check that I'm doing it the right way. @wwboynton, can you please post your docker-compose file with the official mariadb image?

I'm still looking for a solution, because my homelab really needs a reverse proxy with a GUI to expose and secure many HTTP/HTTPS web servers (same ports / different internal IPs)!

<!-- gh-comment-id:605619879 -->

@promuta commented on GitHub (Mar 31, 2020):

> I managed to get it running with yobasystems/alpine-mariadb (https://hub.docker.com/r/yobasystems/alpine-mariadb)

Thanks, this also fixed the issue for me.

<!-- gh-comment-id:606831484 -->

@Wonderbox2000 commented on GitHub (Apr 5, 2020):

Hi, I am still having the same issue.
I just tried with yobasystems/alpine-mariadb (https://hub.docker.com/r/yobasystems/alpine-mariadb) => same result in data/logs/error.log:

2020/04/05 06:51:04 [error] 226#226: *2159 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: nginxproxymanager, request: "GET /api/ HTTP/1.1", upstream: "http://127.0.0.1:3000/", host: "127.0.0.1:81"

Can someone please guide me?

So far I have also tried the quick-setup (https://nginxproxymanager.com/#quick-setup) and the full-setup (https://nginxproxymanager.com/setup/), but I have probably missed something.

(screenshot: Capture d'écran 2020-04-05 à 08 57 11)

Running on Ubuntu 18.04

<!-- gh-comment-id:609368850 -->

@miguelwill commented on GitHub (Apr 6, 2020):

Are you sure you have created the configuration file and added it to the NPM image mount?
Compare what you did with the instructions:
https://nginxproxymanager.com/setup/

<!-- gh-comment-id:610062000 -->

@unixbird commented on GitHub (Apr 10, 2020):

Having the same issue here; I tried using a different mysql image, but still get a bad gateway. I'll wait a little bit and see what happens.

<!-- gh-comment-id:612253574 -->

@unixbird commented on GitHub (Apr 11, 2020):

Alright, so it works. What I've found:

  1. Create the config.json before you create the containers and run docker-compose.
  2. In the docker-compose file, use the yobasystems/alpine-mariadb image instead of the default (I'm not sure if this actually affects anything, but I used it and it works, so I'll stick with it).
  3. You definitely need to wait a moment. I noticed that once it works, it has actually put RSA keys into the config.json, so that may be why you need to wait: it is using entropy to generate them.

That's seemingly how mine works now.
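The steps above can be sketched as a single compose file. This is a minimal sketch under the assumptions used in this thread (alpine-mariadb image, a pre-created config.json mounted over /app/config/production.json); the credentials and host paths are illustrative, not the project's official example:

```yaml
# Minimal sketch of the setup described above (illustrative values).
# config.json must exist on the host BEFORE `docker-compose up`,
# otherwise Docker creates that mount point as a directory, not a file.
version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    image: 'yobasystems/alpine-mariadb:latest'
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql
```

On a successful first start the app writes generated keys back into config.json, which matches the observation above that the first login can take a while.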

<!-- gh-comment-id:612286838 -->

@Xantios commented on GitHub (Apr 11, 2020):

So it basically comes down to making a PR to the docs so they explain how to create the json file, point to it in the docker-compose file and have to wait for it to populate?

<!-- gh-comment-id:612392581 -->

@jpmurray commented on GitHub (Apr 11, 2020):

EDIT: Never mind me, @unixbird mentioned it does! I'll just wait!

The full instructions page states that:

> After the first run of the application, the config file will be altered to include generated encryption keys unique to your installation.

Can anyone confirm that the config.json we created gains additional entries once they can log in? I'm hitting the same trouble as everyone (having created my config.json file beforehand, as I am following the full guide), but the file stays exactly the same.

<!-- gh-comment-id:612446417 -->

@jpmurray commented on GitHub (Apr 11, 2020):

Adding my voice to this: with the proposed jc21/mariadb-aria:10.4 image, checking docker ps showed the database container status as Restarting (1) 1 second ago after ~10 seconds. Switching to the alpine-mariadb database solved it instantly.

<!-- gh-comment-id:612491808 -->

@jc21 commented on GitHub (Apr 13, 2020):

Why was the jc21/mariadb-aria:10.4 container restarting though? What errors were you able to see? I'm using this exact image in production (amd64) and don't have problems.

<!-- gh-comment-id:613140375 -->

@jpmurray commented on GitHub (Apr 14, 2020):

> What errors were you able to see?

None that I could see, although I might be missing places to look for said errors; I'm still not entirely versed in Docker's inner workings. What would be the usual place to catch those? I'll go and try to reproduce the problem.

<!-- gh-comment-id:613455401 -->

@delacosta456 commented on GitHub (Oct 6, 2020):

Hi all,

I was also having this issue, and reading other issues (including this one) didn't help, but below is what definitely worked for me:

* It looks like port 81 (maybe after several tries) stays locked or occupied by a process, which makes the app container sometimes stay in a "healthy" state while the database container boots correctly. This means port 81 needs to be freed.

So, a step-by-step WORKAROUND on my Ubuntu:

  1. From the npm folder (where the config and yaml files are), run in a terminal: sudo docker-compose down
  2. Then: sudo kill -9 $(sudo lsof -t -i:81) (or sudo kill -9 $(sudo lsof -t -i:9001) for that port)
  3. Now: sudo docker-compose up -d
  4. IMPORTANT STEP: wait at least 10 to 15 seconds before refreshing your browser on http://127.0.0.1:81

Hope this helps (sorry for my English, I'm French).

<!-- gh-comment-id:704025148 -->

@wimmme commented on GitHub (Dec 3, 2020):

For me, on an RPI 3 running Hypriot, this does not work:

db:
  image: mariadb:latest

but this does:

db:
  image: yobasystems/alpine-mariadb:armhf
<!-- gh-comment-id:737963864 -->

@leaderit commented on GitHub (Jan 15, 2021):

  1. Run:
     docker exec -ti nginx-proxy-manager /bin/bash
     (where 'nginx-proxy-manager' is the name of your container)
  2. Run:
     node index
     and SEE THE ERROR. Usually it is: Error: Cannot parse config file: '/app/config/production.json': SyntaxError: Unexpected token } in JSON
     or something like this
  3. FIX THE FILE OR ERROR
  4. Exit the shell and restart the docker container
  5. PROFIT!
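When the error is a JSON parse failure like the one above, you can validate config.json on the host before restarting the container. A minimal sketch using only python3's standard library; the /tmp path and the config fragment are illustrative, not your real file:

```shell
# Write an illustrative config fragment, then validate it the same way you
# would validate your real config.json; `python3 -m json.tool` exits non-zero
# on a syntax error such as the stray '}' from the error message above.
cat > /tmp/config.json <<'EOF'
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "port": 3306
  }
}
EOF
python3 -m json.tool /tmp/config.json > /dev/null && echo "config.json OK"
```

Run this against your actual config.json path; if it prints an error instead of "config.json OK", fix the file before restarting the container.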
<!-- gh-comment-id:761030503 -->

@sbl05 commented on GitHub (May 2, 2021):

I had the same issues on my Raspberry 4 (armv7l) and solved them by switching to SQLite using the following configuration:

version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: always
    ports:
      - '80:80'
      - '443:443'
      - '81:81'
    environment:
      DB_SQLITE_FILE: "/data/npm.sqlite"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
<!-- gh-comment-id:830840441 -->

@Subline-75 commented on GitHub (May 6, 2021):

What I needed to work this out (bad gateway on the login page):
https://www.youtube.com/watch?v=ZrS3IT7HG2Y

Creating my first SQL user & database: 👍

<!-- gh-comment-id:833949064 -->

@markspivey commented on GitHub (Jun 2, 2021):

> I had the same issues on my Raspberry 4 (armv7l) and solved it by switching to SQLite using the following configuration:
>
> version: "3"
> services:
>   app:
>     image: 'jc21/nginx-proxy-manager:latest'
>     restart: always
>     ports:
>       - '80:80'
>       - '443:443'
>       - '81:81'
>     environment:
>       DB_SQLITE_FILE: "/data/npm.sqlite"
>     volumes:
>       - ./data:/data
>       - ./letsencrypt:/etc/letsencrypt

Thank you. This is the only thing that worked for me. To anyone else messing around with trying to get mariadb working on a Raspberry Pi, stop what you are doing and choose the quoted route instead.

<!-- gh-comment-id:853225714 -->

@wbox commented on GitHub (Jan 21, 2022):

> Hello everyone,
>
> I have what I guess is a similar issue on CentOS 8, but with a different origin:
>
> * Issue: Bad Gateway on every login attempt at http://localhost:81/login or http://172.2.0.3:81/login
> * Possible origin: the app_1 logs show "cannot connect (EHOSTUNREACH 172.23.0.2:3306)" while mariadb seems to be running fine on port 3306 in db_1!
>
> After running the setup tutorial, this is where I am:
>
> * My two files: config.json (https://pastebin.com/s9Nb6DzV) and docker-compose.yml (https://pastebin.com/JK95uXGr)
> * An ls -la and a pwd (https://pastebin.com/FkgBYhc1)
> * The docker-compose up result (https://pastebin.com/dVV6xviw)
> * The config.json file remains unaltered (not encrypted) after starting the application
> * The logs output of both docker containers, app and db (https://pastebin.com/P8aaUFu5)
> * Docker version info (https://pastebin.com/9rMJUNUZ)
>
> I'm available if you need further information. Thanks for your help,

I used this, and the only thing I changed was the mariadb docker image, from 10.4 to latest. It worked on the first try.

<!-- gh-comment-id:1018095164 -->

@tnortman-raspi commented on GitHub (May 7, 2022):

> Ok so now I only had to wait about 10 minutes, interesting!

Same for me! I waited 5 minutes after the initial deployment of the container and was then able to log in.

<!-- gh-comment-id:1120300291 -->

@mrki0620 commented on GitHub (Jun 9, 2022):

This is the only change that worked for me also. Thanks to markspivey.

Peter

<!-- gh-comment-id:1150943037 -->

@goodvandro commented on GitHub (Jan 13, 2023):

Hello,

> I had the same issues on my Raspberry 4 (armv7l) and solved it by switching to SQLite using the following configuration:
>
> version: "3"
> services:
>   app:
>     image: 'jc21/nginx-proxy-manager:latest'
>     restart: always
>     ports:
>       - '80:80'
>       - '443:443'
>       - '81:81'
>     environment:
>       DB_SQLITE_FILE: "/data/npm.sqlite"
>     volumes:
>       - ./data:/data
>       - ./letsencrypt:/etc/letsencrypt

This does not work for me.
Has anyone else tried another solution that works?

<!-- gh-comment-id:1381543895 -->

@goodvandro commented on GitHub (Jan 13, 2023):

I was able to resolve the issue by adding the following parameters in the nginx proxy manager settings.
In my case, I was returning a very large body in the HTTP request, which was causing the problem.

I entered the container, created the file /data/nginx/custom/http.conf, and added the following configuration:

proxy_buffers           8 2m;
proxy_buffer_size       12m;
proxy_busy_buffers_size 12m;
<!-- gh-comment-id:1382072650 -->

@nep2ner commented on GitHub (Feb 5, 2023):

Just waiting a bit worked for me

<!-- gh-comment-id:1417107724 -->

@RobotsAreCrazy commented on GitHub (Feb 13, 2023):

> I was able to resolve the issue by adding the following parameters in the nginx proxy manager settings.
> In my case, I was returning a very large body in the HTTP request, which was causing the problem.
>
> I entered the container, created the file /data/nginx/custom/http.conf, and added the following configuration:
>
> proxy_buffers           8 2m;
> proxy_buffer_size       12m;
> proxy_busy_buffers_size 12m;

Hi, can you give me a rough guide on how to enter the container and make that change? I'm just trying anything in case it works, please.

<!-- gh-comment-id:1428062376 -->

@prakas17 commented on GitHub (Feb 22, 2023):

Lately I am having the same issue. Could someone please point out the correct way to deploy it on a Pi 4 server?
I have tried all the workarounds as suggested, but none of them worked.
Really appreciate it. Thanks.

(screenshot: Bad_gateway)
<!-- gh-comment-id:1440073612 -->

@goodvandro commented on GitHub (Feb 23, 2023):

> Hi, can you give me a rough guide on how to enter the container and make that change?

docker container exec -it <container_name> bash
cd /data/nginx
mkdir custom
nano http.conf

And paste the configuration into http.conf.

<!-- gh-comment-id:1441380784 -->

@prakas17 commented on GitHub (Feb 24, 2023):

@goodvandro Still facing the same error.

(screenshot of the error)

It's driving me crazy and there is no workaround that works. I tried all of these but still get the same persistent error.

What else could I try? Anyone, please advise.

Thanks.

<!-- gh-comment-id:1442672243 -->

@miguelwill commented on GitHub (Feb 24, 2023):

You need to reload nginx or restart the container to pick up the changes.

<!-- gh-comment-id:1442673567 -->

@prakas17 commented on GitHub (Feb 24, 2023):

Yes, I restarted the container after the change.
I also opened a private browser window to reload the login page and tried again, but still the same error.

Do note that my Pi 4 is running from an SSD.

Thanks

<!-- gh-comment-id:1442716510 -->

@goodvandro commented on GitHub (Feb 24, 2023):

> Yes, I restarted the container after the change. I also opened a private browser window to reload the login page and tried again, but still the same error.
>
> Do note that my Pi 4 is running from an SSD.
>
> Thanks

Your problem is different from mine. Let me see your configuration in docker-compose.

<!-- gh-comment-id:1443880648 -->

@prakas17 commented on GitHub (Feb 25, 2023):

Below is my docker-compose.yml:

version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80'   # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81'   # Admin Web Port
      # Add any other Stream port you want to expose
      # - '21:21' # FTP
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
      # Uncomment this if IPv6 is not enabled on your host
      DISABLE_IPV6: 'true'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db

  db:
    image: 'yobasystems/alpine-mariadb:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql

<!-- gh-comment-id:1445004027 -->

@prakas17 commented on GitHub (Feb 25, 2023):

docker logs:-

[2/25/2023] [5:50:52 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:52 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:53 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:53 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:54 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:54 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:55 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:55 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:56 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:56 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:57 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:57 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:58 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:58 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306

[2/25/2023] [5:50:59 AM] [Global ] › ℹ info Manual db configuration already exists, skipping config creation from environment variables

[2/25/2023] [5:50:59 AM] [Global ] › ✖ error connect ECONNREFUSED 172.21.0.2:3306
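(Editor's note: the repeating ECONNREFUSED above is typically just the app container starting before MariaDB is ready to accept connections; the backend keeps retrying, so it can resolve on its own once the DB finishes initialising. If it never does, compose can be told to wait. A sketch, assuming a recent docker compose that supports the long-form depends_on, and assuming the db image ships mysqladmin; credentials and intervals are illustrative:)

```yaml
services:
  app:
    depends_on:
      db:
        condition: service_healthy   # wait for the healthcheck below
  db:
    healthcheck:
      # mysqladmin being present is an assumption about the image; adjust
      # the command and the npm/npm credentials for your setup
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-unpm", "-pnpm"]
      interval: 10s
      timeout: 5s
      retries: 5
```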

<!-- gh-comment-id:1445004181 -->

@prakas17 commented on GitHub (Feb 25, 2023):

I'm getting the error below inside the container:

cat /data/logs/fallback_error.log
2023/02/25 05:48:09 [error] 281#281: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.235, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.1.121:81", referrer: "http://192.168.1.121:81/login"

<!-- gh-comment-id:1445005948 -->

@goodvandro commented on GitHub (Feb 27, 2023):

@prakas17 Check these two points:

  1. Use the docker image "jc21/mariadb-aria:latest" for the DB.
  2. Make sure no other service is using port 3306, which the nginx-proxy-manager database needs.
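(Editor's note: for point 2, a quick way to see whether anything on the host is already listening on the relevant ports. A sketch using ss from iproute2; substitute netstat -tln or lsof -i :3306 if it is not installed:)

```shell
# Report whether each port the compose file maps is already taken on the host.
PORTS="80 81 443 3306"
REPORT=""
for port in $PORTS; do
  if ss -tln 2>/dev/null | grep -q ":$port "; then
    REPORT="$REPORT
port $port is already in use"
  else
    REPORT="$REPORT
port $port looks free"
  fi
done
echo "$REPORT"
```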
<!-- gh-comment-id:1446399458 -->

@prakas17 commented on GitHub (Feb 28, 2023):

Thank you @goodvandro

However, after applying step (1) as you suggested, I don't see any errors or warnings in the docker logs for either the nginx-proxy-manager or the mariadb container.

Looking at /data/logs/fallback_error.log inside nginx-proxy-manager, I can still see the error below:

2023/02/28 09:31:04 [error] 284#284: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.235, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.1.121:81", referrer: "http://192.168.1.121:81/login"
2023/02/28 09:31:04 [error] 284#284: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.235, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.1.121:81", referrer: "http://192.168.1.121:81/login"
2023/02/28 09:31:05 [error] 284#284: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.1.235, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.1.121:81", referrer: "http://192.168.1.121:81/login"

Could you please let me know how to troubleshoot this further?

<!-- gh-comment-id:1447869252 -->

@ErikUden commented on GitHub (Feb 28, 2023):

I am having the same issue after I updated.

<!-- gh-comment-id:1448779135 -->

@goodvandro commented on GitHub (Mar 1, 2023):

@prakas17 Comment out or remove the line DISABLE_IPV6: 'true'.
Make sure no other service is using the ports configured in your docker-compose file.
If the problem persists, share your docker-compose file again, either via a git repository or here in the comments.

<!-- gh-comment-id:1449649333 -->

@prakas17 commented on GitHub (Mar 1, 2023):

@goodvandro

Below is my docker-compose.yml:

version: "3"
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      # These ports are in format <host-port>:<container-port>
      - '80:80'   # Public HTTP Port
      - '443:443' # Public HTTPS Port
      - '81:81'   # Admin Web Port
    environment:
      DB_MYSQL_HOST: "db"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "npm"
      DB_MYSQL_NAME: "npm"
      # Uncomment this if IPv6 is not enabled on your host
      # DISABLE_IPV6: 'true'
    volumes:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
  db:
    #image: 'yobasystems/alpine-mariadb:latest'
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'npm'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'npm'
    volumes:
      - ./data/mysql:/var/lib/mysql

I deleted all the containers and images, then re-deployed using docker-compose.

I also checked my host to make sure no other service is running on 3306 or the other ports.

Sadly, the issue still persists, with the same errors in the UI and in /data/logs/fallback_error.log inside the nginx-proxy-manager container.

Appreciate your assistance on this. Thanks.

<!-- gh-comment-id:1450067923 -->

@LunarLoom24 commented on GitHub (Mar 27, 2023):

I have the same issue with nginx proxy manager. When I try to log in I get the error: Bad Gateway

<!-- gh-comment-id:1485824568 -->

@ErikUden commented on GitHub (Mar 27, 2023):

I have the same issue with nginx proxy manager. When I try to log in I get the error: Bad Gateway

Here's the solution I had from back then:

I have found my error:
I reset my database to yobasystems' maria-db; however, that did not solve it. The issue lay within four files I had:

In nginx/data/mysql/npm there are:
migrations.frm
migrations.ibd
migrations_lock.frm
migrations_lock.ibd

I replaced all of these files with a backup I made of them a while ago.
Now everything works again.

I cannot explain why, but these files got corrupted after listening on the same port as my Pi-hole. For everyone reading my solution, I hope you've made a backup; if not, try deleting these files (only after making a backup of them first).

Good luck!
I hope this helps!
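(Editor's note: the swap described above can be rehearsed safely. The sketch below acts on scratch directories standing in for ./data/mysql/npm and a known-good backup; on the real datadir, stop the db container first, e.g. docker compose stop db.)

```shell
# Rehearsal of the restore described above, in scratch directories standing in
# for ./data/mysql/npm and a backup of the four migration files.
set -eu
DATA_DIR=$(mktemp -d)     # stand-in for ./data/mysql/npm
BACKUP_DIR=$(mktemp -d)   # stand-in for your backup copy
FILES="migrations.frm migrations.ibd migrations_lock.frm migrations_lock.ibd"
for f in $FILES; do
  echo "known-good" > "$BACKUP_DIR/$f"   # pretend backup contents
  echo "corrupted"  > "$DATA_DIR/$f"     # pretend live, corrupted contents
done
for f in $FILES; do
  mv "$DATA_DIR/$f" "$DATA_DIR/$f.corrupt"   # keep the bad copy aside
  cp "$BACKUP_DIR/$f" "$DATA_DIR/$f"         # overwrite with the backup
done
echo "restored: $(cat "$DATA_DIR/migrations.frm")"
```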

<!-- gh-comment-id:1485841658 -->

@ChronoWerX82 commented on GitHub (Mar 29, 2023):

After updating Nginx Proxy Manager to 2.10 I have the same problem: Bad Gateway on login.

<!-- gh-comment-id:1488252363 -->

@ChronoWerX82 commented on GitHub (Mar 29, 2023):

I have the same issue with nginx proxy manager. When I try to log in I get the error: Bad Gateway

Here's the solution I had from back then:

I have found my error: I reset my database to yobasystems' maria-db; however, that did not solve it. The issue lay within four files I had:

In nginx/data/mysql/npm there are: migrations.frm migrations.ibd migrations_lock.frm migrations_lock.ibd

I replaced all of these files with a backup I made of them a while ago. Now everything works again.

I cannot explain why, but these files got corrupted after listening on the same port as my Pi-hole. For everyone reading my solution, I hope you've made a backup; if not, try deleting these files (only after making a backup of them first).

Good luck! I hope this helps!

Oh wow. Delete and restart: same problem. Overwrite the newly created files with the backup made before deleting... now it works, wtf.

<!-- gh-comment-id:1488571005 -->

@ErikUden commented on GitHub (Mar 29, 2023):

@ChronoWerX82 Yes, I have no idea why, but this solves it. I'm so glad this helped! This problem ruined my day a couple of years ago too! Haha.

<!-- gh-comment-id:1488659872 -->

@bvn13 commented on GitHub (Mar 29, 2023):

Hi!
I have the same problem on a fresh install using this compose file:

version: '3.7'

networks:
  nginx-proxy-manager:
    external: true

services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - '80:80'
      - '43013:81'
      - '443:443'
    networks:
      - nginx-proxy-manager
    depends_on:
      - npm-db
    environment:
      DB_MYSQL_HOST: npm-db
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: npm
      DB_MYSQL_PASSWORD: npm
      DB_MYSQL_NAME: npm
      # Uncomment this if IPv6 is not enabled on your host
      #DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

  npm-db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: npm
      MYSQL_DATABASE: npm
      MYSQL_USER: npm
      MYSQL_PASSWORD: npm
    networks:
      - nginx-proxy-manager
    volumes:
      - ./data/mysql:/var/lib/mysql

Any ideas?

mariadb logs:

MySQL init process done. Ready for start up.

exec /usr/bin/mysqld --user=mysql --console --skip-name-resolve --skip-networking=0
2023-03-29 17:25:33 0 [Note] /usr/bin/mysqld (mysqld 10.4.15-MariaDB) starting as process 1 ...
2023-03-29 17:25:33 0 [ERROR] mysqld: File '/var/lib/mysql/aria_log_control' not found (Errcode: 13 "Permission denied")
2023-03-29 17:25:33 0 [ERROR] mysqld: Got error 'Can't open file' when trying to use aria control file '/var/lib/mysql/aria_log_control'
2023-03-29 17:25:33 0 [ERROR] Plugin 'Aria' init function returned error.
2023-03-29 17:25:33 0 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
2023-03-29 17:25:33 0 [Note] Plugin 'InnoDB' is disabled.
2023-03-29 17:25:33 0 [Note] Plugin 'FEEDBACK' is disabled.
2023-03-29 17:25:33 0 [ERROR] Could not open mysql.plugin table. Some plugins may be not loaded
2023-03-29 17:25:33 0 [ERROR] Failed to initialize plugins.
2023-03-29 17:25:33 0 [ERROR] Aborting
[i] pre-init.d - processing /scripts/pre-init.d/01_secret-init.sh
[i] mysqld already present, skipping creation
[i] MySQL directory already present, skipping creation
2023-03-29 17:25:34 0 [Note] /usr/bin/mysqld (mysqld 10.4.15-MariaDB) starting as process 1 ...
2023-03-29 17:25:34 0 [Note] Plugin 'InnoDB' is disabled.
2023-03-29 17:25:34 0 [Note] Plugin 'FEEDBACK' is disabled.
2023-03-29 17:25:34 0 [Note] Server socket created on IP: '::'.
2023-03-29 17:25:34 0 [Warning] 'user' entry '@3726bb8bb89f' ignored in --skip-name-resolve mode.
2023-03-29 17:25:34 0 [Warning] 'proxies_priv' entry '@% root@3726bb8bb89f' ignored in --skip-name-resolve mode.
2023-03-29 17:25:34 0 [Note] Reading of all Master_info entries succeeded
2023-03-29 17:25:34 0 [Note] Added new Master_info '' to hash table
2023-03-29 17:25:34 0 [Note] /usr/bin/mysqld: ready for connections.
Version: '10.4.15-MariaDB'  socket: '/run/mysqld/mysqld.sock'  port: 3306  MariaDB Server
2023-03-29 17:25:39 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
2023-03-29 17:25:40 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
2023-03-29 17:25:41 5 [Warning] Aborted connection 5 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
2023-03-29 17:25:42 6 [Warning] Aborted connection 6 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
2023-03-29 17:25:43 7 [Warning] Aborted connection 7 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)

NPM logs:

❯ Starting nginx ...
❯ Starting backend ...
s6-rc: info: service frontend successfully started
s6-rc: info: service nginx successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
[3/29/2023] [5:25:33 PM] [Global   ] › ℹ  info      Using MySQL configuration
[3/29/2023] [5:25:33 PM] [Global   ] › ℹ  info      Creating a new JWT key pair...
[3/29/2023] [5:25:38 PM] [Global   ] › ℹ  info      Wrote JWT key pair to config file: /data/keys.json
[3/29/2023] [5:25:39 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:40 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:41 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:42 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:43 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:44 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:45 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
[3/29/2023] [5:25:46 PM] [Global   ] › ✖  error     Packets out of order. Got: 1 Expected: 0
<!-- gh-comment-id:1489015281 -->

@ErikUden commented on GitHub (Mar 29, 2023):

@bvn13 As I've described, you should switch to Yobasystem's MariaDB! Fixed it for me.

<!-- gh-comment-id:1489318716 -->

@bvn13 commented on GitHub (Mar 29, 2023):

@ErikUden It did not help me:

❯ Starting backend ...
❯ Starting nginx ...
s6-rc: info: service frontend successfully started
s6-rc: info: service backend successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
[3/29/2023] [9:15:52 PM] [Global   ] › ℹ  info      Using MySQL configuration
[3/29/2023] [9:15:53 PM] [Global   ] › ✖  error     ER_HOST_NOT_PRIVILEGED: Host '172.19.0.3' is not allowed to connect to this MariaDB server
[3/29/2023] [9:15:54 PM] [Global   ] › ✖  error     ER_HOST_NOT_PRIVILEGED: Host '172.19.0.3' is not allowed to connect to this MariaDB server
[3/29/2023] [9:15:55 PM] [Global   ] › ✖  error     ER_HOST_NOT_PRIVILEGED: Host '172.19.0.3' is not allowed to connect to this MariaDB server
[i] mysqld not found, creating....
[i] MySQL directory already present, skipping creation
2023-03-29 21:15:51 0 [Note] Starting MariaDB 10.6.12-MariaDB source revision 4c79e15cc3716f69c044d4287ad2160da8101cdc as process 1
2023-03-29 21:15:51 0 [Note] InnoDB: The first data file './ibdata1' did not exist. A new tablespace will be created!
2023-03-29 21:15:51 0 [Note] InnoDB: Compressed tables use zlib 1.2.13
2023-03-29 21:15:51 0 [Note] InnoDB: Using transactional memory
2023-03-29 21:15:51 0 [Note] InnoDB: Number of pools: 1
2023-03-29 21:15:51 0 [Note] InnoDB: Using crc32 + pclmulqdq instructions
2023-03-29 21:15:51 0 [Note] mysqld: O_TMPFILE is not supported on /var/tmp (disabling future attempts)
2023-03-29 21:15:51 0 [Note] InnoDB: Using Linux native AIO
2023-03-29 21:15:51 0 [Note] InnoDB: Initializing buffer pool, total size = 134217728, chunk size = 134217728
2023-03-29 21:15:51 0 [Note] InnoDB: Completed initialization of buffer pool
2023-03-29 21:15:51 0 [Note] InnoDB: Setting file './ibdata1' size to 12 MB. Physically writing the file full; Please wait ...
2023-03-29 21:15:51 0 [Note] InnoDB: File './ibdata1' size is now 12 MB.
2023-03-29 21:15:51 0 [Note] InnoDB: Setting log file ./ib_logfile101 size to 100663296 bytes
2023-03-29 21:15:51 0 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2023-03-29 21:15:51 0 [Note] InnoDB: New log file created, LSN=10313
2023-03-29 21:15:51 0 [Note] InnoDB: Doublewrite buffer not found: creating new
2023-03-29 21:15:51 0 [Note] InnoDB: Doublewrite buffer created
2023-03-29 21:15:51 0 [Note] InnoDB: 128 rollback segments are active.
2023-03-29 21:15:51 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2023-03-29 21:15:51 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2023-03-29 21:15:51 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2023-03-29 21:15:51 0 [Note] InnoDB: 10.6.12 started; log sequence number 0; transaction id 3
2023-03-29 21:15:51 0 [Note] Plugin 'FEEDBACK' is disabled.
2023-03-29 21:15:51 0 [Note] Server socket created on IP: '0.0.0.0'.
2023-03-29 21:15:51 0 [Note] Server socket created on IP: '::'.
2023-03-29 21:15:51 0 [Warning] 'user' entry '@3726bb8bb89f' ignored in --skip-name-resolve mode.
2023-03-29 21:15:51 0 [Warning] 'proxies_priv' entry '@% root@3726bb8bb89f' ignored in --skip-name-resolve mode.
2023-03-29 21:15:51 0 [ERROR] Incorrect definition of table mysql.event: expected column 'definer' at position 3 to have type varchar(, found type char(141).
2023-03-29 21:15:51 0 [ERROR] mysqld: Event Scheduler: An error occurred when initializing system tables. Disabling the Event Scheduler.
2023-03-29 21:15:51 0 [Note] /usr/bin/mysqld: ready for connections.
Version: '10.6.12-MariaDB'  socket: '/run/mysqld/mysqld.sock'  port: 3306  MariaDB Server
2023-03-29 21:15:53 3 [Warning] Aborted connection 3 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
2023-03-29 21:15:54 4 [Warning] Aborted connection 4 to db: 'unconnected' user: 'unauthenticated' host: '172.19.0.3' (This connection closed normally without authentication)
version: '3.7'

networks:
  nginx-proxy-manager:
    external: true

services:
  npm:
    image: 'jc21/nginx-proxy-manager:latest'
    container_name: nginx-proxy-manager
    restart: unless-stopped
    ports:
      - '80:80'
      - '43013:81'
      - '443:443'
    networks:
      - nginx-proxy-manager
    depends_on:
      - npm-db
    environment:
      DB_MYSQL_HOST: npm-db
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: npm
      DB_MYSQL_PASSWORD: npm
      DB_MYSQL_NAME: npm
      # Uncomment this if IPv6 is not enabled on your host
      #DISABLE_IPV6: 'true'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt

  npm-db:
    image: 'yobasystems/alpine-mariadb:latest'
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: npm
      MYSQL_DATABASE: npm
      MYSQL_USER: npm
      MYSQL_PASSWORD: npm
    networks:
      - nginx-proxy-manager
    volumes:
      - ./data/mysql:/var/lib/mysql
<!-- gh-comment-id:1489338836 -->
Author
Owner

@Bolex80 commented on GitHub (Apr 1, 2023):

Has anyone managed to fix the issue?
I tried changing from yobasystems/alpine-mariadb:latest to jc21/mariadb-aria:latest, and vice versa.
I still get a bad gateway when logging in.
My services are working correctly, but I cannot add anything new.
I hope someone has some other option I could try.
Unfortunately, my last snapshot already has this fault, so I have nowhere to revert to that works correctly.
Please help!

<!-- gh-comment-id:1492968350 -->
Author
Owner

@DonBamboo commented on GitHub (Apr 2, 2023):

Whew! After so many hours of figuring out what the problem is, my solution is to use npm version 2.9.18. If you use latest then it returns an error; I'm not sure why.

```
version: "3.8"

services:
  mariadb:
    container_name: MariaDB
    image: "jc21/mariadb-aria:latest"
    restart: always
    healthcheck:
      test: mysqladmin ping -h mariadb --password=${MYSQL_ROOT_PASSWORD}
      interval: 1s
      retries: 15
    environment:
      - MYSQL_ROOT_PASSWORD
      - MYSQL_DATABASE
      - MYSQL_USER
      - MYSQL_PASSWORD
    volumes:
      - ./data/mysql:/var/lib/mysql
    networks:
      - nginx_network

  nginx:
    container_name: Nginx-Proxy-Manager
    image: jc21/nginx-proxy-manager:2.9.18
    restart: always
    ports:
      - "80:80"
      - "81:81"
      - "443:443"
    environment:
      - DB_MYSQL_HOST
      - DB_MYSQL_PORT
      - DB_MYSQL_USER
      - DB_MYSQL_PASSWORD
      - DB_MYSQL_NAME
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    networks:
      - nginx_network

networks:
  nginx_network:
    external: true
```
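One refinement worth noting: the compose file above defines a healthcheck for mariadb but nothing consumes it, so NPM can still start before the database is ready. A hedged fragment (assuming a Compose version that supports `depends_on` with `condition`, i.e. the modern Compose Spec or file format 2.1) that gates NPM's startup on the healthcheck:

```
# docker-compose.yaml fragment (sketch): start NPM only once mariadb is healthy
  nginx:
    image: jc21/nginx-proxy-manager:2.9.18
    depends_on:
      mariadb:
        condition: service_healthy
```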

<!-- gh-comment-id:1493255535 -->
Author
Owner

@Sonnenbrand commented on GitHub (Apr 2, 2023):

SOLUTION:
Hi all, I just want to share my solution for the Bad Gateway login issue after updating to v2.10:
I had a single /data directory for both nginx and mysql in my docker-compose.yml. I created a second data_db folder, copied the /data/mysql folder into it, and changed the yml file. Restarted the containers and it worked.

Same here, after update to 2.10.2 (from 2.19.X) bad login error with this in the fallback_error.log

*107 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.178.127, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.178.111:81", referrer: "http://192.168.178.111:81/login"

I assume the issue is that it wants to connect to 127.0.0.1:3000, which is refused. I tried it on the host machine as well as inside the container, and it does not work either way. So that seems to be the issue here, right?
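A quick way to confirm that diagnosis is to probe the backend port from inside the container and read its log. The container name `nginx-proxy-manager` and the presence of `curl` in the image are assumptions; substitute your own names:

```shell
# If the Node backend on port 3000 is down, the admin UI's nginx
# has nothing to proxy to and returns 502 Bad Gateway on login.
docker exec nginx-proxy-manager sh -c \
  'curl -fsS http://127.0.0.1:3000/ >/dev/null 2>&1 && echo "backend up" || echo "backend down"'

# The backend usually dies because it cannot reach the database;
# its log should show the underlying error (e.g. ER_HOST_NOT_PRIVILEGED):
docker logs nginx-proxy-manager 2>&1 | grep -iE 'error|mysql' | tail -n 20
```

If the first command prints "backend down", the Bad Gateway is a symptom and the database connection error in the log is the thing to fix.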

<!-- gh-comment-id:1493265983 -->
Author
Owner

@Bolex80 commented on GitHub (Apr 2, 2023):

I was able to fix it following these instructions [HERE](https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2774#issuecomment-1490100266)

<!-- gh-comment-id:1493379714 -->
Author
Owner

@FroggMaster commented on GitHub (Apr 6, 2023):

The quickest/easiest workaround for this that doesn't require direct config changes is to change the permissions on the mysql folder:
chmod 777 -R <mysql folder>

Permissions can be adjusted back to 755 once the issue has been resolved.
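As a sketch of that workaround (the bind-mount path `./data/mysql` is an assumption from the default compose examples; adjust to yours):

```shell
# Loosen permissions so the mariadb container can write its data directory,
# then restart the stack.
chmod -R 777 ./data/mysql
docker-compose restart

# Once login works again, tighten permissions back:
chmod -R 755 ./data/mysql
```

Note that 777 is a blunt instrument; matching the container's mysql UID/GID with chown (as suggested further down the thread) is the tidier fix.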

<!-- gh-comment-id:1498795197 -->
Author
Owner

@TokugawaHeavyIndustries commented on GitHub (Apr 12, 2023):

This randomly cropped up for me this week. Weird. @FroggMaster 's chmod recommendation fixed it.

<!-- gh-comment-id:1504447886 -->
Author
Owner

@manuelmartin-developer commented on GitHub (Apr 27, 2023):

The quickest/easiest workaround for this that doesn't require direct config changes is to change the permissions on the mysql folder: chmod 777 -R <mysql folder>

Permissions can be adjusted back to 755 once the issue has been resolved.

This worked!

Thanks to @FroggMaster

<!-- gh-comment-id:1526206086 -->
Author
Owner

@FroggMaster commented on GitHub (Apr 30, 2023):

I ended up having further issues with my MariaDB, so I converted it to a SQLite DB. This resolved both being unable to log in and an "npm user not found" startup error I was facing due to MariaDB. So far I've found the docker configuration simpler and SQLite an overall easier DB format to manage.

I'll share an easy step by step that can be followed to do the same.

  1. First, prepare the prerequisites. You need the following installed:
    mysqldump, sqlite3 and the script mysql2sqlite
    a) On FreeBSD the packages can be installed via:
    pkg install mysqldump sqlite3
    b) On Ubuntu/Debian they can be installed via:
    apt-get install sqlite3 mariadb-client (mysqldump ships in the mariadb-client package rather than as a standalone package)

  2. Download the mysql2sqlite script.
    git clone https://github.com/dumblob/mysql2sqlite.git

  3. Export your MariaDB MYSQL Database from your NPM DB Container
    docker exec -it [db-container-name] mysqldump --user=[mysql-user] --password=[mysql-password] [mysql-db-name] -h 127.0.0.1 > npm-export.sql

  4. Convert the .SQL file to a .SQLite file
    ./mysql2sqlite npm-export.sql | sqlite3 database.sqlite

  5. Stop the NPM / DB Containers if they're running.
    docker-compose stop [container-name]

  6. Copy the database.sqlite file into your container's persistent storage location. For me and most that followed the default configurations this will be the data directory.

  7. Update the docker-compose.yaml and remove every DB_MYSQL_* environment entry.

  8. OPTIONAL STEP: Update the docker-compose.yaml and add the following environment entry
    DB_SQLITE_FILE: "/data/database.sqlite"

Note: This step is OPTIONAL as /data/database.sqlite is the default location NPM will read the database file from. You only really need this line in the YAML if you want to relocate the database.sqlite file. If so, you can adjust the above path to something else.

  9. Start your NPM container
    docker-compose up -d

  10. Check the docker logs for the following line to validate the SQLite file is being used: Using Sqlite: /data/database.sqlite. This can be done easily with the following command:
    docker compose logs | grep -i database.sqlite

  11. Rejoice, everything should hopefully be working now.

<!-- gh-comment-id:1528968487 -->
Author
Owner

@alamoudimoh commented on GitHub (May 12, 2023):

three years later and still users have to go through this again and read many articles!!! with due respect what are the developers really doing? is this develop as an open source project to ease the life of people and to make them suffer!!!!

<!-- gh-comment-id:1545643734 -->
Author
Owner

@ErikUden commented on GitHub (May 12, 2023):

three years later and still users have to go through this again and read many articles!!! with due respect what are the developers really doing? is this develop as an open source project to ease the life of people and to make them suffer!!!!

Please. The developers here work for free. It is an issue, yes. Software is complex, it functions differently on every device: ERRORS HAPPEN! I understand your frustration, do not let it out on the people who, without anyone asking them to, provide their time and effort for FREE. Sure, without these developers you would not have this error, because without these developers you wouldn't even be able to complain about it in the first place because the software would not exist.

Let out your frustration on someone / something else, not these wonderful human beings who brought us the NGINX Proxy Manager.

<!-- gh-comment-id:1545647858 -->
Author
Owner

@delacosta456 commented on GitHub (May 12, 2023):

Totally agreed with @ErikUden

<!-- gh-comment-id:1545904778 -->
Author
Owner

@alamoudimoh commented on GitHub (May 12, 2023):

three years later and still users have to go through this again and read many articles!!! with due respect what are the developers really doing? is this develop as an open source project to ease the life of people and to make them suffer!!!!

Please. The developers here work for free. It is an issue, yes. Software is complex, it functions differently on every device: ERRORS HAPPEN! I understand your frustration, do not let it out on the people who, without anyone asking them to, provide their time and effort for FREE. Sure, without these developers you would not have this error, because without these developers you wouldn't even be able to complain about it in the first place because the software would not exist.

Let out your frustration on someone / something else, not these wonderful human beings who brought us the NGINX Proxy Manager.

no one is disagreeing on that fact, i do appreciate every single second you guys spent, however, we cannot focus of new feature development only and forget the repeated issues! i spent 4 hours just searching the old issues for the solution, and it could be seconds if that was listed as a known issues that might occur at the installation guide, since it is reported more frequently.

BTW...i am an IT person specialized in IT GRC, and i understand developers sometimes focus on something and forgets to mention this known error somewhere, where people would spend time trying to solve it.

<!-- gh-comment-id:1546100086 -->
Author
Owner

@ErikUden commented on GitHub (May 12, 2023):

three years later and still users have to go through this again and read many articles!!! with due respect what are the developers really doing? is this develop as an open source project to ease the life of people and to make them suffer!!!!

Please. The developers here work for free. It is an issue, yes. Software is complex, it functions differently on every device: ERRORS HAPPEN! I understand your frustration, do not let it out on the people who, without anyone asking them to, provide their time and effort for FREE. Sure, without these developers you would not have this error, because without these developers you wouldn't even be able to complain about it in the first place because the software would not exist.
Let out your frustration on someone / something else, not these wonderful human beings who brought us the NGINX Proxy Manager.

no one is disagreeing on that fact, i do appreciate every single second you guys spent, however, we cannot focus of new feature development only and forget the repeated issues! i spent 4 hours just searching the old issues for the solution, and it could be seconds if that was listed as a known issues that might occur at the installation guide, since it is reported more frequently.

BTW...i am an IT person specialized in IT GRC, and i understand developers sometimes focus on something and forgets to mention this known error somewhere, where people would spend time trying to solve it.

Then let me put what you wanted to say in more constructive words:

Hey, I've encountered this issue many times and each time I do I spend hours searching for the solution. Could the following &lt;steps to fix error&gt; be listed on the "common errors" section of the NGINX Proxy Manager? I think this would save a lot of time for everyone who might also encounter this error in the future! Thanks.

Just write it like that! If you simply attack developers and question whether they deliberately make software in order to cause suffering, no one will take you seriously.

<!-- gh-comment-id:1546129560 -->
Author
Owner

@alamoudimoh commented on GitHub (May 12, 2023):

three years later and still users have to go through this again and read many articles!!! with due respect what are the developers really doing? is this develop as an open source project to ease the life of people and to make them suffer!!!!

Please. The developers here work for free. It is an issue, yes. Software is complex, it functions differently on every device: ERRORS HAPPEN! I understand your frustration, do not let it out on the people who, without anyone asking them to, provide their time and effort for FREE. Sure, without these developers you would not have this error, because without these developers you wouldn't even be able to complain about it in the first place because the software would not exist.
Let out your frustration on someone / something else, not these wonderful human beings who brought us the NGINX Proxy Manager.

no one is disagreeing on that fact, i do appreciate every single second you guys spent, however, we cannot focus of new feature development only and forget the repeated issues! i spent 4 hours just searching the old issues for the solution, and it could be seconds if that was listed as a known issues that might occur at the installation guide, since it is reported more frequently.
BTW...i am an IT person specialized in IT GRC, and i understand developers sometimes focus on something and forgets to mention this known error somewhere, where people would spend time trying to solve it.

Then let me put what you wanted to say in more constructive words:

Hey, I've encountered this issue many times and each time I do I spend hours searching for the solution. Could the following be listed on the "common errors" section of the NGINX Proxy Manager? I think this would save a lot of time for everyone who might also encounter this error in the future! Thanks.

Just write it like that! If you simply attack developers and question whether they deliberately make software in order to cause suffering, no one will take you seriously.

Totally agree.

My bad!

<!-- gh-comment-id:1546207942 -->
Author
Owner

@ademalidurmus commented on GitHub (Jun 12, 2023):

I had the same problem and resolved it by updating the owner of the mysql data directory: chown -R 100:101 ./data/mysql.

[root@dev-2-183593 data]# pwd
/projects/nginxproxymanager/data
[root@dev-2-183593 data]# chown -R 100:101 mysql/
[root@dev-2-183593 data]# ls -al
total 4
drwxr-xr-x. 8 root root  127 Jun 12 17:34 .
drwxr-xr-x. 4 root root   63 Jun 12 17:33 ..
drwxr-xr-x. 2 root root    6 Jun 12 17:25 access
drwxr-xr-x. 2 root root    6 Jun 12 17:25 custom_ssl
-rw-r--r--. 1 root root 2190 Jun 12 17:26 keys.json
drwxr-xr-x. 2 root root    6 Jun 12 17:25 letsencrypt-acme-challenge
drwxr-xr-x. 2 root root  120 Jun 12 17:39 logs
drwxr-xr-x. 5  100  101  154 Jun 12 17:34 mysql
drwxr-xr-x. 9 root root  130 Jun 12 17:25 nginx
<!-- gh-comment-id:1587794251 -->
Author
Owner

@Antassium commented on GitHub (Aug 3, 2023):

FYI!!!
For anyone who was never helped by any of these solutions:

I removed the database section entirely, as it's utterly unnecessary and poorly functioning.

Here's the docker-compose.yaml file I used; as always, it simply got the proxy manager to let me log in with the default credentials:

```
version: "2.1"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    container_name: nginx-proxy
    volumes:
      - ./data/app:/data
      - ./letsencrypt:/etc/letsencrypt
    ports:
      - 80:80
      - 443:443
      - 81:81
    restart: unless-stopped
```

<!-- gh-comment-id:1664024692 -->
Author
Owner

@wimmme commented on GitHub (Aug 3, 2023):

FYI!!! For anyone who was never helped by any of these solutions:

I removed the database section entirely as it's utterly unnecessary and poory functioning.

Here's the docker-compose.yaml file I used and it worked as it always does to simply get the proxy manager to allow me to login with default credentials:

version: "2.1" services: app: image: jc21/nginx-proxy-manager:latest container_name: nginx-proxy volumes: - ./data/app:/data - ./letsencrypt:/etc/letsencrypt ports: - 80:80 - 443:443 - 81:81 restart: unless-stopped

Just a question: the workaround (the mysql data directory owner update, chown -R 100:101 ./data/mysql) did work for me; the problem is I have to redo it every time NPM updates.

If we can eliminate the database completely, that would solve the problem too, of course :-)
Did you change from DB to non-DB config without losing data/config?

EDIT: probably have to use this procedure: https://github.com/NginxProxyManager/nginx-proxy-manager/discussions/1529#migrate-mariadb-to-sqlite-dbeaver

<!-- gh-comment-id:1664125882 -->
Author
Owner

@MaPaLo76 commented on GitHub (Sep 3, 2023):

> :2.9.18

this worked also for me. Thanks.

<!-- gh-comment-id:1704412074 -->
Author
Owner

@deminart commented on GitHub (Sep 14, 2023):

I suffered for a couple of days...
I tried everything that was written here; nothing helped...
Changing the rights with chmod 777 -R <mysql folder> helped for the moment, but after restarting everything broke again...
In general, here is what helped on my configuration:
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal

Maybe it will help someone)
The process is not complicated

Deleted all associated containers, volumes, networks and directories...

**Then I recreated everything anew with this configuration.**

cd /home/containers/npm/
nano docker-compose.yml

version: "3.8"
services:
  app:
    container_name: nginx-proxy-manager
    image: jc21/nginx-proxy-manager
    hostname: nginx-proxy-manager
    restart: unless-stopped
    ports:
      # Public HTTP Port:
      - 80:80
      # Public HTTPS Port:
      - 443:443
      # Admin Web Port:
      - 81:81
    environment:
      # Uncomment this if IPv6 is not enabled on your host
      DISABLE_IPV6: 'true'
    volumes:
      # Make sure this config.json file exists as per instructions below:
      - ./config.json:/app/config/production.json
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
    depends_on:
      - db
    healthcheck:
      test: ["CMD", "/bin/check-health"]
      interval: 10s
      timeout: 3s
  db:
    container_name: proxy-mariadb
    image: yobasystems/alpine-mariadb:10.5.11
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: 'SOMETHING'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'proxyManager'
      MYSQL_PASSWORD: 'SOMETHING'
    volumes:
      - ./data/mysql:/var/lib/mysql

networks:
  default:
      name: proxy

Save the file.
Run the following command.

nano config.json

{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "proxyManager",
    "password": "SOMETHING",
    "port": 3306
  }
}

(Note: JSON does not allow comments; only change the password to match the one you put in the `docker-compose.yml` file.)

Save the file and run:

docker-compose up -d

only after that I was able to log in)

<!-- gh-comment-id:1719199977 -->
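Since JSON does not allow comments, a quick sanity check that a config.json like the one above actually parses before starting the stack (a sketch using Python's stdlib; the values mirror the example in the comment):

```python
import json

# Values below mirror the example config.json; json.loads() would raise
# json.JSONDecodeError if e.g. a '#' comment slipped into the file.
config_text = """
{
  "database": {
    "engine": "mysql",
    "host": "db",
    "name": "npm",
    "user": "proxyManager",
    "password": "SOMETHING",
    "port": 3306
  }
}
"""
cfg = json.loads(config_text)
print(cfg["database"]["host"])
```

In practice, point `json.load()` at the real file (e.g. `json.load(open("config.json"))`) from the compose directory.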
Author
Owner

@ademalidurmus commented on GitHub (Sep 15, 2023):

I'm using this version; I've removed the database part, and it's still working fine. When you don't define any database service, it will use the SQLite database. If you're trying to do a fresh install, you can choose this option. If you already have an installation, you can migrate by following @wimmme's suggestion here: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/310#issuecomment-1664125882

version: '3.8'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
<!-- gh-comment-id:1721343388 -->
Author
Owner

@silkyclouds commented on GitHub (Jan 20, 2024):

Same problem here. I already had it once but can't remember how I fixed it...

nginx updated last night, here is what I can see in the logs :

2024/01/20 08:35:30 [error] 185#185: *3 connect() failed (111: Connection refused) while connecting to upstream, client: 192.168.3.6, server: nginxproxymanager, request: "POST /api/tokens HTTP/1.1", upstream: "http://127.0.0.1:3000/tokens", host: "192.168.3.19:81", referrer: "http://192.168.3.19:81/login"

On the other hand, npm starts and redirects the traffic as it should. I can reach all my web services.

I've recovered backups, same thing. - bad gateway at login screen.

I'm NOT using any separate database to run NPM, but the integrated SQLite DB:

❯ Starting nginx ...
❯ Starting backend ...
[1/20/2024] [8:34:59 AM] [Global   ] › ℹ  info      Using Sqlite: /data/database.sqlite
[1/20/2024] [8:35:02 AM] [Migrate  ] › ℹ  info      Current database version: none

I will try to go back to the previous version to see if it helps... but this problem happening from time to time makes me less confident in NPM, and I think I'm just gonna move to Traefik.

<!-- gh-comment-id:1901872602 -->
Author
Owner

@goppinath commented on GitHub (Jan 21, 2024):

I am also using SQLite and getting the same error as @silkyclouds saying Bad Gateway. I have reverted to the image jc21/nginx-proxy-manager:2.10.4 and I can log in again.

<!-- gh-comment-id:1902559600 -->
Author
Owner

@silkyclouds commented on GitHub (Jan 21, 2024):

> I am also using SQLite and getting the same error as @silkyclouds saying Bad Gateway. I have reverted to the image jc21/nginx-proxy-manager:2.10.4 and I can log in again.

Well I guess a downgrade is the only thing to go for, for now... Thanks for letting us know a downgrade worked for you !

<!-- gh-comment-id:1902563352 -->
Author
Owner

@silkyclouds commented on GitHub (Jan 21, 2024):

so I did downgrade to 2.10.4 as you recommended @goppinath, but unfortunately I'm still getting bad gateway at login.

and I also have some env variable issue now, as I can see the backend is not starting at all and the logs loop on this:

❯ Starting backend ...
node: --openssl-legacy-provider is not allowed in NODE_OPTIONS
❯ Starting backend ...
node: --openssl-legacy-provider is not allowed in NODE_OPTIONS
<!-- gh-comment-id:1902564438 -->
Author
Owner

@skatespeare commented on GitHub (Jan 21, 2024):

> so I did downgrade to 2.10.4 as you recommended @goppinath but unfortunately I'm still getting bad gateway at login.
>
> and I also have some env variable issue now as I can see the backend is not starting at all and the logs loop on this:
>
> ❯ Starting backend ...
> node: --openssl-legacy-provider is not allowed in NODE_OPTIONS
> ❯ Starting backend ...
> node: --openssl-legacy-provider is not allowed in NODE_OPTIONS

Removing NODE_OPTIONS from the env variables did the trick for me. Not sure if it broke anything else, but Let's Encrypt still works.

<!-- gh-comment-id:1902577517 -->
Author
Owner

@goppinath commented on GitHub (Jan 21, 2024):

@silkyclouds I have two NPM instances, one on Debian 12 as a VPS and the other on Raspberry Pi OS (based on Debian 12). The VPS one gave the issue and the RPi one is running without any issue. Maybe try downgrading one more version as a workaround. I am seriously thinking of switching to a more stable reverse proxy manager and/or making weekly incremental backups too.

<!-- gh-comment-id:1902606358 -->
Author
Owner

@goppinath commented on GitHub (Jan 21, 2024):

Thank you @jc21 I have updated to version 2.11.1 which was released just an hour ago and this has resolved my issue. @silkyclouds @skatespeare Give it a try. Let's appreciate the developers and support the project.

<!-- gh-comment-id:1902613130 -->
Author
Owner

@silkyclouds commented on GitHub (Jan 21, 2024):

> @silkyclouds I have two NPM instances one on Debian 12 as a VPS and the other on the Raspberry Pi OS based on Debian 12. The VPS one gave the issue and the RPi one is running without any issue. Maybe try downgrading one more version as a workaround. I am seriously thinking of switching to a more stable reverse proxy manager and/or making the weekly incremental backup too.

well, I also have two broken instances: one running in a datacenter under Debian, and the one I just fixed using @skatespeare's workaround (downgrading + removing the env variable that refers to the legacy provider).

Now I just need to get the Debian one running, but the same trick doesn't seem to help at all; it isn't starting anymore. :)

By the way, you could use the docker backup script I wrote if you plan to back up your docker containers ;)

<!-- gh-comment-id:1902635563 -->
Author
Owner

@silkyclouds commented on GitHub (Jan 21, 2024):

> Thank you @jc21 I have updated to version 2.11.1 which was released just an hour ago and this has resolved my issue. @silkyclouds @skatespeare Give it a try. Let's appreciate the developers and support the project.

Nope, as soon as I pull the latest again, I now have a lot of other cert renewal issues. I guess it's because I removed the legacy thing...

<!-- gh-comment-id:1902645955 -->
Author
Owner

@andreaswilli commented on GitHub (Jan 21, 2024):

Just encountered this issue with version 2.11.0. Upgrading to 2.11.1 fixed the issue for me.

<!-- gh-comment-id:1902757006 -->
Author
Owner

@DominikRoB commented on GitHub (Sep 3, 2024):

Encountered this error message as well after months of running without changes to any config or file, but a quick compose down / compose up fixed it.

<!-- gh-comment-id:2325711842 -->
Author
Owner

@MarkRotNF commented on GitHub (Sep 8, 2024):

I also had a gateway error.

docker ps showed that "nginxproxymanager-db-1" (jc21/mariadb-aria:latest) is "restarting" (in a loop).

docker-compose up (without -d) returned:

app-1  | ❯ Starting backend ...
app-1  | ❯ Starting nginx ...
db-1   | 2024-09-08 10:59:54 0 [Note] Starting MariaDB 10.11.5-MariaDB source revision 7875294b6b74b53dd3aaa723e6cc103d2bb47b2c as process 1
db-1   | Cannot find checkpoint record at LSN (1,0x361f80)
db-1   | 2024-09-08 10:59:54 0 [ERROR] mysqld: Aria recovery failed. Please run aria_chk -r on all Aria tables (*.MAI) and delete all aria_log.######## files
db-1   | 2024-09-08 10:59:54 0 [ERROR] Plugin 'Aria' registration as a STORAGE ENGINE failed.
db-1   | 2024-09-08 10:59:54 0 [Note] Plugin 'InnoDB' is disabled.
db-1   | 2024-09-08 10:59:54 0 [Note] Plugin 'FEEDBACK' is disabled.
db-1   | 2024-09-08 10:59:54 0 [ERROR] Could not open mysql.plugin table: "Unknown storage engine 'Aria'". Some plugins may be not loaded
db-1   | 2024-09-08 10:59:54 0 [ERROR] Failed to initialize plugins.

I was able to fix it with the following:
cd to the mysql folder (it should be the path that's in the docker-compose.yaml under "db:" -> "volumes:", e.g. "./mysql" in my case).
![image](https://github.com/user-attachments/assets/38c24d7f-91fd-4baa-87db-65bd6002340d)

docker-compose down
sudo mv aria_log.XXXXXXXXX bak.aria_log.XXXXXXXXX
docker-compose up -d

(similar to what is described here: https://serverfault.com/questions/893626/mysql-wont-start-mysqld-got-signal-11)

(I'm not sure if this was due to insufficient free storage space; my disk was almost full.)

<!-- gh-comment-id:2336677719 -->
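The rename step above can be sketched as a loop over every aria_log file at once (the filenames and demo directory below are dummies; in practice run the loop inside ./data/mysql after `docker-compose down`):

```shell
# Demo of renaming all aria_log.######## files out of MariaDB's way.
# /tmp/aria-demo and the two touch'd files are placeholders for illustration.
mkdir -p /tmp/aria-demo
cd /tmp/aria-demo
touch aria_log.00000001 aria_log.00000002
for f in aria_log.*; do
  mv -- "$f" "bak.$f"   # keep a bak.* copy instead of deleting outright
done
ls
```

Keeping the `bak.*` copies (rather than deleting) means the logs can be restored if the recovery goes wrong.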
Author
Owner

@jqknono commented on GitHub (Sep 26, 2024):

The fetch of https://ip-ranges.amazonaws.com/ip-ranges.json blocked at startup; sometimes this can take a long time.

If you're blocked by this fetch too, just run:

NPM_CTR_NAME=nginxproxymanager
docker exec $NPM_CTR_NAME sed -i 's/\.then(internalIpRanges\.fetch)//g' /app/index.js
docker restart $NPM_CTR_NAME

This skips the IP-range fetch at startup.

<!-- gh-comment-id:2376708728 -->
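For reference, the sed one-liner above deletes the `.then(internalIpRanges.fetch)` call from the backend's startup chain. The same substitution, sketched in Python (the sample line is illustrative only, not the actual contents of /app/index.js):

```python
import re

# Illustrative startup-chain line; the real /app/index.js differs.
line = "migrate.latest().then(setup).then(internalIpRanges.fetch).then(app.listen)"

# Equivalent of: sed 's/\.then(internalIpRanges\.fetch)//g'
patched = re.sub(r"\.then\(internalIpRanges\.fetch\)", "", line)
print(patched)
```

Note this is a workaround, not a fix: the container keeps the patch only until the image is recreated.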
Author
Owner

@RichardSchulz52 commented on GitHub (Dec 21, 2024):

Just a hint for anyone also stuck here at some point: in my case the problem was a CIFS volume holding the SQLite DB. For some reason, SQLite and CIFS volumes do not work together (likely the file locking SQLite relies on is not supported on such mounts). I fixed it by setting up NPM to use MariaDB.

<!-- gh-comment-id:2558243341 -->
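A minimal way to check whether SQLite can actually create and write a database on a given directory (a sketch; the temp dir below is just a stand-in, so point `target_dir` at the CIFS or other mount you want to test):

```python
import os
import sqlite3
import tempfile

# target_dir is a placeholder; replace with the mounted volume under test.
target_dir = tempfile.mkdtemp()
db_path = os.path.join(target_dir, "probe.db")

conn = sqlite3.connect(db_path)
conn.execute("CREATE TABLE probe (note TEXT)")
conn.execute("INSERT INTO probe VALUES ('lock test')")
conn.commit()  # on a mount with broken locking, this is where errors tend to appear
rows = conn.execute("SELECT note FROM probe").fetchall()
conn.close()
print(rows)
```

If the write or commit raises `sqlite3.OperationalError` (e.g. "database is locked" or "disk I/O error") on the mount but succeeds on local disk, the filesystem is the problem, matching the comment above.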
Reference
starred/nginx-proxy-manager-NginxProxyManager#274