[GH-ISSUE #115] Proxy Host not reachable #101

Closed
opened 2026-02-26 05:34:32 +03:00 by kerem · 13 comments

Originally created by @cm86 on GitHub (Apr 4, 2019).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/115

Hello,
I've installed the latest build of nginx-proxy-manager via Docker.
I have changed the ports to

  • admin UI --> 8888

  • port 80 --> port 80

  • port 443 --> port 443

and I have added the option `--restart=unless-stopped` to the container.

Now when I create a proxy host, the host shows up as offline,
but the service is online and reachable.
I have already gone into the container and pinged the host... it's pingable/reachable from inside the container.

How can I solve this?

Mfg
Chris

kerem closed this issue 2026-02-26 05:34:32 +03:00

@jc21 commented on GitHub (Apr 4, 2019):

Hover the "Offline" text for the host and you should see why it failed to come online.


@cm86 commented on GitHub (Apr 4, 2019):

Hello,
thanks for the reply.
This is what I get there:

![IMG_20190404_101045](https://user-images.githubusercontent.com/14919688/55539891-0a6bec80-56c2-11e9-8db8-69bda0932681.jpg)

Is it possible to set `server_names_hash_bucket_size` as an environment variable or something like that?

edit:
OS: Ubuntu 18.04.2 LTS


@jc21 commented on GitHub (Apr 4, 2019):

That setting is already 64 in the [base configuration](https://github.com/jc21/nginx-proxy-manager/blob/master/rootfs/etc/nginx/nginx.conf#L34). Are you running on anything small like a Pi?
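For context, the directive in question is a plain nginx setting inside the `http` block; a minimal sketch of the relevant line (context abbreviated, not the full base config):

```nginx
http {
    # Sizes the hash buckets used for server_name lookup; if it is
    # missing or too small for long host names, nginx fails to start.
    server_names_hash_bucket_size 64;
}
```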


@cm86 commented on GitHub (Apr 4, 2019):

No, it's a VPS with x64, 6 cores, and 8 GB of RAM.


@jc21 commented on GitHub (Apr 4, 2019):

OK, try entering your running container and inspecting the `/etc/nginx/nginx.conf` file to see if that value is definitely there and set to 64? Just want to check that you have the latest image.
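That check can be done without an interactive shell; a sketch, assuming the container is named `npm` (use whatever `docker ps` shows for your nginx-proxy-manager container):

```shell
# From the host, against the running container:
#
#   docker exec npm grep -n 'server_names_hash_bucket_size' /etc/nginx/nginx.conf
#
# Expected shape of the output, demonstrated on a local excerpt:
cat > /tmp/nginx_excerpt.conf <<'EOF'
http {
  server_names_hash_bucket_size 64;
}
EOF
grep -n 'server_names_hash_bucket_size' /tmp/nginx_excerpt.conf
```

If `grep` prints nothing, the directive is absent and the image predates the fix.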


@cm86 commented on GitHub (Apr 5, 2019):

Hello,
here is the nginx.conf:

```
/tmp # cat /etc/nginx/nginx.conf
# run nginx in foreground
daemon off;
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

#user root;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
    worker_connections  1024;
}

http {
  include                   /etc/nginx/mime.types;
  default_type              application/octet-stream;
  sendfile                  on;
  server_tokens             off;
  tcp_nopush                on;
  tcp_nodelay               on;
  client_body_temp_path     /tmp/nginx/body 1 2;
  keepalive_timeout         65;
  ssl_prefer_server_ciphers on;
  gzip                      on;
  proxy_ignore_client_abort off;
  client_max_body_size      2000m;
  proxy_http_version        1.1;
  proxy_set_header          X-Forwarded-Scheme $scheme;
  proxy_set_header          X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header          Accept-Encoding "";
  proxy_cache               off;
  proxy_cache_path          /var/lib/nginx/cache/public  levels=1:2 keys_zone=public-cache:30m max_size=192m;
  proxy_cache_path          /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

  # MISS
  # BYPASS
  # EXPIRED - expired, request was passed to backend
  # UPDATING - expired, stale response was used due to proxy/fastcgi_cache_use_stale updating
  # STALE - expired, stale response was used due to proxy/fastcgi_cache_use_stale
  # HIT
  # - (dash) - request never reached to upstream module. Most likely it was processed at Nginx-level only (e.g. forbidden, redirects, etc) (Ref: Mail Thread
  log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
  log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';

  access_log /data/logs/default.log proxy;

  # Dynamically generated resolvers file
  include /etc/nginx/conf.d/include/resolvers.conf;

  # Default upstream scheme
  map $host $forward_scheme {
    default http;
  }

  # Real IP Determination
  # Docker subnet:
  set_real_ip_from 172.0.0.0/8;
  # NPM generated CDN ip ranges:
  include conf.d/include/ip_ranges.conf;
  # always put the following 2 lines after ip subnets:
  real_ip_header X-Forwarded-For;
  real_ip_recursive on;

  # Files generated by NPM
  include /etc/nginx/conf.d/*.conf;
  include /data/nginx/default_host/*.conf;
  include /data/nginx/proxy_host/*.conf;
  include /data/nginx/redirection_host/*.conf;
  include /data/nginx/dead_host/*.conf;
  include /data/nginx/temp/*.conf;
}

stream {
    # Files generated by NPM
    include /data/nginx/stream/*.conf;
}
```

And here is what's in the footer of the admin panel...

![Bildschirmfoto 2019-04-05 um 05 51 41](https://user-images.githubusercontent.com/14919688/55602649-ef06ed00-5766-11e9-9a06-364c51b7c6fb.png)

mfg
Chris


@jc21 commented on GitHub (Apr 5, 2019):

I just pulled jc21/nginx-proxy-manager:latest (v2.0.12) and saw that the nginx.conf in there has the property you're missing.

Pull the latest image and try again.

<!-- gh-comment-id:480141131 --> @jc21 commented on GitHub (Apr 5, 2019): I just pulled `jc21/nginx-proxy-manager:latest` (v2.0.12) and saw that the nginx.conf in there has the property you're missing. Pull the latest image and try again.
Author
Owner

@cm86 commented on GitHub (Apr 5, 2019):

Hello again,
no, I have pulled the image fresh and deleted the persistent volume...

The "new" nginx.conf:

```
/tmp # cat /etc/nginx/nginx.conf
# run nginx in foreground
daemon off;
load_module /usr/lib/nginx/modules/ngx_stream_module.so;

#user root;

# Set number of worker processes automatically based on number of CPU cores.
worker_processes auto;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

error_log /data/logs/error.log warn;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

events {
    worker_connections  1024;
}

http {
  include                   /etc/nginx/mime.types;
  default_type              application/octet-stream;
  sendfile                  on;
  server_tokens             off;
  tcp_nopush                on;
  tcp_nodelay               on;
  client_body_temp_path     /tmp/nginx/body 1 2;
  keepalive_timeout         65;
  ssl_prefer_server_ciphers on;
  gzip                      on;
  proxy_ignore_client_abort off;
  client_max_body_size      2000m;
  proxy_http_version        1.1;
  proxy_set_header          X-Forwarded-Scheme $scheme;
  proxy_set_header          X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header          Accept-Encoding "";
  proxy_cache               off;
  proxy_cache_path          /var/lib/nginx/cache/public  levels=1:2 keys_zone=public-cache:30m max_size=192m;
  proxy_cache_path          /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

  # MISS
  # BYPASS
  # EXPIRED - expired, request was passed to backend
  # UPDATING - expired, stale response was used due to proxy/fastcgi_cache_use_stale updating
  # STALE - expired, stale response was used due to proxy/fastcgi_cache_use_stale
  # HIT
  # - (dash) - request never reached to upstream module. Most likely it was processed at Nginx-level only (e.g. forbidden, redirects, etc) (Ref: Mail Thread
  log_format proxy '[$time_local] $upstream_cache_status $upstream_status $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] [Sent-to $server] "$http_user_agent" "$http_referer"';
  log_format standard '[$time_local] $status - $request_method $scheme $host "$request_uri" [Client $remote_addr] [Length $body_bytes_sent] [Gzip $gzip_ratio] "$http_user_agent" "$http_referer"';

  access_log /data/logs/default.log proxy;

  # Dynamically generated resolvers file
  include /etc/nginx/conf.d/include/resolvers.conf;

  # Default upstream scheme
  map $host $forward_scheme {
    default http;
  }

  # Real IP Determination
  # Docker subnet:
  set_real_ip_from 172.0.0.0/8;
  # NPM generated CDN ip ranges:
  include conf.d/include/ip_ranges.conf;
  # always put the following 2 lines after ip subnets:
  real_ip_header X-Forwarded-For;
  real_ip_recursive on;

  # Files generated by NPM
  include /etc/nginx/conf.d/*.conf;
  include /data/nginx/default_host/*.conf;
  include /data/nginx/proxy_host/*.conf;
  include /data/nginx/redirection_host/*.conf;
  include /data/nginx/dead_host/*.conf;
  include /data/nginx/temp/*.conf;
}

stream {
    # Files generated by NPM
    include /data/nginx/stream/*.conf;
}
```

And it's still the same behavior...

Mfg
Chris


@spcqike commented on GitHub (Apr 29, 2019):

Hello,

I have the same problem with a Raspberry Pi 3 B+. I pulled the latest armhf image, but I had to exec into the container, install nano, and add the line to the config file.
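For anyone landing here, that manual workaround can be scripted instead of edited by hand. A sketch, assuming GNU sed and that the directive belongs right after the `http {` line of the stock config shown above:

```shell
# Inside the container (docker exec -it <name> sh) this would be:
#
#   sed -i '/^http {/a server_names_hash_bucket_size 64;' /etc/nginx/nginx.conf
#   nginx -s reload
#
# Demonstrated on a local excerpt of the stock config:
cat > /tmp/nginx_fix_demo.conf <<'EOF'
http {
  sendfile on;
}
EOF
sed -i '/^http {/a server_names_hash_bucket_size 64;' /tmp/nginx_fix_demo.conf
cat /tmp/nginx_fix_demo.conf
```

Note this edit lives inside the container's filesystem, so it is lost when the container is recreated; the proper fix is an image that ships the directive.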


@vueme commented on GitHub (May 11, 2019):

Hello, @jc21. Firstly, thanks for this great container!

I've just experienced the same issue when pulling :latest-armhf two days ago. Nothing worked until I applied the above fix (added `server_names_hash_bucket_size` with a value of 64; it might work with 32, I haven't tried). Give me a shout if you want me to test this further.

OBS: Just checked, a value of 32 doesn't work. 64 does.

Raspberry Pi 3 B, Raspbian (latest)


@jc21 commented on GitHub (May 12, 2019):

You should be pulling `latest` even on the RPi; the `latest-armhf` tag is deprecated.

This image will have the configuration line.


@rudders commented on GitHub (Apr 19, 2020):

So I have the issue of running out of worker_processes and have had to hack the container to raise it to 4096, which seems like a lot - I have about 45 active hosts but minimal traffic. Any thoughts on how to debug why I need so many worker_processes?
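A side note on that last comment, hedged because the thread doesn't show the actual error message: a value of 4096 suggests the limit being exhausted may be `worker_connections` (1024 in the stock config shown above) rather than `worker_processes`, which is set to `auto`. Raising it would look like:

```nginx
events {
    # Stock config ships 1024; nginx logs "worker_connections are not
    # enough" in the error log when this limit is hit.
    worker_connections 4096;
}
```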


@chaptergy commented on GitHub (May 10, 2021):

As this is fairly old and the original issue seems to have been solved, I will close this.
@rudders if your comment is still relevant please open a new issue for it.
