[GH-ISSUE #115] Proxy Host not reachable #101
Originally created by @cm86 on GitHub (Apr 4, 2019).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/115
Hello,
I've installed the latest build of nginx-proxy-manager via Docker.
I have changed the ports to:
admin UI --> 8888
port 80 --> port 80
port 443 --> port 443
and I have added the option --restart=unless-stopped to the container.
Now when I'm creating a proxy host, the host is showing up as offline,
but the service is online and reachable.
I have already gone into the container and pinged the host... it's pingable/reachable from inside the container.
How can I solve this?
Best regards,
Chris
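For reference, the setup described above corresponds roughly to a docker run command along these lines. This is only a sketch: the container name and the container-side admin port are assumptions, and data volume mounts are omitted; the host ports, restart policy, and image come from the thread.

    # Sketch of the described setup; "npm" and container port 81 for the admin UI
    # are assumptions, data volume mounts are omitted.
    docker run -d --name npm \
      -p 8888:81 \
      -p 80:80 \
      -p 443:443 \
      --restart=unless-stopped \
      jc21/nginx-proxy-manager:latest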
@jc21 commented on GitHub (Apr 4, 2019):
Hover the "Offline" text for the host and you should see why it failed to come online.
@cm86 commented on GitHub (Apr 4, 2019):
Hello,
Thanks for the reply.
That is what I get there:
Is it possible to set "server_names_hash_bucket_size" as an environment variable or something like that?
edit:
OS: Ubuntu 18.04.2 LTS
@jc21 commented on GitHub (Apr 4, 2019):
That setting is already 64 in the base configuration. Are you running on anything small like a Pi?
@cm86 commented on GitHub (Apr 4, 2019):
No, it's a VPS with x64, 6 cores, and 8 GB of RAM.
@jc21 commented on GitHub (Apr 4, 2019):
OK, try entering your running container and inspecting the /etc/nginx/nginx.conf file to see if that value is definitely there and set to 64? Just want to check you have the latest image.
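One way to run that check from the host, assuming the container is named "npm" (substitute whatever docker ps shows for your setup):

    # Look for the directive inside the running container.
    docker exec npm grep -n server_names_hash_bucket_size /etc/nginx/nginx.conf
    # On an up-to-date image this should print a line such as:
    #   server_names_hash_bucket_size 64;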
@cm86 commented on GitHub (Apr 5, 2019):
Hello,
Here is the nginx.conf:
And here is what's in the footer of the admin panel...

Best regards,
Chris
@jc21 commented on GitHub (Apr 5, 2019):
I just pulled jc21/nginx-proxy-manager:latest (v2.0.12) and saw that the nginx.conf in there has the property you're missing. Pull the latest image and try again.
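Note that pulling alone does not update an already-running container; the container has to be recreated from the new image. A sketch, again assuming the container is named "npm" and reusing the run options shown earlier:

    # Fetch the newest image, then recreate the container from it.
    docker pull jc21/nginx-proxy-manager:latest
    docker stop npm && docker rm npm
    docker run -d --name npm -p 8888:81 -p 80:80 -p 443:443 \
      --restart=unless-stopped jc21/nginx-proxy-manager:latest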
@cm86 commented on GitHub (Apr 5, 2019):
Hello again,
No, I have pulled the image anew and deleted the persistent volume...
The "new" nginx.conf:
And it's still the same behavior...
Best regards,
Chris
@spcqike commented on GitHub (Apr 29, 2019):
Hello,
I have the same problem with a Raspberry Pi 3 B+. I pulled the latest armhf image, but I needed to exec into the Docker container, install nano and add the line to the config file.
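The manual workaround described above can also be applied without installing an editor, for example with sed. A sketch, assuming the container is named "npm", GNU sed is available in the image, and nginx.conf contains a plain "http {" line; the change only lives inside the container and is lost when the container is recreated:

    # Insert the directive right after the opening of the http block, then reload nginx.
    docker exec npm sed -i '/^http {/a \    server_names_hash_bucket_size 64;' /etc/nginx/nginx.conf
    docker exec npm nginx -t && docker exec npm nginx -s reload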
@vueme commented on GitHub (May 11, 2019):
Hello, @jc21. Firstly, thanks for this great container!
I've just experienced the same issue when pulling :latest-armhf two days ago. Nothing worked until I applied the fix above (added server_names_hash_bucket_size with a value of 64; might work with 32, haven't tried). Give me a shout if you want me to test this further.
Note: just checked, a value of 32 doesn't work. 64 does.
Raspberry Pi 3 B, Raspbian (latest)
@jc21 commented on GitHub (May 12, 2019):
You should be pulling latest even on the RPi. The latest-armhf tag is deprecated. This image will have the configuration line.
@rudders commented on GitHub (Apr 19, 2020):
So I have the issue of running out of worker_processes and have had to hack the container to raise it to 4096, which seems like a lot; I have about 45 active hosts but minimal traffic. Any thoughts on how I can debug why I need so many worker_processes?
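A starting point for that kind of debugging is to check which worker settings the container is actually running with and whether nginx itself is logging exhaustion. A sketch, with "npm" again standing in for the container name and the error log path being an assumption that may differ in this image:

    # Dump the effective configuration and show the worker settings in use.
    docker exec npm nginx -T | grep -E 'worker_(processes|connections)'
    # Check the error log for worker/connection complaints (path assumed).
    docker exec npm tail -n 50 /var/log/nginx/error.log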
@chaptergy commented on GitHub (May 10, 2021):
As this is fairly old and the original issue seems to have been solved, I will close this.
@rudders, if your comment is still relevant, please open a new issue for it.