mirror of
https://github.com/NginxProxyManager/nginx-proxy-manager.git
synced 2026-04-25 01:15:51 +03:00
[GH-ISSUE #5319] unable to start/run after updating to 2.14.0 #3165
Originally created by @mal5305 on GitHub (Feb 17, 2026).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/5319
Checklist
Are you using the jc21/nginx-proxy-manager:latest docker image?
Describe the bug
After updating to 2.14, NPM no longer functions. I cannot access the web UI, nor is it proxying requests to existing hosts.
Nginx Proxy Manager Version
cannot get to login page
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Screenshots
Operating System
Ubuntu 20.04
Additional context
Docker 28.1.1
From the docker logs:
@mal5305 commented on GitHub (Feb 17, 2026):
Also, I reverted to 2.13.7 and the proxy started functioning, but I couldn't get to the admin page.
@mal5305 commented on GitHub (Feb 17, 2026):
my docker compose:
@zaphod82 commented on GitHub (Feb 17, 2026):
I'm experiencing the same.
@WouterGritter commented on GitHub (Feb 17, 2026):
I'm having the same issue.
Who will create MonthsSinceLastBreakingNginxProxyManagerUpdate.com? Should be easy enough with a static page...! Just don't put it behind this proxy!
@WouterGritter commented on GitHub (Feb 17, 2026):
Unfortunately, downgrading to 2.13.7 doesn't seem like a proper temporary fix, as my console is getting spammed with the following:

@WouterGritter commented on GitHub (Feb 17, 2026):
Once again, it's a breaking change to nginx.conf. This bug only affects people shipping their own nginx.conf.

Broken by github.com/NginxProxyManager/nginx-proxy-manager@187d21a0d5

See https://github.com/NginxProxyManager/nginx-proxy-manager/blame/develop/docker/rootfs/etc/nginx/nginx.conf#L60
To fix, add
Between already existing
and
To make the admin page work again, a force reload of the page might be required.
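The exact lines to add were posted as code blocks in the original comment and are not preserved in this mirror. Whatever edit is applied to nginx.conf, it can be validated and then reloaded in place without restarting the container. A minimal sketch, assuming the container is named npm (substitute your own container name):

```shell
# Check the edited config for syntax errors first.
docker exec npm nginx -t
# If the test passes, signal nginx to reload the config in place.
docker exec npm nginx -s reload
```

Reloading this way keeps existing proxied connections up while the new configuration is picked up.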
I quote @CyborgRider from last month:
@mal5305 commented on GitHub (Feb 17, 2026):
@WouterGritter you are a hero, thank you so much.
I'm not aware of anything I did to manually update the nginx.conf file in the past to make it so I was using "my own", but I also can't say that I never did that. Regardless, adding the lines you posted above fixed it for 2.14.0 for me.
@jc21 commented on GitHub (Feb 17, 2026):
Commentary like that doesn’t add value to resolving the issue.
Correct. For clarity: the documentation for Nginx Proxy Manager does not recommend replacing the bundled nginx.conf. The image is built around a specific configuration structure. If you are overriding core config files, you are operating outside the supported design. At that point, running a custom NGINX setup directly would be more appropriate.
For completeness: this release (and every release in the last 2 years), including the referenced patch, was tested across multiple environments, with and without existing data. Due diligence was done before publishing.
If you are running this in production:
I encourage you to contribute, monitor the PRs and help the rest of the community out. Most of the changes come from people just like you. Perhaps you can pick up an issue before it affects you. Perhaps there's a reason you're modifying your nginx.conf that everyone could benefit from.

@jc21 commented on GitHub (Feb 17, 2026):
For anyone wanting to downgrade, you will need to run these queries on your db to undo the migration:
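The queries themselves were given as a code block in the original comment and are not preserved in this mirror. Before running any downgrade queries, it is prudent to take a copy of the database first. A hedged sketch, assuming the default SQLite setup and a container named npm (MySQL/MariaDB installs would use their own dump tooling instead):

```shell
# Copy the SQLite database out of the container before touching it.
# The container name "npm" and the /data/database.sqlite path are
# assumptions based on the default Docker setup; adjust as needed.
docker cp npm:/data/database.sqlite ./database-backup-$(date +%F).sqlite
```

If the downgrade goes wrong, the backup can be copied back with `docker cp` in the other direction.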
@zaphod82 commented on GitHub (Feb 17, 2026):
So what about the people who haven't modified their nginx.conf file and still ran into this issue?
@WouterGritter commented on GitHub (Feb 17, 2026):
@jc21 I apologize for the snarky comments. However, seeing as this is the second time in a short while that an issue like this comes up with numerous quick replies, unfortunately it seems like a lot of people do ship their own nginx.conf, even though this might not be officially supported.

I wonder if it's beneficial to add a warning to the documentation about doing so at your own risk. Maybe suggest pinning the NPM version, although that of course comes with other risks. Or a comment at the top of nginx.conf steering people away from copying and modifying it would be enough.

@WouterGritter commented on GitHub (Feb 17, 2026):
Are you sure you somehow are not using an older nginx.conf? Do you maybe have a docker volume for /etc/nginx/ or /etc/nginx/nginx.conf?

@zaphod82 commented on GitHub (Feb 17, 2026):
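One way to answer this question definitively is to compare the nginx.conf the running container actually uses against the copy baked into the image. A hedged sketch, assuming a container named npm and the jc21/nginx-proxy-manager:2.14.0 image tag:

```shell
# Dump the nginx.conf the running container is using.
docker exec npm cat /etc/nginx/nginx.conf > running-nginx.conf
# Dump the nginx.conf baked into the image itself.
docker run --rm --entrypoint cat jc21/nginx-proxy-manager:2.14.0 \
  /etc/nginx/nginx.conf > image-nginx.conf
# Any diff output means a stale or custom file is being mounted.
diff running-nginx.conf image-nginx.conf
```

An empty diff (exit code 0) rules out a mounted or leftover nginx.conf as the cause.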
So I am confused. I haven't modified my nginx.conf file, but still ran into this issue. Does that mean I should modify it, so I can continue using NPM even though that means I'm no longer running it as supported, or do I just leave it broken?
@CyborgRider commented on GitHub (Feb 17, 2026):
@jc21 I also apologize for my anger, though I had read the change logs. I had searched for anything suggesting that there was a change that might have broken my installation.
To be clear, I have modified my nginx conf file with the following purpose: There is no currently supported method of adding load balancing between several IPs. I had to create an upstream ip hash, then pass that through in a custom config for one of my proxies, to pass to the ip hash. This allows me to use one subdomain to access whichever server has the lowest current resource usage in my network.
If we were able to get this as a feature, I wouldn't need to edit anything in the nginx.conf file, but I don't mind the necessity to edit things, as long as I can do what I need. I tend to access my servers remotely via my domain name access, so when I updated, not seeing anything in the change logs that should have broken anything from what I could tell, I was freaked out and got upset. I appreciate the work you and the contributors do to keep things updated and secure, but I have never had any issues like this in the past, using my own nginx.conf file.
@zaphod82 I believe the expected change is to remove any volumes on your docker compose file that points to the nginx.conf, and rebuilding, to let the docker compose file run the nginx.conf from within the container. This should force it to upgrade to the current nginx.conf file on each upgrade. If you don't make any changes to the conf file, this should be effective at restoring functionality.
@zaphod82 commented on GitHub (Feb 17, 2026):
These are my volume mounts
Even with only those two mounts I still had an issue. This is partly why I'm so confused. Those are the mounts that it shows in the install for docker compose. Do I now have to mount the nginx.conf file to make this change? I rolled back to 2.13.7 to be able to use NPM again.
@zaphod82 commented on GitHub (Feb 17, 2026):
Ok. I have it working now. I had to prune the container and completely rebuild it. I tried going into the root of the container and deleting the nginx.conf file, but that caused even more problems.
@alcayaga commented on GitHub (Feb 18, 2026):
I don't have a custom nginx.conf file, but after updating, the logs are flooded with trust_forwarded_proto errors:

proxy-host-8_error.log: 2026/02/18 13:18:25 [warn] 274#274: *10555 using uninitialized "trust_forwarded_proto" variable, client: xxx, server: yyy, request: "GET / HTTP/2.0", host: "yyyy", referrer: "yyy"

Seems like editing and saving the host fixes the issue.
@jc21 commented on GitHub (Feb 18, 2026):
@zaphod82 At the time of my response, you only commented to say you experienced the same issue, of which the other 2 reporters showed evidence of mounting their own nginx.conf. I am very interested in your problem. It really should not have happened at all. I wonder if you had any custom configurations in the /data/nginx folder?

@alcayaga If this ever comes up again, can you provide the contents of your host's conf file before it's rebuilt?
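For anyone wanting to check the same thing on their own install, a hedged sketch of how to look for custom snippets under /data/nginx, assuming a container named npm and the default data-volume layout:

```shell
# List any user-supplied snippets NPM includes from the data volume,
# and find generated configs that already reference the new variable.
# The container name "npm" is an assumption.
docker exec npm sh -c \
  'find /data/nginx/custom -type f 2>/dev/null;
   grep -rl trust_forwarded_proto /data/nginx 2>/dev/null'
```

Any files listed by the first command are custom overrides worth reviewing before blaming the release.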
I'll reopen this for anyone else having the same problem
@jc21 commented on GitHub (Feb 18, 2026):
@CyborgRider Could you use the custom configurations for your config changes, leaving the baked nginx.conf as is?

@brlacquement commented on GitHub (Feb 19, 2026):
I started noticing errors in all of my proxy host logs:

2026/02/19 14:00:11 [warn] 293#293: *224 using uninitialized "trust_forwarded_proto" variable

Running Nginx on Unraid in Docker. I don't recall ever modifying my nginx.conf file, although I may have long ago and forgotten. My proxy error logs go back to before 2.14.0 was published, and I do not see this error prior to the update. I check for updates on Nginx daily, so this error started showing up as soon as the update was applied.

nginx.conf.txt attached. I'm not sure which file to compare this to to see if it's the current/correct one.
I tried editing a proxy (enabled, saved, disabled, saved the new trust proto headers toggle) and now the errors no longer appear on the proxy error logs. During this entire time, I should note, I have not noticed anything broken or not functioning correctly. I mainly have a family media server with some basic apps proxied through Nginx.
To try and check if I had modified my nginx.conf file, I ran stat -c %y on it and got this.

Log of when Nginx was updated.
I did a full OS restart on Unraid on 2026-02-14 around 11:44:00, shortly after the conf says it was modified, so I'm not sure if this data is helpful. The restart was unrelated to Nginx; I was troubleshooting a Plex issue.
Tried checking which conf file Nginx uses when it loads, and this seems correct. I am not sure why the nginx.conf file is not showing as modified on the day of the recent update; assuming that it should.

Apologies if I've missed any normal practices when commenting. Really the first time ever doing this on GitHub. Thought this could help someone.
For me, it was as simple as editing the proxy. I will do that for each of mine. And in the meantime, doesn't seem to be breaking anything. I use my Nginx logs frequently when troubleshooting access to my server/apps so having them flooded with these errors is only annoying - not breaking in my case.
I do recall modifying something with headers a long time ago. Needing to pass the true source IP through to Nginx so that my local access rule would work correctly (I have all proxies restricted in Nginx to only allow local source IPs). Had something to do with me using Cloudflare. I remember it being something like this https://github.com/NginxProxyManager/nginx-proxy-manager/discussions/3215#discussioncomment-11426569. But I don't see that in my nginx conf anymore. Also don't see it in the custom config tab of any proxies. Would need to dig some more to find where I had added those lines if you think its relevant.
@deviantintegral commented on GitHub (Feb 19, 2026):
Here's a redacted config for a host that is currently spamming using uninitialized "trust_forwarded_proto". The new "trust upstream forwarded proto headers" option in the SSL tab is off.

Saving the host fixed this, but that save made no changes to this file, which surprises me. I only see references to trust_forwarded_proto in conf.d/include/force-ssl.conf.

@deviantintegral commented on GitHub (Feb 19, 2026):
Oops, I lied, there is a difference!
was added to the regenerated config before the force-ssl.conf include.
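Since re-saving a host regenerates its config with the missing initialization, hosts that still need a re-save can be found by looking for generated configs that never mention the variable. A hedged sketch, assuming the default /data/nginx/proxy_host layout inside the container or data volume:

```shell
# Flag generated proxy-host configs that do not yet reference
# trust_forwarded_proto and therefore may still trigger the warning.
# The /data/nginx/proxy_host path is an assumption based on the
# default data-volume layout.
for f in /data/nginx/proxy_host/*.conf; do
  grep -q 'trust_forwarded_proto' "$f" || echo "needs re-save: $f"
done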
@alberanid commented on GitHub (Feb 21, 2026):
In my case, after re-saving the proxy or disabling/enabling it again, I have these changes:
After that, the error message is no longer present. Interestingly, everything worked fine despite the error.
I do not have any custom configuration.
I'm not familiar with the internals of nginxproxymanager, so I don't know if it's just a matter of re-creating the configuration after an update or something.
Maybe, if not already considered, it would be nice to have a "mass operation" button to disable/enable multiple entries at the same time, or a way to trigger it from the CLI.
Thanks for the amazing project!
@jc21 commented on GitHub (Feb 23, 2026):
Ok thanks, this is very helpful. In hindsight, this change should not have been pushed this way.
I suppose that, if changes are required to generated-when-saved files, a migration could be written to do this once upon upgrade. I anticipate there will be more GitHub issues/complaints when it overwrites changes on disk that people have manually made.
So instead, I'm thinking of making a Docker command that can do the regeneration of all hosts, which would have to be run manually when required.