mirror of
https://github.com/NginxProxyManager/nginx-proxy-manager.git
synced 2026-04-25 09:25:55 +03:00
[GH-ISSUE #2640] Admin Interface is INSECURE #1830
Originally created by @pwfraley on GitHub (Feb 28, 2023).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/2640
Checklist
Are you using the jc21/nginx-proxy-manager:latest docker image?
Describe the bug
In the README.md it says: "Beautiful and Secure Admin Interface", but the Admin Interface, including the login, is served over HTTP and not HTTPS, so it is by definition NOT secure!
Nginx Proxy Manager Version
All versions up to and including the current version
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expect the Admin interface to be secured by SSL
Screenshots
Operating System
All
Additional context
@moninformateur commented on GitHub (Mar 1, 2023):
Unless I am missing something, you can secure the admin panel the same way you secure any of the proxy hosts, by requesting an SSL certificate in the config. For extended security, you'd obviously use either the access list or an authentication service to prevent anyone from accessing your admin panel.
This may be a documentation error, or simply something to spook people away from exposing their NGINX admin panel to the internet. You know.
@the1ts commented on GitHub (Mar 1, 2023):
This, I guess, suffers from the chicken-and-egg problem of which came first. How can NPM (a simple-to-use interface for securing things with SSL) secure itself before standing up the interface that lets the user enter the pieces required to get a valid cert? I would not call it simple to use if the user had to put the required info in a config file or docker variables. There are other nginx docker images that don't use an interface, so they have nothing to secure, if that is a real issue for you. Also, if NPM shipped with a self-signed cert, we would just fill this tracker with browser error messages and people asking us not to use self-signed certs because their security checks now fail.
I would say we are stuck with what we have. Stand up NPM, use it to secure itself with SSL in the standard fashion, and use firewalls to block the HTTP admin port for security purposes; the admin interface is on a different port to let all this be done easily.
@bmmmm commented on GitHub (Mar 4, 2023):
I agree with @moninformateur and @the1ts - block admin panel from being accessible from outside your network.
@pwfraley commented on GitHub (Mar 4, 2023):
Hey, thanks for the feedback.
I guess I did not make this ticket clear:
Saying it has a secure admin interface is just plain not true. Out of the box it is extremely insecure, and without further steps it will never be secure.
I guess there are two simple options:
The first option: don't expose your admin port outside of your network. That basically means you cannot use NPM on a public cloud-based host, because you cannot reach the admin interface. Or one would have to install a VPN, connect it to the cloud-based host, and then allow the admin port only over the VPN (anything but simple for non-admins).
Personally, I would prefer the second option: generate a self-signed certificate on first container start and then use it to secure the admin interface.
@jc21 commented on GitHub (Mar 4, 2023):
Just so I'm clear on my statement of the admin interface being secure:
Just because it doesn't have SSL out of the box doesn't mean my claim that the interface is secure is untrue.
I would also argue that the admin interface is only made less secure if port 81 is exposed outside of your network, to the public, without first setting up a Proxy Host for it.
@pwfraley commented on GitHub (Mar 4, 2023):
I understand your reasoning, but it does not matter how secure the implemented features are: if the basis is insecure, then everything that builds on it, no matter how secure, is insecure.
I don't know if you remember the MongoDB meltdown. MongoDB could be installed and used securely, but out of the box it was configured extremely insecurely (no root password), which eventually led to most MongoDB installs being insecure.
After some researchers found a ton of MongoDB instances online without root passwords, and the news spread, that really hurt MongoDB for a couple of years.
I mean, seriously, how hard is it to generate a self-signed certificate on first container start? Isn't that just a couple of lines of code?
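For what it's worth, the "couple of lines" the commenter has in mind would look roughly like this. This is purely an illustrative sketch; the directory, filenames, and certificate subject are made up here and are not NPM paths or NPM code:

```shell
#!/bin/sh
# Sketch: generate a self-signed cert on first start, skip if one exists.
CERT_DIR=./admin_ssl          # hypothetical location, not an NPM convention
mkdir -p "$CERT_DIR"
if [ ! -f "$CERT_DIR/cert.pem" ]; then
  # One self-signed cert, valid for a year, unencrypted key (-nodes)
  # so nginx can read it without a passphrase.
  openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout "$CERT_DIR/key.pem" -out "$CERT_DIR/cert.pem" \
    -days 365 -subj "/CN=npm-admin"
fi
```

Whether shipping this by default is a good idea is exactly what the rest of the thread argues about: the cert itself is trivial, the browser warnings it produces are not.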
@the1ts commented on GitHub (Mar 4, 2023):
Yes, a self-signed cert is easy to make; getting users to recognise that the error message it triggers in browsers is safe to ignore is not. Isn't getting users to click through a security warning to use NPM going to look very bad? Also, since NPM is designed to sit behind a NAT router, how does the admin port get onto the internet, and therefore become very vulnerable, unless you ignore the getting-started guide, which says to forward only ports 80 and 443? You are worrying about a theoretical risk that is understood and allowed to exist for ease of use.
I'm not sure I take the MongoDB point; you could say the same for MySQL, Elasticsearch, Kubernetes, VMware. None of those have been damaged by poorly educated users getting hacked after putting them on the internet directly, for no good reason, with zero or poor passwords, and not keeping them up to date.
We can make 100% secure software; it would just not be usable by 99.9% of the world. See Windows NT being given the US DoD's highest security certification only if it has no NIC installed.
@pwfraley commented on GitHub (Mar 4, 2023):
That is only secure if the network it is attached to is secure. And as security experts say:
There is no such thing as a secure network.
If you have WiFi in your network, how do you prevent someone from connecting? With things like Kali Linux, hacking into someone else's WiFi is a kid's game nowadays. I also know that one could add a firewall rule for port 81 to only allow connections from one host, but that is not really secure either; MAC and IP spoofing is simple. Also, that does not prevent someone on your network from seeing the traffic and reading the password that is being transferred in plain text.
I don't want to step on any toes here, but security should be taken seriously. Knowingly shipping software that installs itself in an insecure manner out of the box, while telling people it is secure, basically just hurts the project, and that would be a shame.
@the1ts commented on GitHub (Mar 4, 2023):
You do not need to allow any access to the admin port outside the docker container, so all the problems inside a LAN are null and void from then onwards. If a person is on your network to the level you mention, they can probably man-in-the-middle HTTPS to an IP address, producing the same browser errors we would need to tell users to ignore. How do you want this fixed in a way that does not break the simple-to-use nature that is reason one for NPM's existence?
@dioxide-jazz commented on GitHub (Mar 5, 2023):
You can make your WiFi network secure against cracking by using a very complicated password; you only need to enter it once per device. You prevent people from phishing you by educating yourself and your team/family on how to recognize the signs.
The world is unsafe, but we do what we can to mitigate.
It seems like you are concerned about the fact that it's not SSL out of the box. I opened this to see if there was some sort of exploit in the components the GUI is built with, or some way to access the db by fuzzing the web portal or something, but this I am not concerned about, and here's why:
The VM or bare-metal server that you are spinning this container up on shouldn't be accessible from the outside at first. That is why you want this program, no? So you can have a safe way to proxy traffic in and out of your network? So I would hope this isn't accessible from the WAN just yet.
However, if it is on a VPS somewhere, or you are setting this up over a remote connection, then I see your concern. But here is what you do: log in with the default creds, set up the first host, being the NPM web console itself, get your Let's Encrypt cert, and then reconnect using the FQDN that will now proxy to the NPM console at port 81. Then just change the password to what you want.
Since you made the connection through HTTPS when you reset the password, it should not be as liable to compromise as over plain HTTP.
@Coreparad0x commented on GitHub (Mar 6, 2023):
I would go as far as to say that getting end users into the habit of disregarding the invalid-cert security warning is bad practice as well. Not only is it pointless at that point, because someone with that level of access could just MITM the traffic and inject their own cert, which the user will then blindly trust (as you point out), but it could lead them to do the same in other situations when they shouldn't. At least with no cert you know exactly what you're getting, and don't have a false sense of security.
Even then I wouldn't expose the interface. If you're using a VPS where you have the access to set this software up, then you likely have SSH, and you should just be using public-key-only auth with SSH and forwarding the port so you can access the internal admin URL locally. I have several things hosted in the cloud on services like Digital Ocean where the backend admin panels are locked behind having to SSH in and forward the remote port. I go as far as to have an entirely separate VPS set up which we SSH into, so that the one hosting the actual public-facing application doesn't even have to expose SSH, and I have SSH blocked in Digital Ocean's firewall on everything but the "bastion" (as we call it). For instance:
ssh -L 81:<VPS Private IP>:81 <ssh server>
and then I hit it with localhost:81.
I'm in the same boat; this is also why I clicked on this. I was expecting to find some kind of vulnerability in the software itself. This is a problem you can solve with a bit of extra configuration, and not one that's really solvable out of the box.
@Maximus48p commented on GitHub (Jun 22, 2023):
related....
When I create an access user and set this up for a certain proxy, it will ask me, using a small pop-up, to enter a username and password before I'm able to enter the proxy URL.

This pop-up window is not secured, therefore names and passwords are not sent in a secured way.
@ghost commented on GitHub (Jul 29, 2023):
You can mitigate the insecure access to the admin panel using the following method:
iptables -I DOCKER-USER 1 -p tcp -m conntrack --ctorigdstport 81 --ctdir ORIGINAL -j DROP
@brickpop commented on GitHub (Oct 5, 2023):
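A common variant of this mitigation is to pair the DROP with an ACCEPT for a single trusted management host. The source address below is an assumption for illustration; substitute your own:

```shell
# Sketch (assumed address 192.168.1.10 as the management host).
# DOCKER-USER is evaluated before Docker's own forwarding rules.
# Insert the DROP first, then the ACCEPT at position 1 so it matches first.
iptables -I DOCKER-USER 1 -p tcp -m conntrack \
  --ctorigdstport 81 --ctdir ORIGINAL -j DROP
iptables -I DOCKER-USER 1 -p tcp -s 192.168.1.10 -m conntrack \
  --ctorigdstport 81 --ctdir ORIGINAL -j ACCEPT
```

Note this still sends credentials over plain HTTP from that one host; it only narrows who can reach the port.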
Using a self signed certificate for the management UI is an easy solution, I don't see the reason to dismiss such a basic idea.
@Coreparad0x commented on GitHub (Oct 5, 2023):
I don't personally have that much of an issue with self-signed certs on the admin interface. That being said, I don't really see why NPM, a tool that out of the box handles grabbing LE certs via any number of validation methods, including DNS, couldn't offer to set itself up with an LE cert configured like any of the other hosts.
I don't really see how you can draw this conclusion. LetsEncrypt allows you to use a number of validation methods, including DNS validation, which works whether the deployment is public or private. This is what I do at home, and this is what I've done at work even when using it for internal sites.
As a user of Portainer I wouldn't mind if it offered to configure itself with a cert using LE DNS validation as well.
All of that being said:
This is what I would do with all of this stuff anyway. I would personally be fine having this on a self-signed cert behind a tunnel or VPN. This is how I do all of my public deployments on places like Digital Ocean. It's easy to set up, easy to use, and far more secure than just having your NPM admin interface exposed publicly, IMO.
Edit:
Fully agree here. That being said, self-signed requires you to add an exception for the cert, which puts you in a position to be more easily MITM'd, because getting an invalid-cert warning becomes much less suspicious. If I have an LE cert and get an invalid-cert popup, then it's a bit suspicious. I'm not really sure, though, in what scenario you would be getting MITM'd without the attacker already being in a position where you're screwed anyway. At your house or work, that means they're already in your network. At a hotel, well, you shouldn't be connecting from a hotel or other public network without at least a VPN or SSH tunnel.
@vs4vijay commented on GitHub (Nov 3, 2023):
I concur with @pwfraley that nginx proxy manager should be served on HTTPS rather than HTTP, as this would prevent anyone between the nginx host and your internet provider from intercepting credentials. As suggested by many, I would also suggest that the Admin Portal use HTTPS by default. This would be easy to do, as Caddy does this automatically.
@kanevbg commented on GitHub (Jan 26, 2024):
+1. There is a reason browsers use HTTP/2 only over SSL.
I would have named this issue "Use HTTPS for admin panel", marked it as an Improvement/Feature Request, and labelled it with security.
@ReezyBoi commented on GitHub (May 8, 2024):
Chiming in to support the NPM Admin UI being served over self-signed HTTPS, either by default or via a line in the initial yaml file that can be uncommented. That way, there are options.
@nikhilweee commented on GitHub (Jun 7, 2024):
Just came in here to say that I followed @brickpop's suggestion and changed my docker-compose file to expose port 81 only on 127.0.0.1 instead of all interfaces (0.0.0.0).
I can then forward port 8081 on my remote host to localhost over SSH and access the admin UI.
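The compose change being described is roughly the following. The service name and the 8081 host port are assumptions reconstructed from the comment, not the commenter's actual file:

```yaml
services:
  npm:                        # service name assumed
    image: jc21/nginx-proxy-manager:latest
    ports:
      - "80:80"
      - "443:443"
      - "127.0.0.1:8081:81"   # admin UI bound to loopback only, not "81:81"
```

With this binding, the admin UI is unreachable from any other machine until you open an SSH tunnel to the host.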
@bilogic commented on GitHub (Jul 14, 2024):
Isn't it odd that NPM is able to acquire SSL certs for its records but won't do so for itself?
And in order to expose NPM securely to the internet we need another tool that basically does the exact same thing that NPM is trying to do in the first place.
I mean I doubt there is much use for NPM if it wasn't able to acquire LE SSL certs as part of its functionality.
@github-actions[bot] commented on GitHub (Jan 16, 2025):
Issue is now considered stale. If you want to keep it open, please comment 👍
@McNickSistoPro commented on GitHub (Apr 11, 2025):
Any updates?
@farwestnz commented on GitHub (Jun 11, 2025):
It feels like it should be possible to specify some Letsencrypt config in the docker-compose.yml and have a certificate created at startup for the admin interface.
@kanevbg commented on GitHub (Jul 9, 2025):
That would be useful. Another practical feature would be the ability to specify the certificate files to use right away, instead of generating them.
@ridly commented on GitHub (Sep 2, 2025):
So far the best option is @nikhilweee's SSH port forwarding.
Just use ssh port forwarding on your local machine into the remote server and then you don't need to expose port 81 on HTTP to the world.
An example:
Run this on your local machine and keep it running (assuming you have mapped port 81 to 8081 on the remote server):
ssh -L 63333:localhost:8081 <ip of your server>
Then you can open
http://localhost:63333
which will connect to your remote <ip of your server>:8081.
It's a chicken-and-egg problem: you want to install SSL, but first you need to log in to the tool itself without SSL.