[GH-ISSUE #1963] Load Balancing #1421

Open
opened 2026-02-26 07:30:55 +03:00 by kerem · 38 comments
Owner

Originally created by @bendini20 on GitHub (Mar 29, 2022).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1963

Hello! I have been using this Docker image for a while now. It is truly great. There are a couple of native integrations that should be relatively straightforward: NGINX supports load balancing across servers natively (round robin, health handling, etc.). Right now, if I want to do load balancing, I have to forward traffic to a bare NGINX container. Is there a way to add native GUI support for load balancing within NginxProxyManager?

Simple Nginx config for load balancing:

upstream <name> {
    server hostname:port;
    server url;
}

server {
    listen 80;
    server_name <url to be balanced>;

    location / {
        proxy_pass http://<name>;
    }
}
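For illustration, here is the same template with the placeholders filled in (the upstream name, hostnames, and ports below are hypothetical, not from this issue):

upstream app_pool {
    server app1.internal:3000;
    server app2.internal:3000;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        # Requests are distributed round-robin across the upstream servers.
        proxy_pass http://app_pool;
    }
}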


@support-tt commented on GitHub (Mar 30, 2022):

This is already planned for version 3 of NPM, #156. A release date is not known yet, as far as I know.

<!-- gh-comment-id:1082769736 -->

@LinuxMeow commented on GitHub (Nov 10, 2022):

I was looking into opening this issue; glad someone did already. Not having load-balancing options calls for installing a load balancer behind it, which kind of defeats the purpose of using Nginx Proxy Manager altogether. We need load-balancing options for sure.

<!-- gh-comment-id:1311036705 -->

@Faridalim commented on GitHub (Nov 11, 2022):

Hi, any news about the load-balancing feature?

<!-- gh-comment-id:1311193509 -->

@AlphaInfamous commented on GitHub (Nov 21, 2022):

Waiting on a release date for load balancing.

<!-- gh-comment-id:1321292106 -->

@martin-braun commented on GitHub (Nov 30, 2022):

Not enabling the load balancer misses the mark and makes this proxy look immature compared to its competitors. Would love to see this as well.

<!-- gh-comment-id:1331494140 -->

@pwfraley commented on GitHub (Jan 18, 2023):

This would make NPM perfect (OK, almost). But I sure am missing this feature.

<!-- gh-comment-id:1387134441 -->

@martin-braun commented on GitHub (Jan 19, 2023):

To share my discovery and what I did instead: If you use pfSense on your network for a good solid firewall, you have a HAProxy module available for download. It's a reverse proxy with load balancer and it's fully integrated into pfSense, so you don't have to deal with the HAProxy configuration files, since the module uses the GUI of pfSense to integrate it properly.

Having a solid firewall VM is recommended, and pfSense is really a free enterprise solution that does its job very well. No need to use nginx for load balancing unless you want to split the firewall from the load balancer.

<!-- gh-comment-id:1396363882 -->

@ne0YT commented on GitHub (Mar 3, 2023):

> To share my discovery and what I did instead: If you use pfSense on your network for a good solid firewall, you have a HAProxy module available for download. It's a reverse proxy with load balancer and it's fully integrated into pfSense, so you don't have to deal with the HAProxy configuration files, since the module uses the GUI of pfSense to integrate it properly.
>
> Having a solid firewall VM is recommended and pfSense is really a free enterprise solution that does its job very well. No need to use nginx for load balancing unless you want to split firewall from load balancer.

The big difference here is that HAProxy cannot load-balance UDP!
I have used HAProxy on pfSense for years, but always had the problem that I couldn't load balance UDP. Having this inside NginxProxyManager would be really cool.

<!-- gh-comment-id:1453201198 -->

@manfred-warta commented on GitHub (Mar 4, 2023):

Same problem as @ne0YT has explained: a newer version of HAProxy is able to load-balance UDP, but not the version shipped with the pfSense firewall. Therefore NPM would be a very good place to have alternatives too, especially when it comes to syslog UDP load balancing. DNS UDP can be done much more smoothly with PowerDNS, for sure.

<!-- gh-comment-id:1454688073 -->

@ne0YT commented on GitHub (Mar 4, 2023):

But is there real load balancing? It looks like there's only NAT mode in HAProxy for UDP.
So adding this already-available feature to the GUI would just be very nice.

<!-- gh-comment-id:1454690660 -->

@ne0YT commented on GitHub (Mar 9, 2023):

As there are no health checks in free nginx, it doesn't make too much sense to have load balancing.

<!-- gh-comment-id:1461616916 -->

@manfred-warta commented on GitHub (Mar 9, 2023):

Arrrgg, I forgot, yes, you are right :-(

<!-- gh-comment-id:1462034253 -->

@bendini20 commented on GitHub (Mar 9, 2023):

> as there is no healthcheck on free nginx it doesnt make to much sense to have load balancing

This is completely incorrect. The free version of NGINX can tell when a forward host is offline and will not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (the free version) will only send traffic to the hosts that are online.
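For reference, what this comment describes matches the passive failure handling built into open-source nginx, tuned via the max_fails and fail_timeout parameters of the server directive; a minimal sketch with hypothetical backends:

upstream backend {
    # After 3 failed attempts, stop sending traffic to a server for 30s.
    server app1.internal:8080 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
}

Active health checks (the health_check directive) are only available in NGINX Plus, which is what the follow-up comments below are getting at.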

<!-- gh-comment-id:1462487382 -->

@bendini20 commented on GitHub (Mar 9, 2023):

I avoid putting unnecessary tasks on a router. It already has the job of routing/filtering millions of packets per second as the router, firewall, and DNS. Tacking on a reverse proxy is too much, IMO. In addition, running the reverse proxy where your Docker containers are means you can route via hostnames and not IP addresses.

<!-- gh-comment-id:1462491561 -->

@AustinLeath commented on GitHub (Apr 11, 2023):

This is not the right way of thinking. Yes, there is a finite limit to the PPS a router can handle; however, pfSense is purpose-built (for many things, in fact). If you run pfSense on a system that is resilient enough, you should not have to worry about limitations. If it is being used in a production enterprise environment, I would look into setting up multiple pfSense boxes in HA mode to lighten the load across the board.

<!-- gh-comment-id:1503563623 -->

@ne0YT commented on GitHub (Apr 11, 2023):

> > as there is no healthcheck on free nginx it doesnt make to much sense to have load balancing
>
> This is completely incorrect. NGINX free version can tell when a forward host is offline and not send traffic to it. If I set multiple hosts in an upstream configuration, NGINX (The free version) will only send traffic to the hosts that are online.

It has no active healthchecks.

<!-- gh-comment-id:1503586491 -->

@alex14dark commented on GitHub (May 10, 2023):

Hi guys, I know there is another way to achieve load balancing:

1. Create a custom directory under the data/nginx directory on the server where you deploy NPM.
2. In the custom directory, create a file named http.conf containing upstream your_server { server ... }.
3. Go back to the NPM UI and select the corresponding Proxy Host.
4. In the Advanced tab, fill in the location configuration, such as location /api { proxy_pass http://your_server; }.

Like this, NPM achieves the effect of load balancing.
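Spelled out, the two pieces this comment describes might look like the following (the upstream name, backends, and the /api path are illustrative, not from the thread). In data/nginx/custom/http.conf:

upstream your_server {
    # Round-robin across two hypothetical backends.
    server app1.internal:3000;
    server app2.internal:3000;
}

And in the proxy host's Advanced tab:

location /api {
    proxy_pass http://your_server;
}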

<!-- gh-comment-id:1541565239 -->

@tomitrescak commented on GitHub (May 12, 2023):

@alex14dark, can you please elaborate with some real example? It seems like you cracked it but I am missing something as it does not work for me.

FYI:

I got this in my http.conf in the data/nginx/root directory:

http {
  upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    check interval=5000 rise=2 fall=3 timeout=2000;
  }

  server {
    listen 80;
    server_name example.com;

    location / {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
    }
  }
}

What do I need to adjust on my proxy? How does NPM load my custom http.conf? Thanks.

<!-- gh-comment-id:1545633577 -->

@alex14dark commented on GitHub (May 13, 2023):

@tomitrescak The custom configuration needs to be in the data/nginx/custom directory. If it doesn't exist, create it, then create an http.conf file in that directory. Based on the information you provided, the file content should be:

upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    check interval=5000 rise=2 fall=3 timeout=2000;
}

Finally, go back to the NPM web interface, select the corresponding proxy host, and add the following in the Advanced tab:

location / {
    proxy_pass http://backend;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}

<!-- gh-comment-id:1546526956 -->

@tomitrescak commented on GitHub (May 14, 2023):

@alex14dark adding this to my custom config leads to nginx crashing

this is in my custom/http.conf

http {
  upstream jobiq {
    server server1.com weight 100;
    server 127.0.0.1:3020;
    check interval=5000 rise=2 fall=3 timeout=2000;
  }
}

This is in the custom config of my reverse proxy

location / {
      proxy_pass http://jobiq;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
}

Moreover, there is a big fat warning on the custom config page:

Please note, that any add_header or set_header directives added here will not be used by nginx. You will have to add a custom location '/' and add the header in the custom config there.

[UPDATE]

I found that I had some issues in the server config; this version of custom/http.conf made my server start:

upstream jobiq {
    server server1.com weight=100;
    server 127.0.0.1:3020;
  }

The issue is that I am now getting "Too many redirects" error ;(

<!-- gh-comment-id:1546856579 -->

@tomitrescak commented on GitHub (May 14, 2023):

I'm adding this as a separate post, as I managed to SOLVE this thanks to @alex14dark and ChatGPT :)

In data/nginx/custom/http.conf you set up your upstream, avoid the server directive (more info at https://nginxproxymanager.com/advanced-config/#custom-nginx-configurations)

upstream backend {
    server server1.com;
    server 127.0.0.1:3020 backup; # or whatever is your config
}

In the "custom configuration" of your proxy add the following:

location / {
      proxy_pass http://backend;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
}

INFO: This will essentially remove the configuration of your endpoint as a reverse proxy (it removes the proxy configuration line) and adds only your custom config. Quite cool, but unclear, I'd say.

MORE INFO for noobs like myself:

If you are redirecting to another server such as server1.com in the configuration above, make sure you configure the endpoint only as HTTP in NPM, and do not request any HTTPS configuration. Maybe someone smarter can explain why; I do not know. All I know is that if the redirected endpoint was configured with HTTPS, I was getting a "too many redirects" error. Maybe this is a huge security hole; please let me know if that is so.

<!-- gh-comment-id:1546873758 -->

@alex14dark commented on GitHub (May 14, 2023):

@tomitrescak Glad to be able to help you!

<!-- gh-comment-id:1546928980 -->

@haumanto commented on GitHub (Nov 9, 2023):

@tomitrescak can the access list still work while using your custom configuration?

<!-- gh-comment-id:1803832372 -->

@leuedaniel commented on GitHub (Jan 19, 2024):

This is the solution for me at this point:
https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/

<!-- gh-comment-id:1901096919 -->

@AustinLeath commented on GitHub (Jan 19, 2024):

Thanks! This is awesome


<!-- gh-comment-id:1901132552 -->

@RobsonMi commented on GitHub (Feb 1, 2024):

> This is the Solution for me in this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/

This looks promising; however, I am interested in UDP load balancing, and I suppose this method won't work for that use case, right?

<!-- gh-comment-id:1921348887 -->

@SuperMiguel commented on GitHub (Mar 3, 2024):

Is this still the best workaround? I have 5 servers that serve the same traffic, and I would like to map them to a single URL using Nginx Proxy Manager.

<!-- gh-comment-id:1975019098 -->

@hoanganht91 commented on GitHub (Mar 26, 2024):

Instead of waiting for NPM to support load balancing, I customized an image based on https://github.com/caprover/nginx-reverse-proxy, so you just connect your domain to this service.

Source code: https://github.com/hoanganht91/nginx-reverse-proxy
Docker image: https://hub.docker.com/r/annh9x/nginx-reverse-proxy

This is an example compose file to test the load-balancing config:

version: '3.8'

services:
  test1:
    image: strm/helloworld-http
  test2:
    image: strm/helloworld-http
  test3:
    image: strm/helloworld-http
  load-balancer:
    image: annh9x/nginx-reverse-proxy
    environment:
      UPSTREAM_HTTP_ADDRESS: 'server test1 weight=1;server test2 weight=2;server test3 weight=3;'
      CLIENT_MAX_BODY_SIZE: 256M
<!-- gh-comment-id:2020020227 -->

@IliyaPIS commented on GitHub (Apr 14, 2024):

Hello everyone, it's working for me.
1. (screenshot)
2. (screenshot)
3.

upstream apimeserverpool {
    ip_hash;
    server 192.168.1.56:8888 max_fails=3 fail_timeout=60s;
    server 192.168.1.96:8888 max_fails=3 fail_timeout=60s;
    keepalive 64;
}

server {
    listen 80;
    listen [::]:80;

    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl on;
    ssl_stapling on;
    ssl_stapling_verify on;

    server_name apimeserver.com;

    ssl_certificate /data/custom_ssl/npm-3/fullchain.pem;
    ssl_certificate_key /data/custom_ssl/npm-3/privkey.pem;

    include conf.d/include/assets.conf;
    include conf.d/include/force-ssl.conf;

    access_log /data/logs/proxy-host-esb_access.log proxy;
    error_log /data/logs/proxy-host-esb_error.log warn;

    location / {
        proxy_pass http://apimeserverpool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
<!-- gh-comment-id:2053981721 -->

@jbhardman commented on GitHub (Sep 24, 2024):

This is the Solution for me in this point: https://silicon.blog/2023/05/17/how-to-load-balance-your-servers-using-nginx-proxy-manager-and-cloudflare/

Not an expert here... but it looks like the conf files modified inside the container, as instructed there, should be mounted as external files at Docker startup? Also, I think this will break other non-load-balanced sites that I set up? Or at least I will manually have to add the header config to each one, because we comment out all the headers in the local proxy.conf?

<!-- gh-comment-id:2371753687 -->

@Timman6866 commented on GitHub (Oct 29, 2024):

Has anyone gotten those steps to work recently? Wherever I try to use "upstream", I get errors like this: nginx: [emerg] "upstream" directive is not allowed here in...

<!-- gh-comment-id:2445316210 -->

@filipemcg commented on GitHub (Jan 3, 2025):

It's not straightforward to set up NPM as a load balancer, so I made a video: https://youtu.be/AwIejcfOAVI?si=bZbxY0HKLAVqeu7O

<!-- gh-comment-id:2569082827 -->

@patrick7 commented on GitHub (Jun 21, 2025):

There's a very simple approach: Create a DNS entry with all your backend servers, and use it in NPM as Forward Hostname. Done. No need to manually edit NPM configs.

backend-servers.yourdomain.com
A 192.0.2.100
A 192.0.2.150

<!-- gh-comment-id:2993738700 -->

@JBlond commented on GitHub (Jun 23, 2025):

> There's a very simple approach: Create a DNS entry with all your backend servers, and use it in NPM as Forward Hostname. Done. No need to manually edit NPM configs.
>
> backend-servers.yourdomain.com A 192.0.2.100 A 192.0.2.150

That is DNS round robin and does not do load balancing.

<!-- gh-comment-id:2995789132 -->

@patrick7 commented on GitHub (Jun 23, 2025):

Are you sure? I use it for my proxmox setup and with every second reload I'm on another server...

<!-- gh-comment-id:2996005511 -->

@JBlond commented on GitHub (Jun 24, 2025):

> Are you sure? I use it for my Proxmox setup, and with every second reload, I'm on another server...

See https://en.wikipedia.org/wiki/Round-robin_DNS and https://www.cloudns.net/wiki/article/182/

<!-- gh-comment-id:2999008468 -->

@patrick7 commented on GitHub (Jun 24, 2025):

I guess you didn't read my post carefully.

<!-- gh-comment-id:3000554924 -->

@aminotran commented on GitHub (Oct 22, 2025):

Try it: nginx-love (https://github.com/TinyActive/nginx-love).
It's ready for load balancing and more.

<!-- gh-comment-id:3434278861 -->