[GH-ISSUE #1180] Question - HTTP Throughput/Performance Over NPM #964

Closed
opened 2026-02-26 06:35:14 +03:00 by kerem · 4 comments

Originally created by @jiriteach on GitHub (Jun 17, 2021).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1180

I wanted to check if anyone knows why the throughput of a simple HTTP proxy rule over NPM would max out at ~250 mb/sec.

I had iperf3 set up to test the server to make sure that's not the issue, and from client to server (the same server running NPM) I can get ~980 mb/sec. I then set up a web-based speed test (internal), and over the server IP I get ~980 mb/sec. Over NPM I can only get ~250 mb/sec.

Any options to increase throughput? I know it's not the server running NPM.
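One quick way to quantify the direct-vs-proxied gap (a sketch; `backend.internal` and `proxy.example.com` are placeholder hostnames, and `/testfile.bin` is an assumed large test file) is curl's `%{speed_download}` write-out variable, which reports the average transfer rate in bytes per second:

```shell
# Fetch the same large file directly and through the proxy;
# %{speed_download} is the mean download speed in bytes/sec.
direct=$(curl -s -o /dev/null -w '%{speed_download}' http://backend.internal/testfile.bin)
proxied=$(curl -s -o /dev/null -w '%{speed_download}' http://proxy.example.com/testfile.bin)

# Convert bytes/sec to Mbit/s (1 Mbit = 125000 bytes).
awk -v d="$direct" -v p="$proxied" \
  'BEGIN { printf "direct: %.0f Mbit/s, proxied: %.0f Mbit/s\n", d/125000, p/125000 }'
```

Running this a few times against the same file rules out disk caching and iperf3 measurement differences as explanations for the gap.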

kerem closed this issue 2026-02-26 06:35:14 +03:00

@chaptergy commented on GitHub (Jun 17, 2021):

Since NPM is only a frontend for configuring nginx, throughput depends on nginx itself: NPM just generates configuration files for it. So to speed up your proxy, look for information on how to make nginx in Docker faster, such as [this StackOverflow question](https://stackoverflow.com/questions/49023800/performance-issues-running-nginx-in-a-docker-container/49274333#49274333) and the [corresponding `docker-compose` docs](https://docs.docker.com/compose/compose-file/compose-file-v3/#network_mode).

Short summary:
The docker docs contain the following:

NETWORK: HOST
Compared to the default bridge mode, the host mode gives significantly better networking performance since it uses the host’s native networking stack whereas the bridge has to go through one level of virtualization through the docker daemon.
It is recommended to run containers in this mode when their networking performance is critical, for example, a production Load Balancer or a High Performance Web Server.

To do this via docker-compose, add network_mode: host to your docker-compose file.

```yaml
services:
  app:
    ...
    network_mode: host
    ...
```

Make sure connecting to the db is still possible (if you use the suggested docker-compose file, the DB container is only available in its own isolated network, which the app is then no longer a part of). If you can't get it to work, use the SQLite database.
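Putting those pieces together, a minimal compose sketch (assuming the standard `jc21/nginx-proxy-manager` image and its documented `DB_SQLITE_FILE` variable; volume paths are illustrative) that uses host networking with SQLite, sidestepping the DB-network problem entirely:

```yaml
# Sketch only -- adjust the image tag and volume paths to your setup.
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: unless-stopped
    network_mode: host          # bypasses the Docker bridge entirely
    environment:
      DB_SQLITE_FILE: "/data/database.sqlite"  # no separate DB container needed
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

With host networking there is no `ports:` mapping; nginx binds 80/443 on the host directly, so make sure nothing else is listening there.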


@jiriteach commented on GitHub (Jun 17, 2021):

Awesome - Thanks


@rrolla commented on GitHub (Jan 16, 2022):

I tested the network following this tutorial: https://jtway.co/docker-network-performance-b95bce32b4b9 and the bridge results were basically the same as host.

```bash
docker --version
Docker version 20.10.12, build e91ed57
```

And after some more tests I found that `logs/fallback_error.log` contains:

```
2022/01/16 14:03:40 [warn] 12404#12404: 1024 worker_connections are not enough, reusing connections
```

So the performance issue may be related to `worker_connections`: #1435
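Since the container's stock `nginx.conf` (quoted in full later in this thread) pulls `/data/nginx/custom/events[.]conf` into its `events` block, one hedged way to raise the limit is to mount a custom file there:

```nginx
# /data/nginx/custom/events.conf -- value is illustrative; size it to the
# number of concurrent connections each worker should be able to hold.
# NOTE: if your image's base config already sets worker_connections in
# the events block, nginx will reject the duplicate directive and you
# would need to override nginx.conf itself instead.
worker_connections 8192;
```

After adding the file, restart the container and re-check `fallback_error.log` to confirm the warning is gone.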

<!-- gh-comment-id:1013895787 --> @rrolla commented on GitHub (Jan 16, 2022): I tested the network from this tutorial: https://jtway.co/docker-network-performance-b95bce32b4b9 and the bridge results basically was the same as host. ```bash docker --version Docker version 20.10.12, build e91ed57 ``` And after some more tests I found out that on logs/fallback_error.log there are ```bash 2022/01/16 14:03:40 [warn] 12404#12404: 1024 worker_connections are not enough, reusing connections ``` So performance issue can be related to worker_connections : #1435

@Snickers333 commented on GitHub (Jan 15, 2025):

Has anyone resolved this issue? I think I have tried all the possible configurations for buffering, caching, compressing, etc., as well as `network_mode: host`. Nothing is helping here.

All I can add is that the halving of speed happens both on LAN and WAN.
Here are my tests when connecting from a Windows VM on the same host:

[screenshot: speed test result through NPM]

compared to the local IP:

[screenshot: speed test result against the server's local IP]

nginx.conf

```nginx
daemon off;
pid /run/nginx/nginx.pid;
user npm;
worker_processes auto;
worker_rlimit_nofile 100000;
pcre_jit on;
error_log /data/logs/fallback_error.log warn;
include /etc/nginx/modules/*.conf;
include /data/nginx/custom/root_top[.]conf;

events {
  worker_connections 2048;
  multi_accept on;
  include /data/nginx/custom/events[.]conf;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  sendfile on;
  server_tokens off;
  tcp_nopush on;
  tcp_nodelay on;
  client_body_temp_path /tmp/nginx/body 1 2;
  keepalive_timeout 90s;
  proxy_connect_timeout 90s;
  proxy_send_timeout 90s;
  proxy_read_timeout 90s;
  ssl_prefer_server_ciphers on;
  gzip on;
  proxy_ignore_client_abort off;
  client_max_body_size 2000m;
  server_names_hash_bucket_size 1024;
  proxy_http_version 1.1;
  proxy_set_header X-Forwarded-Scheme $scheme;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Accept-Encoding "";
  proxy_cache off;
  proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
  proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;
  include /etc/nginx/conf.d/include/log.conf;
  include /etc/nginx/conf.d/include/resolvers.conf;

  map $host $forward_scheme {
    default http;
  }

  set_real_ip_from 10.0.0.0/8;
  set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
  set_real_ip_from 192.168.0.0/16;
  include conf.d/include/ip_ranges.conf;
  real_ip_header X-Real-IP;
  real_ip_recursive on;
  include /data/nginx/custom/http_top[.]conf;
  include /etc/nginx/conf.d/*.conf;
  include /data/nginx/default_host/*.conf;
  include /data/nginx/proxy_host/*.conf;
  include /data/nginx/redirection_host/*.conf;
  include /data/nginx/dead_host/*.conf;
  include /data/nginx/temp/*.conf;
  include /data/nginx/custom/http[.]conf;
}

stream {
  include /data/nginx/stream/*.conf;
  include /data/nginx/custom/stream[.]conf;
}

include /data/nginx/custom/root[.]conf;
```

Custom nginx configuration

```nginx
proxy_buffering off;
client_max_body_size 100000M;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "0";
add_header X-Content-Type-Options "nosniff";
```

It is the same when accessing from WAN. For example, on my 700/700 link I only get around 350 Mbps.

I would be glad if anyone can help here. Thanks a lot!
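One thing worth trying on fast links: with `proxy_buffering off`, nginx relays the upstream response in small synchronous chunks, which can itself cap throughput. A hedged sketch for NPM's per-host custom configuration that re-enables buffering with larger buffers (all values are illustrative, not tested on this setup):

```nginx
# Illustrative tuning only -- size to your link speed and memory budget.
proxy_buffering on;
proxy_buffers 16 64k;          # 16 buffers of 64 KiB per connection
proxy_buffer_size 64k;         # buffer used for the response headers
proxy_busy_buffers_size 128k;  # cap on buffers busy sending to the client
```

These are standard nginx `ngx_http_proxy_module` directives; re-run the speed test after each change so the effect of buffering can be isolated from the other settings above.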
