mirror of
https://github.com/NginxProxyManager/nginx-proxy-manager.git
synced 2026-04-25 17:35:52 +03:00
[GH-ISSUE #1180] Question - HTTP Throughput/Performance Over NPM #964
Originally created by @jiriteach on GitHub (Jun 17, 2021).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/1180
I wanted to check if anyone knows why I would be seeing the throughput of a simple HTTP rule over NPM max out at roughly 250 Mbit/s.
I had iperf3 set up to test the server to ensure that's not the issue, and from client to server (the same server running NPM) I can get ~980 Mbit/s. I then set up a web-based speed test (internal), and over the server IP I get ~980 Mbit/s. Over NPM I can only get ~250 Mbit/s.
Are there any options to increase throughput? I know it's not the server running NPM.
@chaptergy commented on GitHub (Jun 17, 2021):
Since NPM is only a frontend for configuring nginx, the throughput depends on nginx itself; NPM just generates nginx configuration files. So to speed up your proxy, look for information on how to make nginx in Docker faster, such as this StackOverflow question and the corresponding Docker Compose docs.
Short summary: the Docker docs note that host networking avoids the bridge/NAT overhead. To do this via docker-compose, add `network_mode: host` to your docker-compose file. Make sure connecting to the DB is still possible (if you use the suggested docker-compose file, the DB container is only available in its own isolated network, which the app would then no longer be part of). If you can't get it to work, use the SQLite database.
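A minimal docker-compose sketch of that change, assuming the standard NPM image and the SQLite option mentioned above (paths and the service name are illustrative):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    # Host networking bypasses Docker's bridge/NAT path, removing the
    # userland proxy / iptables overhead that can cap throughput.
    network_mode: host
    # NOTE: with host networking, any `ports:` mappings are ignored, and the
    # container cannot reach a DB that lives only on an internal Docker
    # network. Using SQLite sidesteps that problem entirely.
    environment:
      DB_SQLITE_FILE: '/data/database.sqlite'
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
```

With `network_mode: host`, the admin UI and proxied ports bind directly on the host's interfaces, so make sure ports 80, 443, and 81 are free there.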
@jiriteach commented on GitHub (Jun 17, 2021):
Awesome - Thanks
@rrolla commented on GitHub (Jan 16, 2022):
I tested the network following this tutorial: https://jtway.co/docker-network-performance-b95bce32b4b9 and the bridge results were basically the same as host.
After some more tests, I found errors in logs/fallback_error.log.
So the performance issue may be related to worker_connections: #1435
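If `worker_connections` really is the bottleneck, the relevant setting lives in the `events{}` block of NPM's stock nginx.conf (shown further down in this thread). A sketch of what a raised limit would look like; the value 4096 is an assumption to tune for your load, not a recommendation from this thread:

```nginx
# events{} block in NPM's stock /etc/nginx/nginx.conf.
# Each worker can hold at most worker_connections simultaneous connections,
# and a proxied request consumes two (client side + upstream side).
events {
    worker_connections 4096;  # stock value is 2048; 4096 is an assumed example
    multi_accept on;
}
```

Note that simply re-declaring `worker_connections` in the custom include (`/data/nginx/custom/events.conf`) is unlikely to work, since nginx normally rejects duplicate directives in the same block; the stock value would need to be changed in place.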
@Snickers333 commented on GitHub (Jan 15, 2025):
Has anyone resolved this issue? I think I have tried all the possible configurations when it comes to buffering, caching, compression, etc., as well as the network_mode: host approach. Nothing is helping here.
All I can add is that the halving of speed happens both on LAN and WAN.
Here are my tests when connecting from a Windows VM on the same host, compared to the local IP: [speed-test screenshots omitted]
nginx.conf:

```nginx
daemon off;
pid /run/nginx/nginx.pid;
user npm;
worker_processes auto;
worker_rlimit_nofile 100000;
pcre_jit on;
error_log /data/logs/fallback_error.log warn;
include /etc/nginx/modules/*.conf;
include /data/nginx/custom/root_top[.]conf;

events {
  worker_connections 2048;
  multi_accept on;
  include /data/nginx/custom/events[.]conf;
}

http {
  include /etc/nginx/mime.types;
  default_type application/octet-stream;
  sendfile on;
  server_tokens off;
  tcp_nopush on;
  tcp_nodelay on;
  client_body_temp_path /tmp/nginx/body 1 2;
  keepalive_timeout 90s;
  proxy_connect_timeout 90s;
  proxy_send_timeout 90s;
  proxy_read_timeout 90s;
  ssl_prefer_server_ciphers on;
  gzip on;
  proxy_ignore_client_abort off;
  client_max_body_size 2000m;
  server_names_hash_bucket_size 1024;
  proxy_http_version 1.1;
  proxy_set_header X-Forwarded-Scheme $scheme;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Accept-Encoding "";
  proxy_cache off;
  proxy_cache_path /var/lib/nginx/cache/public levels=1:2 keys_zone=public-cache:30m max_size=192m;
  proxy_cache_path /var/lib/nginx/cache/private levels=1:2 keys_zone=private-cache:5m max_size=1024m;

  include /etc/nginx/conf.d/include/log.conf;
  include /etc/nginx/conf.d/include/resolvers.conf;

  map $host $forward_scheme {
    default http;
  }

  set_real_ip_from 10.0.0.0/8;
  set_real_ip_from 172.16.0.0/12; # Includes Docker subnet
  set_real_ip_from 192.168.0.0/16;
  include conf.d/include/ip_ranges.conf;
  real_ip_header X-Real-IP;
  real_ip_recursive on;

  include /data/nginx/custom/http_top[.]conf;
  include /etc/nginx/conf.d/*.conf;
  include /data/nginx/default_host/*.conf;
  include /data/nginx/proxy_host/*.conf;
  include /data/nginx/redirection_host/*.conf;
  include /data/nginx/dead_host/*.conf;
  include /data/nginx/temp/*.conf;
  include /data/nginx/custom/http[.]conf;
}

stream {
  include /data/nginx/stream/*.conf;
  include /data/nginx/custom/stream[.]conf;
}

include /data/nginx/custom/root[.]conf;
```

Custom nginx configuration:

```nginx
proxy_buffering off;
client_max_body_size 100000M;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-Protocol $scheme;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_redirect off;
add_header X-Frame-Options "SAMEORIGIN";
add_header X-XSS-Protection "0";
add_header X-Content-Type-Options "nosniff";
```

It is the same when accessing from the WAN; for example, on my 700/700 link I only get around 350 Mbps.
I would be glad if anyone can help here. Thanks a lot!