mirror of
https://github.com/NginxProxyManager/nginx-proxy-manager.git
synced 2026-04-25 17:35:52 +03:00
[GH-ISSUE #643] Configure NPM to not interrupt downloads at 1GB #545
Originally created by @dr460nf1r3 on GitHub (Oct 9, 2020).
Original GitHub issue: https://github.com/NginxProxyManager/nginx-proxy-manager/issues/643
Hello,
I'm currently trying to get downloads over 1GB working via nginx.
I'm using an nginx container proxied by NPM, with a custom nginx.conf mounted that contains proxy_max_temp_file_size 0; (which should be the required fix). However, this does not seem to work: downloads are still being interrupted, so I suspect it has something to do with NPM.
How can I change the configuration so that downloads are not interrupted at 1GB?
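Editor's note: for readers unfamiliar with the directive mentioned above, here is a minimal sketch of where it would sit in the mounted nginx.conf. The listen port, server_name, and upstream address are placeholders, not values from this issue:

```nginx
# Minimal sketch of a custom nginx.conf fragment for the proxied
# container. All names and addresses below are placeholders.
server {
    listen 80;
    server_name example.internal;

    location / {
        # 0 disables the cap on buffering proxied responses to
        # temporary files (the default cap is 1024m, i.e. ~1GB).
        proxy_max_temp_file_size 0;
        proxy_pass http://127.0.0.1:8080;
    }
}
```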
@AlpixTM commented on GitHub (Apr 7, 2021):
Same problem here, is there a known fix to that?
@ltonn02 commented on GitHub (Apr 19, 2021):
I'm having the same problem... has anyone found a solution?
@dr460nf1r3 commented on GitHub (Apr 19, 2021):
It seems there is no real interest in providing a solution which is quite a bummer :(
@ltonn02 commented on GitHub (Apr 19, 2021):
Well... I read a little bit about it and got it working for me.
All in all, it seemed to be very straightforward.
All I had to do was to add these two directives to the Advanced section of my proxy host:

```nginx
client_max_body_size 0;
proxy_max_temp_file_size 0;
```

It should now look like this:

(A screenshot of the Advanced tab was attached in the original GitHub comment.)
I'm using the proxy for my nextcloud which is now working without any issues.
I've tried to download a 4.7GB file which worked after applying this fix.
Hope this works for you as well :)
@chaptergy commented on GitHub (May 12, 2021):
Glad you found a solution! But this was more of an nginx issue than an NPM one, so if similar problems pop up in the future, you will probably find solutions much quicker on platforms which provide help for nginx.
@Rustymage commented on GitHub (Aug 29, 2021):
For anyone searching: this behaviour is already tracked upstream and described as "not a bug": https://trac.nginx.org/nginx/ticket/1472
@EgonHeuson commented on GitHub (Aug 12, 2022):
Hello! Thank you VERY much for this solution! It helped me a lot.
I'd have a related question if you allow me. I installed a Nextcloud AIO instance behind my NPM, and I get an error when trying to upload large files (over 1GB approx.). Are the download and upload config both linked to these variables? Do you know how I could resolve this uploading problem? I first thought it was linked to Nextcloud, but after discussing it with the devs (here: https://github.com/nextcloud/all-in-one/discussions/979) I think it's more related to NPM. Don't you think?
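Editor's note: the question above conflates the two directives, so a hedged summary of which direction each one governs may help. This is based on standard nginx semantics, not on anything stated in the thread:

```nginx
# Uploads (client -> proxy -> backend):
# caps the size of the client request body; requests above the
# limit get a 413 "Request Entity Too Large". 0 disables the check.
client_max_body_size 0;

# Downloads (backend -> proxy -> client):
# caps how much of a proxied response nginx buffers to a temporary
# file when the client reads more slowly than the backend sends.
# The default is 1024m, which matches the ~1GB cutoff seen here.
# 0 disables buffering to temp files entirely.
proxy_max_temp_file_size 0;
```

So the earlier fix addresses downloads via proxy_max_temp_file_size and uploads via client_max_body_size; an upload problem with both set suggests a limit elsewhere in the chain.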
@ltonn02 commented on GitHub (Aug 12, 2022):
Hey, since posting my original config, I've run into the same problems you are describing, myself.
I'm not sure if there's anything else that I've changed, but this is my current npm config, which handles large uploads just fine:

(The config was attached as a screenshot in the original GitHub comment.)
If it still won't work for you, I can investigate further into my system, since I remember changing a lot of variables until I finally got it running reliably.
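Editor's note: since the screenshot did not survive the mirror, here is a hedged reconstruction of the Advanced-tab directives, using only values that are actually named elsewhere in this thread (client_max_body_size 0 from the earlier comment, proxy_max_temp_file_size 16384m from the follow-up below). The original config may well have contained additional directives:

```nginx
# Hedged reconstruction of the NPM "Advanced" tab contents.
# Only these two values are named in the thread.
client_max_body_size 0;
proxy_max_temp_file_size 16384m;
```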
@EgonHeuson commented on GitHub (Aug 12, 2022):
@ltonn02 You rox man! I'll try this config tonight! And I'll keep you updated here :-) Thanks a lot!
@EgonHeuson commented on GitHub (Aug 12, 2022):
Just a quick question: why did you change proxy_max_temp_file_size from "0" to "16384m"?
By the way, was it also for Nextcloud on your side?
@ltonn02 commented on GitHub (Aug 12, 2022):
Yes, I'm using it to expose a Nextcloud Docker container to the internet. The proxy_max_temp_file_size is at "16384m" because it worked that way. I must have spent days copying every config I could find from several forums onto my system until this one finally worked. It's very possible that "0" works just fine or even better, but as they say, "never change a winning team" :)
@EgonHeuson commented on GitHub (Aug 12, 2022):
Fully understand what you mean! 😅 I'll try this then :-) Your first config already solved my problem when downloading files from Synology File Station.
@EgonHeuson commented on GitHub (Aug 12, 2022):
Do you have other parameters defined in the other tabs of your host config, by the way?
@ltonn02 commented on GitHub (Aug 12, 2022):
Here you can see all the configs I made inside of npm:

(Screenshots of the proxy host configuration tabs were attached in the original GitHub comment.)
@EgonHeuson commented on GitHub (Aug 12, 2022):
Eheh, so it doesn't work on my side. The solution at the beginning of the discussion does resolve my issue with Synology File Station downloads, so I can now download files over 1GB. :-D
But regarding Nextcloud AIO, nope. Whether I set proxy_max_temp_file_size to 0, 2048, 4096 or even 16384, nothing changes. I still get an error if the uploaded file is over 2GB or so. But it's a false alarm, as the file is correctly uploaded and can be found in the correct folder. So I really don't know what I'm up against here...
@ltonn02 commented on GitHub (Aug 16, 2022):
Hey, I've spent quite some time trying to reproduce your error, but unfortunately I was not able to do so. Anyway, here are some ideas, which might help you solve your problem. (FYI: I'm not using a Synology NAS, maybe there's a problem related to that.)
@EgonHeuson commented on GitHub (Aug 16, 2022):
Hello ltonn02,
Thank you VERY much for all the time you've spent on this. Actually, my Nextcloud is not on my Synology; I installed it in a VM on my Proxmox cluster. But I think the main difference lies in the fact that I installed Nextcloud AIO and not the classic Docker package. It uses Docker, but in a much more automated and packaged way, and I guess this is what creates the issue. I talked with the developers here, but we haven't found the source yet. They concluded that it's due to slow hardware, but my Nextcloud instance runs on a Celeron J3455 with 3 cores allocated and 8GB of DDR3 1600MHz RAM, so that seems a little odd... Anyway, I hope I'll find the issue someday ☺️
Thanks again for everything! Have a nice evening!