[GH-ISSUE #499] Too many HTTP connections choking the server #277

Closed
opened 2026-03-04 01:43:58 +03:00 by kerem · 3 comments
Owner

Originally created by @viggyprabhu on GitHub (Nov 16, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/499

I am using s3fs to mount an S3 bucket on my EC2 instance. I ran a script that created close to 300 files in a matter of minutes, and as a result s3fs left so many HTTP connections in the TIME_WAIT state that the server was choked and my web server became unresponsive. When I unmounted the s3fs partition, the connections were closed. However, when I mount the s3fs partition again, I again see a large number of HTTP connections choking the server. I can reproduce this every time I mount the s3fs partition. I mounted the same bucket on another server and do not see the issue there. What can I do to ensure this number of HTTP connections is not created again and again?
kerem closed this issue 2026-03-04 01:43:59 +03:00

@ggtakec commented on GitHub (Jan 7, 2017):

@viggyprabhu I'm sorry for my late reply.
You can control the number of parallel requests made by s3fs with the multireq_max and parallel_count options.
multireq_max sets the number of parallel requests used when listing objects (files).
parallel_count sets the number of parallel requests used when uploading large objects.
You can also enable caching (stat and file caches) to reduce the number of list/download requests.

Please see man s3fs or the [wiki page](https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon#options).
Thanks in advance for your assistance.
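As an illustration, the options above can be passed at mount time. The bucket name, mount point, cache directory, and option values below are placeholders; tune them for your workload, and check `man s3fs` for the defaults in your version:

```shell
# Mount with a reduced number of parallel requests and a local file cache.
# "mybucket" and /mnt/s3 are hypothetical; lower values mean fewer
# concurrent HTTP connections at the cost of slower listing/uploads.
s3fs mybucket /mnt/s3 \
  -o multireq_max=5 \
  -o parallel_count=2 \
  -o use_cache=/tmp/s3fs-cache
```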

@jigneshkhatri commented on GitHub (Jun 12, 2020):

@ggtakec Once we have mounted an S3 bucket on EC2, how can we change the values of the multireq_max and parallel_count parameters?

@ggtakec commented on GitHub (Aug 2, 2020):

@jigneshkhatri We do not provide a way to change these options after mounting.
You must unmount and remount (umount/mount) to change these values.
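A sketch of that remount procedure, with placeholder paths and option values:

```shell
# Unmount the existing s3fs mount point (requires no open files under it).
fusermount -u /mnt/s3        # or: sudo umount /mnt/s3

# Remount the same bucket with the new option values.
s3fs mybucket /mnt/s3 -o multireq_max=5 -o parallel_count=2
```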