[GH-ISSUE #743] How to tune performance for big data files? #425

Closed
opened 2026-03-04 01:45:27 +03:00 by kerem · 3 comments
Owner

Originally created by @wangf8406 on GitHub (Apr 4, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/743

Version of s3fs being used (s3fs --version)

_example: 1.87

Details about issue

Hi there, we are now working on backing up big files (usually 200 GB to 400 GB). How should we configure the options so that s3fs gives the best performance? The connection bandwidth is already sufficient, so it is not the bottleneck; the main scenario is transferring these big files.

kerem closed this issue 2026-03-04 01:45:27 +03:00

@gaul commented on GitHub (Jan 26, 2019):

You should try tuning the `-o multipart_size` flag, since its default of 10 MB is low for large files. This should improve both read and write speeds. Please share your test results; other users will value them!
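A mount invocation with a larger part size might look like the following sketch. The bucket name, mount point, credentials path, and the 128 MB value are placeholders for illustration, not values from this thread:

```shell
# Hypothetical example: "mybucket" and /mnt/s3 are placeholders.
# multipart_size is given in MB; the s3fs default is 10 MB.
s3fs mybucket /mnt/s3 \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o multipart_size=128
```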


@jonassvatos commented on GitHub (Feb 18, 2019):

I can confirm a huge increase in read performance after increasing the part size.
In our case, a 300% increase on large files (hundreds of GBs) with `-o multipart_size=100`, using MinIO as a backend.
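The arithmetic behind the gain can be sketched as follows (a rough illustration, assuming a 300 GiB file and `multipart_size` in MB as s3fs uses it). Fewer parts means less per-request overhead, and the default would also run past S3's 10,000-part limit per multipart upload:

```shell
#!/bin/sh
# Rough part-count arithmetic, sizes in MiB, for a 300 GiB file.
FILE_MIB=$((300 * 1024))
parts_default=$(( FILE_MIB / 10 ))   # 10 MiB parts (s3fs default)
parts_tuned=$(( FILE_MIB / 100 ))    # 100 MiB parts (as in this comment)
echo "default: $parts_default parts" # 30720 -- exceeds S3's 10,000-part cap
echo "tuned:   $parts_tuned parts"   # 3072
```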


@gaul commented on GitHub (Jun 25, 2019):

#941 tracks automatically tuning the part size.
