mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #743] How to tune performance for big data files? #425
Originally created by @wangf8406 on GitHub (Apr 4, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/743
Version of s3fs being used (s3fs --version)
example: 1.87
Details about issue
Hi there, we are now working on backing up big files (usually 200 GB to 400 GB). How should we configure the options so that s3fs gives the best performance? The connection bandwidth is sufficient to reach full transfer speed; the main scenario is transferring these big files.
@gaul commented on GitHub (Jan 26, 2019):
You should try tuning the -o multipart_size flag, since its default of 10 MB is low for large files. This should influence both read and write speeds. Please share the results of your test, which other users will value!

@jonassvatos commented on GitHub (Feb 18, 2019):
I can confirm a huge increase in read performance after increasing the part size. In our case, a 300% increase on large files (hundreds of GBs) with -o multipart_size=100, using MinIO as a backend.

@gaul commented on GitHub (Jun 25, 2019):
#941 tracks automatically tuning the part size.
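For context on why the part size matters so much for 200-400 GB files: S3's multipart upload API caps an upload at 10,000 parts, so the part size bounds the maximum object size (this limit is an AWS constraint I'm adding for background, not something stated in the thread). A quick sketch of the arithmetic, with the file size as an illustrative placeholder:

```shell
#!/bin/sh
# Sketch: part counts for a large file at different multipart_size
# values. S3 allows at most 10,000 parts per multipart upload, so the
# part size also caps the maximum uploadable object size.
file_gb=300   # placeholder file size, in the 200-400 GB range from the issue
for part_mb in 10 100; do
  parts=$(( file_gb * 1024 / part_mb ))
  echo "part_size=${part_mb}MB -> ${parts} parts for a ${file_gb} GB file"
done
# At the default 10 MB parts, a 300 GB file would need 30720 parts,
# well over the 10,000 limit; at 100 MB parts it needs only 3072.
```

So beyond raw throughput, increasing -o multipart_size (e.g. to 100, as in the comment above) is effectively required for objects in this size range.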