mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #183] ParallelMultipartUploadRequest suffers from head of line blocking #100
Originally created by @gaul on GitHub (May 4, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/183
s3fs issues multipart uploads in batches (defaulting to 5) and does not issue the next batch until the current one completes. If one of these uploads completes slowly, it blocks subsequent uploads and hurts performance. Instead of the blocking curl_multi.Request call, s3fs should issue subsequent uploads as soon as previous ones complete.

@deguich commented on GitHub (Mar 17, 2016):
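The fix the issue proposes can be sketched as a sliding window: keep a fixed number of part uploads in flight and start the next part as soon as any one of them finishes, instead of waiting for the whole batch. The following is an illustrative Python sketch, not s3fs's actual C++ code; upload_part is a hypothetical stand-in for a single S3 UploadPart request.

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

MAX_IN_FLIGHT = 5  # same concurrency as s3fs's default batch size

def upload_part(part_number):
    # Placeholder for a real S3 UploadPart request.
    return part_number

def upload_all(parts):
    """Upload parts with a sliding window instead of fixed batches."""
    results = []
    with ThreadPoolExecutor(max_workers=MAX_IN_FLIGHT) as pool:
        pending = set()
        for part in parts:
            if len(pending) >= MAX_IN_FLIGHT:
                # Block until ANY in-flight part finishes, not the whole
                # batch -- one slow part no longer stalls the pipeline.
                done, pending = wait(pending, return_when=FIRST_COMPLETED)
                results.extend(f.result() for f in done)
            pending.add(pool.submit(upload_part, part))
        done, _ = wait(pending)  # drain the remaining in-flight parts
        results.extend(f.result() for f in done)
    return results
```

With a fixed batch, the batch's completion time is the maximum of its five parts; with the window above, a slow part only occupies one of the five slots while the other four keep cycling.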
Hi,
Indeed, it seems that s3fs, like s3cmd, does not really multi-thread its requests to the S3 server. The S3 server log (a Ceph radosgw) shows the multipart chunks being uploaded one by one. Fixing this is a real opportunity to increase s3fs performance.
Here is an example with s3fs:
fich_4G is 3788 MB.
3788/164 ≈ 23 MB/s
With multithreading, it should be possible to get up to the radosgw limit.
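The throughput figure above is simple division; a quick check (taking 164 as the elapsed upload time in seconds, as the units imply):

```python
# Back-of-envelope check of the s3fs numbers quoted above.
size_mb = 3788     # size of fich_4G in MB
elapsed_s = 164    # implied elapsed upload time in seconds
throughput = size_mb / elapsed_s
print(f"{throughput:.0f} MB/s")  # -> 23 MB/s with serialized part uploads
```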
Upload example with radula (a multithreaded Python boto wrapper):