mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #1591] Multipart upload with first part size < 5MB #834
Originally created by @CarstenGrohmann on GitHub (Mar 3, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1591
In February I ran a test with commit 4c6690f and uploaded more than 20000 files / 2 TB to a local S3 appliance. Some uploads started with a segment size smaller than 5MB. My appliance accepted these uploads, although this violates the AWS S3 standards.
Example: the first segment is smaller than 5MB, the following ones are 10MB, and the final one is 1.6 MB.
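For reference, AWS S3 requires every part of a multipart upload except the last to be at least 5 MiB; only the final part may be smaller. A minimal sketch (illustrative Python, not s3fs code) that checks a list of part sizes against that rule:

```python
# Sketch: validate multipart part sizes against the AWS S3 rule that
# every part except the final one must be at least 5 MiB.
MIN_PART_SIZE = 5 * 1024 * 1024  # 5 MiB
MB = 1024 * 1024

def parts_are_valid(sizes):
    """Return True if all parts except the final one are >= 5 MiB."""
    return all(s >= MIN_PART_SIZE for s in sizes[:-1])

# The layout from this report: first part < 5 MiB, middle parts 10MB,
# final part 1.6MB -- invalid, because a non-final part is too small.
print(parts_are_valid([int(1.6 * MB), 10 * MB, 10 * MB, int(1.6 * MB)]))  # False

# A conforming layout: only the final part is small.
print(parts_are_valid([10 * MB, 10 * MB, int(1.6 * MB)]))  # True
```

This is why the appliance accepting such an upload is lenient behavior rather than something a client should rely on.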
Source file:
Extract from the debug log:
The debug output has been reworked to hide internal names.
Version of s3fs being used (s3fs --version)
Commit: 4c6690f
s3fs command line
@gaul commented on GitHub (Mar 3, 2021):
I am confused about the flags you use, since nomultipart is set in your example. Can you provide the exact steps you took that reproduce these symptoms?

@CarstenGrohmann commented on GitHub (Mar 4, 2021):
Currently s3fs uses multipart uploads even if nomultipart is set. I've filed a separate issue #1595 to address this question, because it looks like a bug in s3fs. From my current perspective the nomultipart flag can be ignored in this case.

I see this issue only if I use multiple parallel rsync instances to copy files to my bucket. I couldn't reproduce it with a single cp command. I'll try simplifying the test case and share the results.

@CarstenGrohmann commented on GitHub (Mar 5, 2021):
That's probably the same /tmp issue as described in #1595. I will try to reproduce the problem with the new knowledge and attach a short test case.

@CarstenGrohmann commented on GitHub (Mar 8, 2021):
Steps to reproduce:
1. Fill /tmp except for 1..2 MB
2. Start s3fs
3. Copy a file to S3
4. Check the output:
   - The first uploaded part is just 1835008 bytes
   - The second uploaded part has the right size of 100MB, as specified with -o multipart_size=100

@gaul commented on GitHub (Apr 23, 2021):
@CarstenGrohmann Is this issue still valid?
@CarstenGrohmann commented on GitHub (Apr 24, 2021):
I would contribute three changes to address this issue:
739f499 (not started yet)
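For context on the numbers in the reproduction above: with -o multipart_size=100, a conforming splitter would cut every part except the last at exactly 100MB, so a first part of 1835008 bytes means the part was cut short (here, apparently at whatever fit into the nearly full /tmp). A minimal sketch (illustrative Python, not s3fs code) of the layout a conforming splitter would produce:

```python
# Sketch: plan part sizes for a file, given the -o multipart_size value
# in MB. Every part except the last is cut at exactly multipart_size.
MiB = 1024 * 1024

def plan_parts(file_size, multipart_size_mb):
    """Return the list of part sizes a conforming splitter would emit."""
    part = multipart_size_mb * MiB
    sizes = []
    while file_size > 0:
        sizes.append(min(part, file_size))
        file_size -= sizes[-1]
    return sizes

# A 250 MiB file with multipart_size=100: only the final part is short.
print(plan_parts(250 * MiB, 100))  # parts of 100 MiB, 100 MiB, 50 MiB
```

Under this layout the observed 1835008-byte first part cannot occur; it can only appear if the splitter truncates a part for an external reason such as exhausted temporary space.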