Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 13:26:00 +03:00)
[GH-ISSUE #1058] Fragmentation parameters do not work when the file size is greater than 8MB and less than 16MB #582
Originally created by @threadfly on GitHub (Jun 30, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1058
Additional Information
s3fs has a multipart_size option, but when multipart_size is set to 8MB and the file size is greater than 8MB but less than 16MB, the option does not take effect: s3fs still uploads the file with a single normal PUT instead of a multipart upload.
Version of s3fs being used (s3fs --version)
s3fs-fuse-1.83
s3fs-fuse-1.85
*) I only compiled and tested these two versions.
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
fuse-2.9.2-10.el7.x86_64
fuse-libs-2.9.2-11.el7.x86_64
Kernel information (uname -r)
5.1.5-1.el7.elrepo.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
CentOS Linux 7 (Core)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel or curldbg option, you can get detailed debug messages.
Details about issue
@gaul commented on GitHub (Jul 2, 2019):
FdEntity::RowFlush checks whether the file is at least double the multipart size before issuing a multipart upload. This is a bit weird, but does it pose a practical problem?
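The size check described above can be sketched as follows. `FdEntity::RowFlush` is real s3fs code, but the function below is a simplified, hypothetical model of its "double the multipart size" threshold, not the actual implementation:

```python
MB = 1024 * 1024

def chooses_multipart(file_size: int, multipart_size_mb: int = 8) -> bool:
    """Hypothetical model of the check described in the comment above:
    multipart upload is only chosen when the file is at least twice
    the configured multipart_size; otherwise a plain PUT is used."""
    return file_size >= 2 * multipart_size_mb * MB

# A 12 MB file with multipart_size=8 falls into the gap this issue describes:
print(chooses_multipart(12 * MB))  # under 16 MB -> False, plain PUT
print(chooses_multipart(20 * MB))  # 16 MB or more -> True, multipart upload
```

Under this model, any file between multipart_size and 2 × multipart_size is uploaded with a single PUT, which matches the 8MB–16MB window reported in the issue.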
@threadfly commented on GitHub (Jul 3, 2019):
@gaul Some cloud storage implementations of the S3 protocol appear to restrict the size of objects that can be uploaded with a single PUT, so this behavior causes a practical problem for files in that size range.