mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #1229] Fail to copy 230GB file to s3 ceph using s3fs #659
Originally created by @alphainets on GitHub (Jan 20, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1229
Version of s3fs being used (`s3fs --version`): 1.85
Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, `dpkg -s fuse`): 2.9.2
Kernel information (`uname -r`): 5.4.6-1.el7.elrepo.x86_64
GNU/Linux Distribution, if applicable (`cat /etc/os-release`): centos-7
s3fs command line used, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Details about issue
I have a problem uploading a 230 GB file to a Ceph S3 server. I saw s3fs caching the file locally, but when the copy was done, the object size in S3 was 0. Is there a configuration mistake on my side? I have no problem when the file is small, say 20 GB.
@alphainets commented on GitHub (Jan 20, 2020):
After reading the docs carefully, this should be related to the default multipart size. I have changed the multipart size to fit my needs. Now testing; if everything is okay, I will close the issue.
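For context, this failure mode is consistent with the S3 multipart upload limit of 10,000 parts per object: with s3fs's default part size of 10 MB, the largest uploadable file is roughly 97 GB, so a 230 GB copy fails while a 20 GB one succeeds. A minimal sketch of the arithmetic (the 10 MB default and the 10,000-part cap are the assumptions here):

```shell
# S3 multipart uploads allow at most 10,000 parts per object.
# With s3fs's default multipart_size of 10 MB, the ceiling is:
#   10 MB * 10000 parts = ~97.6 GB, which a 230 GB file exceeds.
FILE_MB=$((230 * 1024))                       # 230 GB expressed in MB
MIN_PART_MB=$(( (FILE_MB + 9999) / 10000 ))   # ceiling division over 10,000 parts
echo "minimum multipart_size: ${MIN_PART_MB} MB"
```

So any `-o multipart_size` of 24 MB or more (the option takes a value in MB) should cover a 230 GB file; a mount line such as `s3fs mybucket /mnt/s3 -o multipart_size=64` (bucket and mountpoint are placeholders) leaves comfortable headroom.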