mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2068] large file upload failed #1044
Originally created by @welyss on GitHub (Nov 29, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2068
Version of s3fs being used (`s3fs --version`): Amazon Simple Storage Service File System V1.91 (commit:unknown) with OpenSSL
Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`): 2.9.2
Kernel information (`uname -r`): 3.10.0-1062.el7.x86_64
GNU/Linux Distribution, if applicable (`cat /etc/os-release`): CentOS Linux release 7.7.1908 (Core)
How to run s3fs, if applicable: via the following /etc/fstab entry
dbsbackup /wystest fuse.s3fs _netdev,allow_other,use_path_request_style,url=http://thor.dev.hbec.com/namespaces/rook-ceph/services/rook-ceph-rgw-hbec-store,uid=1000,gid=1001,logfile=/root/s3fs.log,dbglevel=info,curldbg 0 0
s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs outputs): https://github.com/welyss/mypubinfo/blob/38758f375da661ac434fa81fa64a6b9abeac101d/s3fs_bak.log
Details about issue
Uploads fail for files larger than max_dirty_data (default "5120"). In my case the file is 6 GB (a 4 GB file uploads successfully). When upload progress gets close to 5 GB, CopyMultipartPost is called; up to that point everything is fine. Once the 5 GB portion completes and the remaining 2 GB starts to upload, an error is thrown and the process never finishes, hanging in the foreground.
@welyss commented on GitHub (Nov 30, 2022):
Mounting with the nocopyapi option works for me.
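The workaround can be sketched against the /etc/fstab entry from the report: nocopyapi tells s3fs not to use the server-side copy API (the code path that issues CopyMultipartPost), which some non-AWS S3 implementations do not fully support. A minimal sketch, assuming the same bucket, mount point, and endpoint as above, with nocopyapi appended to the option list:

```
dbsbackup /wystest fuse.s3fs _netdev,allow_other,use_path_request_style,nocopyapi,url=http://thor.dev.hbec.com/namespaces/rook-ceph/services/rook-ceph-rgw-hbec-store,uid=1000,gid=1001,logfile=/root/s3fs.log,dbglevel=info,curldbg 0 0
```

Note that nocopyapi can slow down operations that would otherwise be server-side copies (such as renames of large objects), since the data is re-uploaded instead.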