[GH-ISSUE #1017] 0 byte files #557
Originally created by @aleksandarkostevski on GitHub (Apr 19, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1017
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.85 (commit:381835e) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.4
Kernel information (uname -r)
4.4.0-1079-aws
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Ubuntu"
VERSION="16.04.5 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04.5 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
VERSION_CODENAME=xenial
UBUNTU_CODENAME=xenial
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs#test.cylindev.com /media/bucket fuse _netdev,allow_other,use_path_request_style,multipart_size=400,dbglevel=err,passwd_file=/etc/.passwd-s3fs,uid=ubuntu,gid=ubuntu,umask=0000 0 0
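For reference, the fstab entry above corresponds roughly to this manual mount invocation (a sketch; _netdev is an fstab-only option and is omitted):

# Manual equivalent of the fstab entry (bucket = test.cylindev.com)
s3fs test.cylindev.com /media/bucket \
  -o allow_other,use_path_request_style,multipart_size=400 \
  -o dbglevel=err,passwd_file=/etc/.passwd-s3fs \
  -o uid=ubuntu,gid=ubuntu,umask=0000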
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Details about issue
We are using s3fs as a gateway to S3 for bulk file transfers of around 10 MB per file, with an SMB service providing a network share to those resources (I've tried NFS, but Windows shows better performance over SMB).
Occasionally (roughly 1 in 1000 file transfers; not an exact number) a transferred file ends up 0 bytes in size.
Is this a known issue with s3fs, and is there a way to handle it, or to retry the transfer, when the destination file is 0 bytes in size?
Thanks!
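One possible user-side workaround is to verify the destination size after each copy and retry on failure. A minimal POSIX-shell sketch, with the mount point taken from the fstab entry above and the paths and retry count purely illustrative:

#!/bin/sh
# copy-verify.sh: copy $1 onto the s3fs mount, retrying if the
# destination ends up 0 bytes. Paths and retry count are examples.
SRC="$1"
DST="/media/bucket/$(basename "$1")"
for attempt in 1 2 3; do
    cp "$SRC" "$DST"
    # [ -s FILE ] is true when FILE exists and has size > 0
    if [ -s "$DST" ]; then
        exit 0
    fi
    echo "attempt $attempt left a 0-byte file, retrying" >&2
done
exit 1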
@ggtakec commented on GitHub (Apr 22, 2019):
Some errors may have occurred while transferring the 1000 files.
If you can, please specify the debug options (-d, or -o dbglevel=... and -o curldbg) and try to capture a log.
(The log may be large, but it should include entries for the file that hit the error.)
Thanks in advance for your assistance.
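For example, a debug-enabled foreground mount might look like this (bucket, mount point, and options reused from the fstab entry above; dbglevel=info is one of the documented levels):

# -f keeps s3fs in the foreground so messages go to stdout/stderr
# instead of syslog; curldbg adds libcurl request/response detail.
s3fs test.cylindev.com /media/bucket -f \
  -o allow_other,use_path_request_style,multipart_size=400 \
  -o passwd_file=/etc/.passwd-s3fs,uid=ubuntu,gid=ubuntu,umask=0000 \
  -o dbglevel=info -o curldbg > /tmp/s3fs-debug.log 2>&1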
@gaul commented on GitHub (Jul 4, 2019):
Could you test with master? It includes #1063, which fixes another truncation symptom.
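Building master is a standard autotools flow (assuming the usual build dependencies are installed: automake plus the fuse, curl, libxml2, and openssl dev packages):

# Build and install s3fs from the master branch
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
./autogen.sh
./configure
make
sudo make install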
@gaul commented on GitHub (Feb 3, 2020):
Closing due to inactivity. Please reopen if symptoms persist.