mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1243] Error uploading large file when writing via nfs #666
Originally created by @judassssss on GitHub (Feb 6, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1243
Additional Information
I use s3fs-fuse to mount an object-storage bucket and share the mounted directory with others over NFS. When I write a big file (for example 200 MB or more) from an NFS client, s3fs splits the file for a multipart upload, but the split is repeated many times, resulting in a file error.
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.86 (commit:bb20fc3) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Version: 2.9.4-1ubuntu3.1
Kernel information (uname -r)
4.4.0-131-generic
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Description: Ubuntu 16.04.5 LTS
s3fs command line used, if applicable
s3fs bucket /s3fs/ -o url=https://xxx.com -o use_cache=/cache -o use_path_request_style -o sigv2 -o curldbg -d
NFS versions used: nfs3, nfs4.1
Details about issue
On the NFS server, I wrote a large file to the mount successfully; the logs show that s3fs split the file only once (e.g., a 200 MB file was divided into 20 parts and uploaded).
But from an NFS client, s3fs splits the file over and over again: for example, a 200 MB file was initially divided into 12 parts, then after a while s3fs divided it into 6 parts again, and I don't know whether this ever stopped. Larger files are more prone to errors.
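The EntityTooSmall error below fits this description: S3 rejects any non-final multipart part smaller than 5 MiB. The following is a sketch (not s3fs's actual code; the part counts are taken from the report above) of why re-splitting the same file into more, smaller parts can push intermediate parts below that floor:

```python
# Hypothetical sketch: S3 requires every multipart part except the
# last to be at least 5 MiB; a non-final part below that limit is
# rejected with HTTP 400 EntityTooSmall.

MIN_PART_SIZE = 5 * 1024 * 1024  # S3's documented minimum for non-final parts

def split(total_bytes, num_parts):
    """Divide total_bytes into num_parts roughly equal parts."""
    base = total_bytes // num_parts
    parts = [base] * num_parts
    parts[-1] += total_bytes - base * num_parts  # remainder goes to last part
    return parts

def valid(parts):
    # Only the final part may be smaller than the 5 MiB minimum.
    return all(p >= MIN_PART_SIZE for p in parts[:-1])

size = 200 * 1024 * 1024            # the 200 MB file from the report
print(valid(split(size, 20)))       # 10 MiB parts: True
print(valid(split(size, 60)))       # ~3.3 MiB parts: False (EntityTooSmall)
```

So if repeated truncation causes s3fs to restart the upload with ever smaller parts, the request will eventually fail with exactly this 400 response.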
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Error logs:
curl.cpp:url_to_host(99): url is https://xxxx.com
[ERR] curl.cpp:RequestPerform(2424): HTTP response code 400, returning EIO. Body Text:
EntityTooSmall: Your proposed upload is smaller than the minimum allowed size
nfs-server / mount s3fs:
nfs client:
@gaul commented on GitHub (Apr 22, 2020):
Could you test with the latest master? It includes a fix for the mixupload optimization.
@gaul commented on GitHub (Jun 4, 2020):
Closing; please reopen if symptoms persist.
@shuangzai21 commented on GitHub (Aug 31, 2021):
Have you solved this problem? I have run into the same issue.