[GH-ISSUE #1450] The latest version of s3fs reports an IO error when continuously appending to a single file #759
Originally created by @xrefft on GitHub (Oct 14, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1450
The latest version of s3fs reports an IO error when continuously appending to a single file.
We noticed the following error in the curldbg log, which probably indicates a concurrency problem:

    [ERR] curl.cpp:RequestPerform(2596): ### CURLE_GOT_NOTHING

Additional Information
Version of s3fs being used (s3fs --version)
It happened on s3fs v1.87 (commit: 8b7dd82) with OpenSSL.
Commit #1448.
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Kernel information (uname -r)
3.10.0-957.27.2.el7.x86_64
GNU/Linux Distribution (cat /etc/os-release)
s3fs command line used
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Details about issue
As described above, continuously appending to a single file causes an IO error, and the curldbg log shows [ERR] curl.cpp:RequestPerform(2596): ### CURLE_GOT_NOTHING, which probably indicates a concurrency problem.
PS: When appending via the AWS SDK directly, OSS does not report this error; the error occurs only through s3fs.
Python test code we use is:
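(The original snippet was not preserved in this mirror. A minimal sketch of the append workload described above, assuming the bucket is mounted at /mnt/s3 and writing to a hypothetical file test.log:)

    #!/usr/bin/env python3
    # Hypothetical reconstruction of the test: continuously append small
    # chunks to one file on the s3fs mount until an OSError surfaces.
    import time

    PATH = "/mnt/s3/test.log"  # assumed mount point and file name

    with open(PATH, "a") as f:
        for i in range(100000):
            f.write("line %d\n" % i)
            f.flush()              # push each append through to s3fs
            time.sleep(0.01)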
@dhananjays commented on GitHub (Jul 8, 2021):
We believe we're facing the same issue since we started using v1.89. Did you figure out a solution/root-cause/workaround for this, @xrefft?
Mount commands we have tried:
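(The actual mount commands were not preserved in this mirror. A hypothetical sketch of their general form, assuming bucket name my-bucket and mount point /mnt/s3; the only stated fact is that no disk cache option was used:)

    # Hypothetical commands: bucket name, mount point, and options are assumptions.
    s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other
    s3fs my-bucket /mnt/s3 -o iam_role=auto -o allow_other -o multipart_size=64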
You should note that while we have not specified any disk cache in either command above, we create /tmp as a tmpfs before running the above commands (so as to avoid disk activity on EC2 EBS volumes), like this:
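(The exact tmpfs command was not preserved in this mirror; a standard form, with a hypothetical size, would be:)

    # Hypothetical size; the value actually used was not preserved.
    mount -t tmpfs -o size=4g tmpfs /tmp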