[GH-ISSUE #1056] Uploading files larger than available tmp space fails (NoCacheLoadAndPost) #579
Originally created by @bmeekhof on GitHub (Jun 27, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1056
Version of s3fs being used (s3fs --version)
V1.85 (commit:a78d8d1) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.2-11.el7
Kernel information (uname -r)
3.10.0-957.10.1.el7.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Scientific Linux"
VERSION="7.6 (Nitrogen)"
ID="scientific"
ID_LIKE="rhel centos fedora"
VERSION_ID="7.6"
PRETTY_NAME="Scientific Linux 7.6 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.6:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"
REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"
s3fs command line used, if applicable
s3fs output
Details about issue
Files that exceed the available space in /tmp fail to upload with an I/O error, along with the s3fs errors noted in the logs above. If a file fits into /tmp, it does not appear to go down the same NoCache code path, and the upload works fine. Likewise, if I specify a cache directory with enough space, the issue is not triggered. I am using the latest GitHub commit, compiled on the system described above. The `ensure_diskfree` option can also be used to simulate the issue: if writing a file would push free space below the diskfree requirement, the same error is encountered.
Result of copy command:
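As a rough reproduction sketch, not taken from the original report: the bucket name, mount point, paths, and sizes below are all hypothetical, and `ensure_diskfree` takes a value in MB.

# Choose an ensure_diskfree value close to the free space of the
# filesystem holding /tmp, so the test file cannot fit in the margin.
s3fs mybucket /mnt/s3 -o ensure_diskfree=8192

# Create a file larger than the remaining free-space margin.
dd if=/dev/zero of=/var/tmp/big.bin bs=1M count=2048

# Per the report, this copy fails with an I/O error.
cp /var/tmp/big.bin /mnt/s3/

# Workaround described above: a cache directory with enough space
# avoids the failing NoCache code path.
fusermount -u /mnt/s3
s3fs mybucket /mnt/s3 -o use_cache=/var/cache/s3fs
cp /var/tmp/big.bin /mnt/s3/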
Another issue, which refers specifically to 10GB files, also seems likely to be the same problem. I would not be surprised if the reason a 10GB file fails is that 10GB is the limit of /tmp on that particular system: https://github.com/s3fs-fuse/s3fs-fuse/issues/1033
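One way to check that hypothesis on an affected system is to compare the failing file against the capacity of /tmp (a sketch; the file path is hypothetical):

# Free space in /tmp, which the report identifies as the staging
# area for uploads when no cache directory is configured.
df -h /tmp

# Size of the file that fails to upload (hypothetical path).
ls -lh /path/to/bigfile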
@gaul commented on GitHub (Oct 10, 2020):
Related to #1257.