[GH-ISSUE #1056] Uploading files larger than available tmp space fails (NoCacheLoadAndPost) #579

Closed
opened 2026-03-04 01:46:53 +03:00 by kerem · 1 comment

Originally created by @bmeekhof on GitHub (Jun 27, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1056

Version of s3fs being used (s3fs --version)

V1.85 (commit:a78d8d1) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2-11.el7

Kernel information (uname -r)

3.10.0-957.10.1.el7.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Scientific Linux"
VERSION="7.6 (Nitrogen)"
ID="scientific"
ID_LIKE="rhel centos fedora"
VERSION_ID="7.6"
PRETTY_NAME="Scientific Linux 7.6 (Nitrogen)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:scientificlinux:scientificlinux:7.6:GA"
HOME_URL="http://www.scientificlinux.org//"
BUG_REPORT_URL="mailto:scientific-linux-devel@listserv.fnal.gov"

REDHAT_BUGZILLA_PRODUCT="Scientific Linux 7"
REDHAT_BUGZILLA_PRODUCT_VERSION=7.6
REDHAT_SUPPORT_PRODUCT="Scientific Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="7.6"

s3fs command line used, if applicable

s3fs bucket /fuse/bucket  -o use_path_request_style -o instance_name="bucket" -o url=https://rgw.our.host -o curldbg -d -d -f 

s3fs output

s3fs output (-d -d -f -o curldbg):

(starting from the last few write commands before error is encountered)....

write[10] 65536 bytes to 1028325376 flags: 0x8001
   write[10] 65536 bytes to 1028325376
   unique: 15699, success, outsize: 24
unique: 15700, opcode: WRITE (16), nodeid: 2, insize: 65616, pid: 424409
write[10] 65536 bytes to 1028390912 flags: 0x8001
   write[10] 65536 bytes to 1028390912
   unique: 15700, success, outsize: 24
unique: 15701, opcode: WRITE (16), nodeid: 2, insize: 65616, pid: 424409
write[10] 65536 bytes to 1028456448 flags: 0x8001
[INF] fdcache.cpp:CleanupCacheDir(2279): cache cleanup requested
[INF]       curl.cpp:PreMultipartPostRequest(3356): [tpath=/1GB_file]
[INF]       curl.cpp:prepare_url(4527): URL is https://rgw.our.host/bucket/1GB_file?uploads
[INF]       curl.cpp:prepare_url(4559): URL changed is https://rgw.our.host/bucket/1GB_file?uploads
[INF]       curl.cpp:insertV4Headers(2610): computing signature [POST] [/1GB_file] [uploads] []
[INF]       curl.cpp:url_to_host(101): url is https://rgw.our.host
* Found bundle for host rgw.our.host: 0x7fc2fc019160
* Re-using existing connection! (#0) with host rgw.our.host
* Connected to rgw.our.host (1.2.3.4) port 443 (#0)
> POST /bucket/1GB_file?uploads HTTP/1.1
... (removed) ....
< HTTP/1.1 200 OK
< x-amz-request-id: tx000000000000000001b2a-005d14ca89-560cee5-default
< Content-Type: application/xml
< Content-Length: 250
< Date: Thu, 27 Jun 2019 13:54:17 GMT
< 
* Connection #0 to host rgw.our.host left intact
[INF]       curl.cpp:RequestPerform(2252): HTTP response code 200
[INF]       fdcache.cpp:NoCacheLoadAndPost(1256): [path=/1GB_file][fd=10][offset=0][size=1028456448]
[INF]       curl.cpp:MultipartUploadRequest(3932): [upload_id=2~_907KyGS1so_p59auKEjqbD03kASPC4][tpath=/1GB_file][fd=10][offset=0][size=10485760]
[INF]       curl.cpp:UploadMultipartPostRequest(3690): [tpath=/1GB_file][start=0][size=10485760][part=1]
[INF]       curl.cpp:UploadMultipartPostSetup(3630): [tpath=/1GB_file][start=0][size=10485760][part=1]
[INF]       curl.cpp:prepare_url(4527): URL is https://rgw.our.host/bucket/1GB_file?partNumber=1&uploadId=2~_907KyGS1so_p59auKEjqbD03kASPC4
[INF]       curl.cpp:prepare_url(4559): URL changed is https://rgw.our.host/bucket/1GB_file?partNumber=1&uploadId=2~_907KyGS1so_p59auKEjqbD03kASPC4
[INF]       curl.cpp:insertV4Headers(2610): computing signature [PUT] [/1GB_file] [partNumber=1&uploadId=2~_907KyGS1so_p59auKEjqbD03kASPC4] [e5b844cc57f57094ea4585e235f36c78c1cd222262bb89d53c94dcb4d6b3e55d]
[INF]       curl.cpp:url_to_host(101): url is https://rgw.our.host
[ERR] curl.cpp:RequestPerform(2399): ###curlCode: 43  msg: A libcurl function was given a bad argument
[ERR] curl.cpp:MultipartUploadRequest(3955): failed uploading part(-5)
[ERR] fdcache.cpp:NoCacheLoadAndPost(1372): failed to multipart post(start=0, size=10485760) for file(10).
[ERR] fdcache.cpp:Write(1739): failed to load uninitialized area and multipart uploading it(errno=-5)
[WAN] s3fs.cpp:s3fs_write(2243): failed to write file(/1GB_file). result=-5
   unique: 15701, error: -5 (Input/output error), outsize: 16
unique: 15702, opcode: FLUSH (25), nodeid: 2, insize: 64, pid: 424409
flush[10]
[INF] s3fs.cpp:s3fs_flush(2267): [path=/1GB_file][fd=10]
[INF]       fdcache.cpp:RowFlush(1478): [tpath=][path=/1GB_file][fd=10]
[INF]       curl.cpp:CompleteMultipartPostRequest(3466): [tpath=/1GB_file][parts=1]
[ERR] curl.cpp:CompleteMultipartPostRequest(3477): 1 file part is not finished uploading.
[ERR] fdcache.cpp:RowFlush(1588): failed to complete(finish) multipart post for file(10).
   unique: 15702, error: -1 (Operation not permitted), outsize: 16
unique: 15703, opcode: RELEASE (18), nodeid: 2, insize: 64, pid: 0
release[10] flags: 0x8001
[INF] s3fs.cpp:s3fs_release(2322): [path=/1GB_file][fd=10]
[INF]       cache.cpp:DelStat(544): delete stat cache entry[path=/1GB_file]
[INF]       fdcache.cpp:GetFdEntity(2077): [path=/1GB_file][fd=10]
   unique: 15703, success, outsize: 16
[INF] s3fs.cpp:s3fs_destroy(3511): destroy

Details about issue

Files which exceed the available space in /tmp fail to upload with an I/O error and the s3fs errors shown in the logs above. If a file fits into /tmp, it does not go down the same NoCache code path and the upload works fine. Likewise, if I specify a cache directory with enough space, the issue is not triggered. I am using the latest GitHub commit, compiled on the same system described above. The 'ensure_diskfree' option can also be used to simulate the issue: if a file would exceed the free-disk requirement, the same error is encountered.
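As a reproduction aid, the failure described above can be triggered without actually filling /tmp by adding the ensure_diskfree option to the mount command already shown; this is a sketch only (the bucket name, mountpoint, and endpoint are the placeholders from the command line above, and the 2000 MB threshold is an arbitrary illustrative value):

```shell
# Sketch: ensure_diskfree takes a size in MB. Any file whose upload would
# push free space in the cache/temp area below this threshold should hit
# the same NoCacheLoadAndPost path and fail identically.
s3fs bucket /fuse/bucket \
    -o use_path_request_style \
    -o url=https://rgw.our.host \
    -o ensure_diskfree=2000 \
    -d -d -f -o curldbg
```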

Result of copy command:

cp 1GB_file /fuse/bucket/ 
cp: error writing ‘/fuse/bucket/1GB_file’: Input/output error
cp: failed to extend ‘/fuse/bucket/1GB_file’: Input/output error
cp: failed to close ‘/fuse/bucket/1GB_file’: Operation not permitted
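Before running the copy, one way to confirm the /tmp-size hypothesis is to compare the file's size against the free space in the temporary directory. A minimal sketch (the helper name and paths are illustrative, not part of s3fs):

```shell
#!/bin/sh
# compare_tmp_space FILE: report whether FILE would fit into the free
# space currently available under /tmp.
compare_tmp_space() {
    file="$1"
    # Size of the file in bytes (GNU stat).
    file_size=$(stat -c %s "$file")
    # Free space under /tmp, converted from 1K blocks to bytes (GNU df).
    free_bytes=$(( $(df --output=avail -k /tmp | tail -n 1) * 1024 ))
    if [ "$file_size" -gt "$free_bytes" ]; then
        echo "exceeds"
    else
        echo "fits"
    fi
}

# Example: a 1-byte file always fits.
printf 'x' > /tmp/s3fs_size_check_demo
compare_tmp_space /tmp/s3fs_size_check_demo
rm -f /tmp/s3fs_size_check_demo
```

If the check prints "exceeds" for the file being copied, the upload is expected to take the failing NoCacheLoadAndPost path described above.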

Another issue, referring specifically to 10GB files, seems likely to have the same cause; I would not be surprised if a 10GB file fails because that is the size of /tmp on that particular system: https://github.com/s3fs-fuse/s3fs-fuse/issues/1033

kerem 2026-03-04 01:46:53 +03:00

@gaul commented on GitHub (Oct 10, 2020):

Related to #1257.
