[GH-ISSUE #1595] -o nomultipart will be ignored #840

Closed
opened 2026-03-04 01:49:14 +03:00 by kerem · 7 comments
Owner

Originally created by @CarstenGrohmann on GitHub (Mar 4, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1595

As I understand it, `-o nomultipart` disables multipart uploads. For me, this means that only files up to 5GB (the limit of a single upload) can be written to the bucket. Writing larger files would have to be aborted with an EFBIG error.

But currently s3fs uses multipart upload requests (`POST /mybucket/bigfile?uploads` and `PUT /mybucket/bigfile?partNumber=1...`) instead of a simple `PUT /bigfile` as described in the PutObject API documentation (https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html).

Is this intended or an error in the application?
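For illustration, the behaviour the report expects from `-o nomultipart` could be sketched like this. This is hypothetical code, not from s3fs; the constant reflects S3's documented 5 GiB single-PUT limit, and `check_single_put_limit` is an invented name:

```cpp
#include <cerrno>
#include <cstdint>

// S3 rejects single PutObject requests larger than 5 GiB.
constexpr int64_t SINGLE_PUT_LIMIT = 5LL * 1024 * 1024 * 1024;

// Returns 0 if the write may proceed, -EFBIG if nomultipart is set and
// the resulting object would exceed the single-upload limit.
int check_single_put_limit(int64_t offset, int64_t size, bool nomultipart)
{
    if(nomultipart && offset + size > SINGLE_PUT_LIMIT){
        return -EFBIG;  // File too large for one PutObject request
    }
    return 0;
}
```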

Version of s3fs being used (s3fs --version)

$ ./s3fs --version
Amazon Simple Storage Service File System V1.89 (commit:8c58ba8) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Steps to reproduce

  1. Mount bucket
./s3fs mybucket /s3/mybucket -o url=http://mys3service:8080,use_path_request_style,notsup_compat_dir,enable_noobj_cache,nomultipart,curldbg,dbglevel=debug -d -d -f
  2. Show local file
$ ll -h bigfile
-rw-r--r-- 1 tcpdump tcpdump 7.0G Jan 27 11:20 bigfile
  3. Copy file
$ cp -p bigfile /s3/mybucket/

$ md5sum /s3/mybucket/bigfile bigfile
92b6de21419129456f82da791e225c5b  /s3/mybucket/bigfile
92b6de21419129456f82da791e225c5b  bigfile
  4. Check log

s3fs command line and debug output

$ ./s3fs mybucket /s3/mybucket -o url=http://mys3service:8080,use_path_request_style,notsup_compat_dir,enable_noobj_cache,nomultipart,curldbg,dbglevel=debug -d -d -f
2021-03-04T07:12:32.282Z [CRT] s3fs_logger.cpp:LowSetLogLevel(219): change debug level from [CRT] to [DBG]
2021-03-04T07:12:32.282Z [INF]     s3fs.cpp:set_mountpoint_attribute(4020): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
2021-03-04T07:12:32.282Z [DBG] curl.cpp:InitMimeType(408): Try to load mime types from /etc/mime.types file.
2021-03-04T07:12:32.282Z [DBG] curl.cpp:InitMimeType(413): The old mime types are cleared to load new mime types.
2021-03-04T07:12:32.284Z [INF] curl.cpp:InitMimeType(436): Loaded mime information from /etc/mime.types
2021-03-04T07:12:32.284Z [INF] fdcache_stat.cpp:CheckCacheFileStatTopDir(79): The path to cache top dir is empty, thus not need to check permission.
FUSE library version: 2.9.2
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.22
flags=0x0000f7fb
max_readahead=0x00020000
2021-03-04T07:12:32.285Z [INF] s3fs.cpp:s3fs_init(3331): init v1.89(commit:8c58ba8) with OpenSSL

First write request:

unique: 58672, opcode: WRITE (16), nodeid: 2, insize: 65616, pid: 32565
write[5] 65536 bytes to 3844472832 flags: 0x8001
2021-03-04T07:43:09.096Z [DBG] s3fs.cpp:s3fs_write(2298): [path=/bigfile][size=65536][offset=3844472832][fd=5]
2021-03-04T07:43:09.096Z [DBG] fdcache.cpp:ExistOpen(526): [path=/bigfile][fd=5][ignore_existfd=false]
2021-03-04T07:43:09.096Z [DBG] fdcache.cpp:Open(446): [path=/bigfile][size=-1][time=-1]
2021-03-04T07:43:09.096Z [DBG] fdcache_entity.cpp:Dup(246): [path=/bigfile][fd=5][refcnt=2]
2021-03-04T07:43:09.096Z [DBG] fdcache_entity.cpp:Open(315): [path=/bigfile][fd=5][size=-1][time=-1]
2021-03-04T07:43:09.096Z [DBG] fdcache_entity.cpp:Dup(246): [path=/bigfile][fd=5][refcnt=3]
2021-03-04T07:43:09.096Z [DBG] fdcache_entity.cpp:Close(200): [path=/bigfile][fd=5][refcnt=2]
2021-03-04T07:43:09.096Z [DBG] fdcache_entity.cpp:Write(1439): [path=/bigfile][fd=5][offset=3844472832][size=65536]
2021-03-04T07:43:09.096Z [INF]       curl.cpp:PreMultipartPostRequest(3452): [tpath=/bigfile]
2021-03-04T07:43:09.096Z [DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 30
2021-03-04T07:43:09.096Z [INF]       curl_util.cpp:prepare_url(250): URL is http://mys3service:8080/mybucket/bigfile?uploads
2021-03-04T07:43:09.096Z [INF]       curl_util.cpp:prepare_url(283): URL changed is http://mys3service:8080/mybucket/bigfile?uploads
2021-03-04T07:43:09.097Z [DBG] curl.cpp:RequestPerform(2254): connecting to URL http://mys3service:8080/mybucket/bigfile?uploads
2021-03-04T07:43:09.097Z [INF]       curl.cpp:insertV4Headers(2640): computing signature [POST] [/bigfile] [uploads] []
2021-03-04T07:43:09.097Z [INF]       curl_util.cpp:url_to_host(327): url is http://mys3service:8080
2021-03-04T07:43:09.097Z [CURL DBG] * Found bundle for host mys3service: 0x7f52f0000b60
2021-03-04T07:43:09.097Z [CURL DBG] * Re-using existing connection! (#1) with host mys3service
2021-03-04T07:43:09.097Z [CURL DBG] * Connected to mys3service (192.168.1.1) port 8080 (#1)
2021-03-04T07:43:09.097Z [CURL DBG] > POST /mybucket/bigfile?uploads HTTP/1.1
2021-03-04T07:43:09.097Z [CURL DBG] > User-Agent: s3fs/1.89 (commit hash 8c58ba8; OpenSSL)
2021-03-04T07:43:09.097Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=NHTL282ERGST9B40156H/20210304/us-east-1/s3/aws4_request, SignedHeaders=content-type;host;x-amz-content-sha256;x-amz-date;x-amz-meta-atime;x-amz-meta-ctime;x-amz-meta-gid;x-amz-meta-mode;x-amz-meta-mtime;x-amz-meta-uid, Signature=15003ddad1001ce128ad6a1650b092544259b6f8faae2e15d035d9f61dcf2f11
2021-03-04T07:43:09.097Z [CURL DBG] > Content-Type: application/octet-stream
2021-03-04T07:43:09.097Z [CURL DBG] > host: mys3service:8080
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-date: 20210304T074309Z
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-atime: 1614843757
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-ctime: 1614843757
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-gid: 0
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-mode: 33152
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-mtime: 1614843757
2021-03-04T07:43:09.097Z [CURL DBG] > x-amz-meta-uid: 0
2021-03-04T07:43:09.097Z [CURL DBG] >
2021-03-04T07:43:09.102Z [CURL DBG] < HTTP/1.1 200 OK
2021-03-04T07:43:09.102Z [CURL DBG] < Date: Thu, 04 Mar 2021 07:43:09 GMT
2021-03-04T07:43:09.102Z [CURL DBG] < Connection: KEEP-ALIVE
2021-03-04T07:43:09.102Z [CURL DBG] < Server: StorageGRID/11.4.0.2
2021-03-04T07:43:09.102Z [CURL DBG] < x-amz-request-id: 1614843757702774
2021-03-04T07:43:09.102Z [CURL DBG] < x-amz-id-2: 12293711
2021-03-04T07:43:09.102Z [CURL DBG] < Content-Length: 313
2021-03-04T07:43:09.102Z [CURL DBG] < Content-Type: application/xml
2021-03-04T07:43:09.102Z [CURL DBG] <
2021-03-04T07:43:09.102Z [CURL DBG] * Connection #1 to host mys3service left intact
2021-03-04T07:43:09.102Z [INF]       curl.cpp:RequestPerform(2287): HTTP response code 200
2021-03-04T07:43:09.102Z [DBG] curl_handlerpool.cpp:ReturnHandler(103): Return handler to pool
2021-03-04T07:43:09.102Z [INF]       fdcache_entity.cpp:NoCacheLoadAndPost(962): [path=/bigfile][fd=5][offset=0][size=3844472832]
2021-03-04T07:43:09.102Z [INF]       curl.cpp:MultipartUploadRequest(4034): [upload_id=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw][tpath=/bigfile][fd=5][offset=0][size=10485760]
2021-03-04T07:43:09.102Z [INF]       curl.cpp:UploadMultipartPostRequest(3770): [tpath=/bigfile][start=0][size=10485760][part=1]
2021-03-04T07:43:09.102Z [INF]       curl.cpp:UploadMultipartPostSetup(3711): [tpath=/bigfile][start=0][size=10485760][part=1]
2021-03-04T07:43:09.102Z [INF]       curl_util.cpp:prepare_url(250): URL is http://mys3service:8080/mybucket/bigfile?partNumber=1&uploadId=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw
2021-03-04T07:43:09.102Z [INF]       curl_util.cpp:prepare_url(283): URL changed is http://mys3service:8080/mybucket/bigfile?partNumber=1&uploadId=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw
2021-03-04T07:43:09.102Z [DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 30
2021-03-04T07:43:09.102Z [DBG] curl.cpp:RequestPerform(2254): connecting to URL http://mys3service:8080/mybucket/bigfile?partNumber=1&uploadId=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw
2021-03-04T07:43:09.149Z [INF]       curl.cpp:insertV4Headers(2640): computing signature [PUT] [/bigfile] [partNumber=1&uploadId=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw] [02f8307fbcb86f45bc19ff1ca86d126d1ac247f470a490c9fedd3d6e1aed2051]
2021-03-04T07:43:09.149Z [INF]       curl_util.cpp:url_to_host(327): url is http://mys3service:8080
2021-03-04T07:43:09.149Z [CURL DBG] * Found bundle for host mys3service: 0x7f52f0000b60
2021-03-04T07:43:09.149Z [CURL DBG] * Re-using existing connection! (#1) with host mys3service
2021-03-04T07:43:09.149Z [CURL DBG] * Connected to mys3service (192.168.1.1) port 8080 (#1)
2021-03-04T07:43:09.149Z [CURL DBG] > PUT /mybucket/bigfile?partNumber=1&uploadId=GDuv-32lWOMe1vljQ40_TKVQHvFRKKRqe576N1KaJVWgObbtt9MdfzQNzw HTTP/1.1
2021-03-04T07:43:09.149Z [CURL DBG] > User-Agent: s3fs/1.89 (commit hash 8c58ba8; OpenSSL)
2021-03-04T07:43:09.149Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=NHTL282ERGST9B40156H/20210304/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=dd8e49cab1d6eda5e17724b47dacae7d0925626094ddc1b44b259a16233e3011
2021-03-04T07:43:09.149Z [CURL DBG] > host: mys3service:8080
2021-03-04T07:43:09.149Z [CURL DBG] > x-amz-content-sha256: 02f8307fbcb86f45bc19ff1ca86d126d1ac247f470a490c9fedd3d6e1aed2051
2021-03-04T07:43:09.149Z [CURL DBG] > x-amz-date: 20210304T074309Z
2021-03-04T07:43:09.149Z [CURL DBG] > Content-Length: 10485760
2021-03-04T07:43:09.149Z [CURL DBG] > Expect: 100-continue
2021-03-04T07:43:09.149Z [CURL DBG] >
2021-03-04T07:43:09.151Z [CURL DBG] < HTTP/1.1 100 Continue
2021-03-04T07:43:09.231Z [CURL DBG] * We are completely uploaded and fine
2021-03-04T07:43:09.247Z [CURL DBG] < HTTP/1.1 200 OK
2021-03-04T07:43:09.247Z [CURL DBG] < Date: Thu, 04 Mar 2021 07:43:09 GMT
2021-03-04T07:43:09.247Z [CURL DBG] < Connection: KEEP-ALIVE
2021-03-04T07:43:09.247Z [CURL DBG] < Server: StorageGRID/11.4.0.2
2021-03-04T07:43:09.247Z [CURL DBG] < x-amz-request-id: 1614843757702774
2021-03-04T07:43:09.247Z [CURL DBG] < x-amz-id-2: 12293711
2021-03-04T07:43:09.247Z [CURL DBG] < Content-Length: 0
2021-03-04T07:43:09.247Z [CURL DBG] < ETag: "2bf6522b1e4c6e8d91663abd9220dbb7"
2021-03-04T07:43:09.247Z [CURL DBG] <
2021-03-04T07:43:09.247Z [CURL DBG] * Connection #1 to host mys3service left intact
2021-03-04T07:43:09.247Z [INF]       curl.cpp:RequestPerform(2287): HTTP response code 200
kerem closed this issue 2026-03-04 01:49:14 +03:00

@gaul commented on GitHub (Mar 4, 2021):

This is not what I observe:

s3fs BUCKET PATH -o url=https://s3.amazonaws.com -o nomultipart -o curldbg -f
...
2021-03-04T12:25:59.400Z [CURL DBG] > PUT /25MB HTTP/1.1
2021-03-04T12:25:59.400Z [CURL DBG] > Host: XXX.s3.amazonaws.com
2021-03-04T12:25:59.400Z [CURL DBG] > User-Agent: s3fs/1.89 (commit hash unknown; OpenSSL)
2021-03-04T12:25:59.400Z [CURL DBG] > Accept: */*
2021-03-04T12:25:59.400Z [CURL DBG] > Authorization: XXX
2021-03-04T12:25:59.400Z [CURL DBG] > Content-Type: application/octet-stream
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-content-sha256: 730764fea9c8872054da8bb1ec48ed45b51a9d1a14846222569f6287daee80ed
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-date: 20210304T122559Z
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-atime: 1614860758
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-ctime: 1614860759
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-gid: 1000
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-mode: 33204
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-mtime: 1614860759
2021-03-04T12:25:59.400Z [CURL DBG] > x-amz-meta-uid: 1000
2021-03-04T12:25:59.400Z [CURL DBG] > Content-Length: 26214400
2021-03-04T12:25:59.400Z [CURL DBG] > Expect: 100-continue

@CarstenGrohmann commented on GitHub (Mar 4, 2021):

I don't know exactly when s3fs switches to multipart uploads. It uses single uploads up to 3000M, as shown in your example. Somewhere between 3615M and 3750M, s3fs starts using multipart uploads.

[CURL DBG] > PUT /mybucket/38M HTTP/1.1
[CURL DBG] > PUT /mybucket/750M HTTP/1.1
[CURL DBG] > PUT /mybucket/1500M HTTP/1.1
[CURL DBG] > PUT /mybucket/3000M HTTP/1.1
[CURL DBG] > PUT /mybucket/3410M HTTP/1.1
[CURL DBG] > PUT /mybucket/3615M HTTP/1.1
[CURL DBG] > POST /mybucket/3750M?uploads HTTP/1.1

These tests have been executed with

$ ./s3fs mybucket /s3/mybucket -o url=http://mys3service:8080,use_path_request_style,notsup_compat_dir,enable_noobj_cache,nomultipart,curldbg -f

@gaul commented on GitHub (Mar 4, 2021):

Something strange is going on. I successfully uploaded the same file with a single part:

s3fs BUCKET PATH -o url=https://s3.amazonaws.com -o nomultipart -o curldbg -f
...
2021-03-04T13:53:19.162Z [CURL DBG] > PUT /3750MB HTTP/1.1
2021-03-04T13:53:19.162Z [CURL DBG] > Host: XXX.s3.amazonaws.com
2021-03-04T13:53:19.162Z [CURL DBG] > User-Agent: s3fs/1.89 (commit hash unknown; OpenSSL)
2021-03-04T13:53:19.162Z [CURL DBG] > Accept: */*
2021-03-04T13:53:19.162Z [CURL DBG] > Authorization: XXX
2021-03-04T13:53:19.162Z [CURL DBG] > Content-Type: application/octet-stream
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-content-sha256: 372f922e90c893d60ce73448c84de0410ce37cdf461083b7378a56def60187ec
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-date: 20210304T135318Z
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-atime: 1614865870
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-ctime: 1614865983
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-gid: 1000
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-mode: 33204
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-mtime: 1614865983
2021-03-04T13:53:19.162Z [CURL DBG] > x-amz-meta-uid: 1000
2021-03-04T13:53:19.162Z [CURL DBG] > Content-Length: 3932160000
2021-03-04T13:53:19.162Z [CURL DBG] > Expect: 100-continue

@CarstenGrohmann commented on GitHub (Mar 5, 2021):

The only difference between our setups is the size of /tmp.

The free disk space is checked in

https://github.com/s3fs-fuse/s3fs-fuse/blob/8c58ba8ac03158a096e96d23d0eb79e8147953f1/src/fdcache_entity.cpp#L1437

at

https://github.com/s3fs-fuse/s3fs-fuse/blob/8c58ba8ac03158a096e96d23d0eb79e8147953f1/src/fdcache_entity.cpp#L1467

and when this check fails, all content is immediately uploaded:

https://github.com/s3fs-fuse/s3fs-fuse/blob/8c58ba8ac03158a096e96d23d0eb79e8147953f1/src/fdcache_entity.cpp#L1482-L1492

I have 3997376 KB free in /tmp. That's enough space for the smaller transfers up to ca. 3.5GB, which are uploaded in a single operation. But it's not enough for larger transfers, so those are uploaded using multipart as described above.

You should be able to reproduce this issue even with smaller transfers if you temporarily fill /tmp.
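The fallback decision described above can be sketched as follows. This is a simplified illustration, not the actual s3fs code; `free_bytes` and `must_use_multipart` are hypothetical helper names:

```cpp
#include <sys/statvfs.h>
#include <cstdint>

// Available bytes on the filesystem holding `dir`, or -1 on error.
int64_t free_bytes(const char* dir)
{
    struct statvfs vfs;
    if(0 != statvfs(dir, &vfs)){
        return -1;
    }
    return static_cast<int64_t>(vfs.f_bavail) * static_cast<int64_t>(vfs.f_frsize);
}

// s3fs falls back to multipart when the local cache dir cannot hold the
// whole file, even under -o nomultipart — the behaviour this issue reports.
bool must_use_multipart(const char* cache_dir, int64_t file_size)
{
    int64_t avail = free_bytes(cache_dir);
    return avail >= 0 && avail < file_size;
}
```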

What would be the correct behaviour of s3fs for `-o nomultipart` when /tmp is full? Returning `ENOSPC` ("No space left on device") from `FdEntity::Write()`?


@CarstenGrohmann commented on GitHub (Mar 5, 2021):

A small hack

diff --git a/src/fdcache_entity.cpp b/src/fdcache_entity.cpp
index fa4dc6c..15c6f18 100644
--- a/src/fdcache_entity.cpp
+++ b/src/fdcache_entity.cpp
@@ -1480,6 +1480,15 @@ ssize_t FdEntity::Write(const char* bytes, off_t start, size_t size)
                 return static_cast<ssize_t>(result);
             }
         }else{
+    off_t free;
+    struct statvfs vfsbuf;
+    std::string ctoppath;
+#define TMPFILE_DIR_0PATH   "/tmp"
+    ctoppath = TMPFILE_DIR_0PATH "/";
+    statvfs(ctoppath.c_str(), &vfsbuf);
+    free = (vfsbuf.f_bavail * vfsbuf.f_frsize);
+    S3FS_PRN_DBG("Not enough disk space [path=%s][fd=%d][size=%zu][free=%lld]", path.c_str(), fd, size, static_cast<long long int>(free));
+
             // no enough disk space
             if(0 != (result = NoCachePreMultipartPost())){
                 S3FS_PRN_ERR("failed to switch multipart uploading with no cache(errno=%d)", result);

confirms the execution path described in the last comment:

2021-03-05T16:28:31.360Z [DBG] fdcache_entity.cpp:Write(1439): [path=/4500M][fd=6][offset=3844472832][size=65536]
2021-03-05T16:28:31.360Z [DBG] fdcache_entity.cpp:Write(1490): Not enough disk space [path=/4500M][fd=6][size=65536][free=61440]
2021-03-05T16:28:31.360Z [INF]       curl.cpp:PreMultipartPostRequest(3452): [tpath=/4500M]
[...]
2021-03-05T16:28:31.361Z [CURL DBG] > POST /mybucket/4500M?uploads HTTP/1.1

@CarstenGrohmann commented on GitHub (Mar 8, 2021):

I can provide this small fix that returns ENOSPC for nomultipart instead of silently switching to multipart uploads if the temp space has been filled. I can submit a PR if you agree with this change.

```
diff --git a/src/fdcache_entity.cpp b/src/fdcache_entity.cpp
index fa4dc6c..6657c15 100644
--- a/src/fdcache_entity.cpp
+++ b/src/fdcache_entity.cpp
@@ -1481,6 +1481,11 @@ ssize_t FdEntity::Write(const char* bytes, off_t start, size_t size)
             }
         }else{
             // no enough disk space
+            if (nomultipart) {
+                S3FS_PRN_WARN("Not enough local storage to cache write request: [path=%s][fd=%d][offset=%lld][size=%zu]", path.c_str(), fd, static_cast<long long int>(start), size);
+                return -ENOSPC;   // No space left on device
+            }
+            // no enough disk space
             if(0 != (result = NoCachePreMultipartPost())){
                 S3FS_PRN_ERR("failed to switch multipart uploading with no cache(errno=%d)", result);
                 return static_cast<ssize_t>(result);
```
Example:

```
$ cp -p 3000M /s3/mybucket/
cp: error writing "/s3/mybucket/3000M": No space left on device
cp: failed to extend "/s3/mybucket/3000M": No space left on device
```

New syslog output w/ warning:

```
2021-03-08T19:12:16.609323+01:00 mysrv <user.warning> s3fs[4842]:fdcache_entity.cpp:Write(1485): Not enough local storage to cache write request: [path=/3000M][fd=6][offset=1835008][size=65536]
2021-03-08T19:12:16.609522+01:00 mysrv <user.warning> s3fs[4842]:s3fs.cpp:s3fs_write(2310): failed to write file(/3000M). result=-28
```

New debug output w/ warning:

```
unique: 35, opcode: WRITE (16), nodeid: 2, insize: 65616, pid: 3759
write[5] 65536 bytes to 1835008 flags: 0x8001
2021-03-08T19:23:37.509Z [DBG] s3fs.cpp:s3fs_write(2298): [path=/3000M][size=65536][offset=1835008][fd=5]
2021-03-08T19:23:37.509Z [DBG] fdcache.cpp:ExistOpen(526): [path=/3000M][fd=5][ignore_existfd=false]
2021-03-08T19:23:37.509Z [DBG] fdcache.cpp:Open(446): [path=/3000M][size=-1][time=-1]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Dup(246): [path=/3000M][fd=5][refcnt=2]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Open(315): [path=/3000M][fd=5][size=-1][time=-1]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Dup(246): [path=/3000M][fd=5][refcnt=3]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Close(200): [path=/3000M][fd=5][refcnt=2]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Write(1439): [path=/3000M][fd=5][offset=1835008][size=65536]
2021-03-08T19:23:37.509Z [WAN] fdcache_entity.cpp:Write(1485): Not enough local storage to cache write request: [path=/3000M][fd=5][offset=1835008][size=65536]
2021-03-08T19:23:37.509Z [WAN] s3fs.cpp:s3fs_write(2310): failed to write file(/3000M). result=-28
2021-03-08T19:23:37.509Z [DBG] fdcache.cpp:Close(593): [ent->file=/3000M][ent->fd=5]
2021-03-08T19:23:37.509Z [DBG] fdcache_entity.cpp:Close(200): [path=/3000M][fd=5][refcnt=1]
unique: 35, error: -28 (No space left on device), outsize: 16
```

@gaul commented on GitHub (Mar 9, 2021):

> I can provide this small fix that returns `ENOSPC` for `nomultipart` instead of silently switching to multipart uploads if the temp space has been filled. I can submit a PR if you agree with this change.

Great investigative work! I agree with this approach; could you submit a pull request?
