[GH-ISSUE #125] Can't upload file larger than 20MB with 1.78 (1.77 works fine) #77

Closed
opened 2026-03-04 01:41:48 +03:00 by kerem · 10 comments
Owner

Originally created by @hplato on GitHub (Feb 19, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/125

To reproduce

root@x:/tmp# dd if=/dev/zero of=20500.dat bs=1024 count=20500
20500+0 records in
20500+0 records out
20992000 bytes (21 MB) copied, 0.213735 s, 98.2 MB/s
root@x:/tmp# ls -l
total 20540
-rw-r--r-- 1 root root 20992000 Feb 18 20:02 20500.dat
root@x:/tmp# cp 20500.dat /mnt/s3/
cp: failed to close ‘/mnt/s3/20500.dat’: Operation not permitted
root@x:/tmp# dd if=/dev/zero of=20000.dat bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.215942 s, 94.8 MB/s
root@x:/tmp# ls -l
total 40540
-rw-r--r-- 1 root root 20480000 Feb 18 20:03 20000.dat
-rw-r--r-- 1 root root 20992000 Feb 18 20:02 20500.dat
root@x:/tmp# cp 20000.dat /mnt/s3

No problem. Now remount with version 1.77 and:

root@x:/tmp# cp 20500.dat /mnt/s3/
root@x:/tmp# ls -l /mnt/s3/
total 40502
-rw-r--r-- 1 root root 20480000 Feb 18 20:03 20000.dat
-rw-r--r-- 1 root root 20992000 Feb 18 20:10 20500.dat

Logs from 1.78

s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_create(829): [path=/20500.dat][mode=100644][flags=32961]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
create_file_object(788): [path=/20500.dat][mode=100644]
PutRequest(2279): [tpath=/20500.dat]
PutRequest(2293): create zero byte file object.
PutRequest(2403): uploading... [path=/20500.dat][fd=-1][size=0]
RequestPerform(1588): HTTP response code 200
DelStat(370): delete stat cache entry[path=/20500.dat]
s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
AddStat(248): add stat cache entry[path=/20500.dat]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=0]
s3fs_flush(2006): [path=/20500.dat][fd=5]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=1]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=2]
ParallelMultipartUploadRequest(998): [tpath=/20500.dat][fd=5]
PreMultipartPostRequest(2657): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 403
RequestPerform(1607): HTTP response code 403 was returned, returning EPERM
s3fs_release(2046): [path=/20500.dat][fd=5]
DelStat(370): delete stat cache entry[path=/20500.dat]
s3fs_getattr(720): [path=/]
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_create(829): [path=/20000.dat][mode=100644][flags=32961]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
create_file_object(788): [path=/20000.dat][mode=100644]
PutRequest(2279): [tpath=/20000.dat]
PutRequest(2293): create zero byte file object.
PutRequest(2403): uploading... [path=/20000.dat][fd=-1][size=0]
RequestPerform(1588): HTTP response code 200
DelStat(370): delete stat cache entry[path=/20000.dat]
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
AddStat(248): add stat cache entry[path=/20000.dat]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=0]
s3fs_flush(2006): [path=/20000.dat][fd=5]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=1]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=2]
PutRequest(2279): [tpath=/20000.dat]
PutRequest(2403): uploading... [path=/20000.dat][fd=5][size=20480000]
RequestPerform(1588): HTTP response code 200
s3fs_release(2046): [path=/20000.dat][fd=5]
DelStat(370): delete stat cache entry[path=/20000.dat]
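
The two transcripts diverge exactly at s3fs_flush: 20000.dat goes through a plain PutRequest (size=20480000) and succeeds, while 20500.dat triggers ParallelMultipartUploadRequest, whose Initiate Multipart Upload POST comes back 403. A minimal sketch of that size-based dispatch (the threshold value and function names are assumptions for illustration, not s3fs's actual code):

```python
# Illustrative sketch of the upload dispatch visible in the logs above.
# s3fs of this era used a single PUT below a ~20 MiB threshold and
# switched to multipart upload above it; names here are hypothetical.

MULTIPART_THRESHOLD = 20 * 1024 * 1024  # 20 MiB = 20971520 bytes

def choose_upload(size_bytes):
    """Return which code path a flush of size_bytes would take."""
    if size_bytes > MULTIPART_THRESHOLD:
        return "multipart"   # ParallelMultipartUploadRequest in the log
    return "single_put"      # plain PutRequest in the log

# 20000.dat (20480000 bytes) stays on the single-PUT path and succeeds;
# 20500.dat (20992000 bytes) crosses the threshold and hits the failing
# Initiate Multipart Upload request.
```

This is why the failure appears only above roughly 20 MB: the bug is in the multipart path, which small files never exercise.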

kerem closed this issue 2026-03-04 01:41:48 +03:00

@ggtakec commented on GitHub (Mar 1, 2015):

@hplato I merged @andrewgaul's PR into the master branch; please test it.
@andrewgaul Thank you for your help!

@hplato commented on GitHub (Mar 19, 2015):

Thanks. I currently have the old version loaded and am doing some data transfers; I'll give this another try by the end of the month.

@macropin commented on GitHub (Apr 19, 2017):

I'm seeing this same 20 MB issue with v1.80, saving to NetApp:

root@971f54453e66:/mnt/s3# dd if=/dev/zero of=./15m count=15 bs=1M
15+0 records in
15+0 records out
15728640 bytes (16 MB) copied, 0.732486 s, 21.5 MB/s
root@971f54453e66:/mnt/s3# dd if=/dev/zero of=./20m count=20 bs=1M
dd: closing output file './20m': Operation not permitted
root@971f54453e66:/mnt/s3# dd if=/dev/zero of=./19m count=19 bs=1M
19+0 records in
19+0 records out
19922944 bytes (20 MB) copied, 1.86639 s, 10.7 MB/s
root@971f54453e66:/mnt/s3# dd if=/dev/zero of=./21m count=21 bs=1M
dd: closing output file './21m': Operation not permitted
root@971f54453e66:/mnt/s3# s3fs --version
Amazon Simple Storage Service File System V1.80(commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
@gaul commented on GitHub (Apr 19, 2017):

@macropin Please run again with -f -d -o curldbg to get debug logs.

@macropin commented on GitHub (Apr 19, 2017):

@andrewgaul I've got a log here. I'm going to clean it up and get sign-off before I send it over.

@acolatto commented on GitHub (Apr 19, 2017):

Hello,
I'm having the same problem copying a 9 GB file with cp.
The cp error is: "cp: failed to close '/mnt/aws/za/db/00hs/bk_19_21.bak.cpt': Operation not permitted"

user@server:~$ s3fs --version
Amazon Simple Storage Service File System V1.80 (commit: 6affeff) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

When I send another file of 4.4 GB, it goes through with no problems at all.

Any idea how to solve it?

@macropin commented on GitHub (Apr 20, 2017):

Log file attached: log.txt. The log was captured whilst dd'ing a large file onto the mount.

S3fs was run using this container https://github.com/panubo/docker-s3fs

@gaul commented on GitHub (Apr 20, 2017):

StorageGRID may not fully support the Initiate Multipart Upload request. s3fs sends:

> POST /media-files/20m?uploads= HTTP/1.1
User-Agent: s3fs/1.80 (commit hash unknown; OpenSSL)
Authorization: AWS4-HMAC-SHA256 Credential=8FYA25QP7HMUW5676YM3/20170419/us-east-1/s3/aws4_request, SignedHeaders=accept;content-length;content-type;host;x-amz-acl;x-amz-content-sha256;x-amz-date;x-amz-meta-gid;x-amz-meta-mode;x-amz-meta-mtime;x-amz-meta-uid, Signature=d555399f70f2f1353d189fe3d26b65813edb30b70ff90c347ec0d34e68847426
Content-Type: application/octet-stream
host: storgrid-s3.example.com:8082
x-amz-acl: private
x-amz-content-sha256: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
x-amz-date: 20170419T031657Z
x-amz-meta-gid: 0
x-amz-meta-mode: 33188
x-amz-meta-mtime: 1492571817
x-amz-meta-uid: 0

According to the StorageGRID 10.3 documentation https://library.netapp.com/ecm/ecm_download_file/ECMLP2412007 page 24:

The following request headers are supported:
• Content-Type
• x-amz-meta- name-value pairs for user-defined metadata

Further, page 22 documents the PUT Object request as not supporting x-amz-acl. In src/curl.cpp:S3fsCurl::PreMultipartPostRequest, could you try commenting out:

requestHeaders = curl_slist_sort_insert(requestHeaders, "x-amz-acl", S3fsCurl::default_acl.c_str());

Perhaps s3fs should not send this header unless explicitly set by the user and rely on the default from the server otherwise.
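The suggested behavior could look like the following sketch: only emit x-amz-acl when the user configured an ACL explicitly, and otherwise let the server apply its own default. The helper and parameter names are hypothetical (s3fs actually builds headers in C++ via curl_slist_sort_insert), purely to illustrate the idea:

```python
# Hypothetical sketch of "don't send x-amz-acl unless the user asked
# for it". Not s3fs's real code; names are illustrative only.

def build_initiate_mpu_headers(content_type, user_acl=None):
    """Headers for an Initiate Multipart Upload request.

    user_acl is only set when the user explicitly configured an ACL
    (e.g. mounted with an option like -o default_acl=...); otherwise the
    header is omitted and the server's default ACL applies.
    """
    headers = {"Content-Type": content_type}
    if user_acl is not None:
        headers["x-amz-acl"] = user_acl
    return headers
```

With this shape, a StorageGRID-style endpoint that rejects unknown request headers would never see x-amz-acl unless the user opted in.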

@kodiak-sdommeti commented on GitHub (Sep 13, 2017):

s3fs seems to append a '=' to the URL (POST /media-files/20m?uploads= HTTP/1.1), which violates the S3 REST API specification (POST /ObjectName?uploads HTTP/1.1), and that is why the S3 storage returns a 403 error.
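Worth noting: for SigV4 signing purposes the two spellings canonicalize to the same string (AWS's canonical query string gives every key a trailing "=", including bare keys), so if a strict endpoint rejects the request it is more plausibly objecting to the request-line bytes than to the signature. A small sketch of the canonicalization (this is my reading of the SigV4 rules, not s3fs code):

```python
# SigV4-style canonical query string: percent-encode keys and values,
# write every pair as key=value (bare keys get an empty value), sort.
from urllib.parse import parse_qsl, quote

def canonical_query(qs):
    pairs = parse_qsl(qs, keep_blank_values=True)
    return "&".join(
        quote(k, safe="") + "=" + quote(v, safe="")
        for k, v in sorted(pairs)
    )

# Both spellings canonicalize identically for signing purposes:
assert canonical_query("uploads") == canonical_query("uploads=") == "uploads="
```

So the signature computed over "?uploads" and "?uploads=" should be the same; the difference only shows up in the literal URL the server parses.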

@ggtakec commented on GitHub (Sep 17, 2017):

@kodiak-sdommeti Thanks for your post.
I've split your issue out into #643.
Regards,
