mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #125] Can't upload file larger than 20MB with 1.78 (1.77 works fine) #77
Originally created by @hplato on GitHub (Feb 19, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/125
To reproduce
root@x:/tmp# dd if=/dev/zero of=20500.dat bs=1024 count=20500
20500+0 records in
20500+0 records out
20992000 bytes (21 MB) copied, 0.213735 s, 98.2 MB/s
root@x:/tmp# ls -l
total 20540
-rw-r--r-- 1 root root 20992000 Feb 18 20:02 20500.dat
root@x:/tmp# cp 20500.dat /mnt/s3/
cp: failed to close ‘/mnt/s3/20500.dat’: Operation not permitted
root@x:/tmp# dd if=/dev/zero of=20000.dat bs=1024 count=20000
20000+0 records in
20000+0 records out
20480000 bytes (20 MB) copied, 0.215942 s, 94.8 MB/s
root@x:/tmp# ls -l
total 40540
-rw-r--r-- 1 root root 20480000 Feb 18 20:03 20000.dat
-rw-r--r-- 1 root root 20992000 Feb 18 20:02 20500.dat
root@x:/tmp# cp 20000.dat /mnt/s3
No problem. Now remount with version 1.77 and:
root@x:/tmp# cp 20500.dat /mnt/s3/
root@x:/tmp# ls -l /mnt/s3/
total 40502
-rw-r--r-- 1 root root 20480000 Feb 18 20:03 20000.dat
-rw-r--r-- 1 root root 20992000 Feb 18 20:10 20500.dat
Logs from 1.78
s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_create(829): [path=/20500.dat][mode=100644][flags=32961]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20500.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20500.dat]
ListBucketRequest(2585): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
create_file_object(788): [path=/20500.dat][mode=100644]
PutRequest(2279): [tpath=/20500.dat]
PutRequest(2293): create zero byte file object.
PutRequest(2403): uploading... [path=/20500.dat][fd=-1][size=0]
RequestPerform(1588): HTTP response code 200
DelStat(370): delete stat cache entry[path=/20500.dat]
s3fs_getattr(720): [path=/20500.dat]
HeadRequest(2112): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 200
AddStat(248): add stat cache entry[path=/20500.dat]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=0]
s3fs_flush(2006): [path=/20500.dat][fd=5]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=1]
GetStat(171): stat cache hit [path=/20500.dat][time=1424314949][hit count=2]
ParallelMultipartUploadRequest(998): [tpath=/20500.dat][fd=5]
PreMultipartPostRequest(2657): [tpath=/20500.dat]
RequestPerform(1588): HTTP response code 403
RequestPerform(1607): HTTP response code 403 was returned, returning EPERM
s3fs_release(2046): [path=/20500.dat][fd=5]
DelStat(370): delete stat cache entry[path=/20500.dat]
s3fs_getattr(720): [path=/]
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
s3fs_create(829): [path=/20000.dat][mode=100644][flags=32961]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat/]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
HeadRequest(2112): [tpath=/20000.dat_$folder$]
RequestPerform(1588): HTTP response code 404
RequestPerform(1612): HTTP response code 404 was returned, returning ENOENT
list_bucket(2277): [path=/20000.dat]
ListBucketRequest(2585): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
append_objects_from_xml_ex(2373): contents_xp->nodesetval is empty.
create_file_object(788): [path=/20000.dat][mode=100644]
PutRequest(2279): [tpath=/20000.dat]
PutRequest(2293): create zero byte file object.
PutRequest(2403): uploading... [path=/20000.dat][fd=-1][size=0]
RequestPerform(1588): HTTP response code 200
DelStat(370): delete stat cache entry[path=/20000.dat]
s3fs_getattr(720): [path=/20000.dat]
HeadRequest(2112): [tpath=/20000.dat]
RequestPerform(1588): HTTP response code 200
AddStat(248): add stat cache entry[path=/20000.dat]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=0]
s3fs_flush(2006): [path=/20000.dat][fd=5]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=1]
GetStat(171): stat cache hit [path=/20000.dat][time=1424315020][hit count=2]
PutRequest(2279): [tpath=/20000.dat]
PutRequest(2403): uploading... [path=/20000.dat][fd=5][size=20480000]
RequestPerform(1588): HTTP response code 200
s3fs_release(2046): [path=/20000.dat][fd=5]
DelStat(370): delete stat cache entry[path=/20000.dat]
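The two traces differ only in the upload path taken: the 20 000 KB file is flushed with a single PutRequest, while the 20 500 KB file crosses s3fs's multipart threshold and enters ParallelMultipartUploadRequest, whose PreMultipartPostRequest is what receives the 403. A minimal sketch of that dispatch, assuming a 20 MiB cutoff (the exact threshold in 1.78 is an assumption here; s3fs's real cutoff depends on build and options):

```python
# Sketch of the upload-path decision suggested by the logs above.
# THRESHOLD is illustrative, not s3fs's actual constant.
THRESHOLD = 20 * 1024 * 1024  # 20 MiB

def upload_path(size_bytes):
    """Return which request type a flush of this size would use."""
    return "multipart" if size_bytes > THRESHOLD else "single_put"

print(upload_path(20_480_000))  # the 20000.dat case -> single_put
print(upload_path(20_992_000))  # the 20500.dat case -> multipart
```

If server-side multipart support is the problem, mounting with -o nomultipart (which forces the single-PUT path) can be a useful diagnostic, at the cost of the single-object size limit.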
@ggtakec commented on GitHub (Mar 1, 2015):
@hplato I merged @andrewgaul's PR into the master branch; please test it.
@andrewgaul Thank you for your help!
@hplato commented on GitHub (Mar 19, 2015):
Thanks. I currently have the old version loaded and am doing some data transfers. I'll give this a try again by the end of the month.
@macropin commented on GitHub (Apr 19, 2017):
I'm seeing this same 20 MB issue with v1.80, saving to a NetApp backend.
@gaul commented on GitHub (Apr 19, 2017):
@macropin Please run again with -f -d -o curldbg to get debug logs.
@macropin commented on GitHub (Apr 19, 2017):
@andrewgaul I've got a log here. I'm going to clean it up and get sign off before I send it over.
@acolatto commented on GitHub (Apr 19, 2017):
Hello,
I'm having the same problem copying a 9 GB file with cp.
The cp error is: "cp: failed to close '/mnt/aws/za/db/00hs/bk_19_21.bak.cpt': Operation not permitted"
user@server:~$ s3fs --version
Amazon Simple Storage Service File System V1.80 (commit:6affeff) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
When I send another file of 4.4 GB, it goes through with no problems at all.
Any idea how to solve it?
@macropin commented on GitHub (Apr 20, 2017):
Log file attached: log.txt Log was captured whilst dd'ing a large file onto the mount.
S3fs was run using this container https://github.com/panubo/docker-s3fs
@gaul commented on GitHub (Apr 20, 2017):
StorageGRID may not fully support the Initiate Multipart Upload request. s3fs sends:
According to the StorageGRID 10.3 documentation https://library.netapp.com/ecm/ecm_download_file/ECMLP2412007 page 24:
Further, page 22 documents the PUT Object request as not supporting x-amz-acl. In src/curl.cpp:S3fsCurl::PreMultipartPostRequest, could you try commenting that header out? Perhaps s3fs should not send this header unless it was explicitly set by the user, and otherwise rely on the server's default.
@kodiak-sdommeti commented on GitHub (Sep 13, 2017):
s3fs seems to append a '=' to the URL (POST /media-files/20m?uploads= HTTP/1.1), which violates the S3 REST API specification (POST /ObjectName?uploads HTTP/1.1), and that is why the S3 storage returns a 403 error.
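The difference is easy to reproduce with a standard URL-encoding helper: naively encoding the subresource as a key with an empty value yields the trailing '=', while the spec-conformant form is the bare key. A small sketch (bucket and key names taken from the request line quoted above, used purely for illustration):

```python
from urllib.parse import urlencode

bucket, key = "media-files", "20m"

# Naive encoding appends "=" -- the form that strict S3
# implementations reject with 403.
bad = "/%s/%s?%s" % (bucket, key, urlencode({"uploads": ""}))

# Spec-conformant form: the subresource is a bare key, no "=".
good = "/%s/%s?uploads" % (bucket, key)

print(bad)   # /media-files/20m?uploads=
print(good)  # /media-files/20m?uploads
```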
@ggtakec commented on GitHub (Sep 17, 2017):
@kodiak-sdommeti Thanks for your post.
I have split your problem out into #643.
Regards,