mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #1220] EntityTooSmall in multipart upload #651
Originally created by @cnanakos on GitHub (Dec 20, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1220
```
[ERR] curl.cpp:RequestPerform(2423): HTTP response code 400, returning EIO. Body Text: <Error><Code>EntityTooSmall</Code><Message>Your proposed upload is smaller than the minimum allowed size</Message><ProposedSize>528384</ProposedSize><MinSizeAllowed>5242880</MinSizeAllowed><PartNumber>1</PartNumber><ETag>480294879a56052a5937e78cb358580c</ETag><RequestId>AA1936A5569E43CF</RequestId><HostId>1pnskFIU9nckO2A1FqBhqCAV/92ZJaMbSixIEy+6HgS1BwyUN5PDhu4tLn81hxx229H+iSK2yzjN</HostId></Error>
```

The size of the file is 1.6GiB and the multipart_size is 16MiB. Everything else is default.

The part size is set as:

```
-o multipart_size=16
```

```
Amazon Simple Storage Service File System V1.85 (commit:cc4a307) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```
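For readers decoding the response above: the body is standard S3 error XML, and the failure condition is simply that the proposed part size falls below the service's 5 MiB (5242880-byte) multipart minimum. A minimal sketch of extracting and checking the relevant fields (a hypothetical diagnostic snippet, not part of s3fs):

```python
import xml.etree.ElementTree as ET

# Error body as returned by the service (abridged to the relevant fields).
body = (
    "<Error><Code>EntityTooSmall</Code>"
    "<Message>Your proposed upload is smaller than the minimum allowed size</Message>"
    "<ProposedSize>528384</ProposedSize>"
    "<MinSizeAllowed>5242880</MinSizeAllowed>"
    "<PartNumber>1</PartNumber></Error>"
)

root = ET.fromstring(body)
proposed = int(root.findtext("ProposedSize"))
minimum = int(root.findtext("MinSizeAllowed"))

# 528384 bytes = 516 KiB, far below the 5 MiB (5242880-byte) part minimum.
print(proposed, minimum, proposed < minimum)  # 528384 5242880 True
```

Every part except the last must meet the minimum, so a 528384-byte part numbered 1 in a 1.6 GiB upload is rejected outright.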
@sqlbot commented on GitHub (Dec 20, 2019):
Will you reformat that error as a code block, please, and verify that it's pasted correctly? It appears to be corrupted or some of the XML tags are being interpreted or stripped.
Also please show exactly how you're configuring the part size when mounting.
@cnanakos commented on GitHub (Dec 20, 2019):
Of course, updated the initial comment. Thanks.
@cnanakos commented on GitHub (Dec 21, 2019):

```
[INF] curl.cpp:insertV4Headers(2746): computing signature [POST] [/backups/test.vbk] [uploadId=fF_BMap7QeyO7dyTRX9k5iAw-LEVb3L3TqcihvagDwt2qh82EzsN324LuB15LpkU6xdCE4ORCGC6tM0fV3gOiKpBNjx-MdlSSoqdDC7kUY9QClarHT7lmebFMdHBSroV] [7d75563ff4aed76905da416ef6911ef7a4217f80a7319bed33135f94f36a1ba5]
[INF] curl.cpp:url_to_host(99): url is https://s3.xxxx.com
[ERR] curl.cpp:RequestPerform(2424): HTTP response code 400, returning EIO. Body Text: <Error><Code>EntityTooSmall</Code><Message>Your proposed upload is smaller than the minimum allowed size</Message><ProposedSize>528384</ProposedSize><MinSizeAllowed>5242880</MinSizeAllowed><PartNumber>1</PartNumber><ETag>a258d269ee19b30c020cf1be6b1b35c7</ETag><RequestId>0AB807EA3064BDE2</RequestId><HostId>/CsTOSn+zTHdxrbm+JaRrkICPtiv2qcxCOG1UBqw+nSr8yYJ1loHvPrza6uEZBcpo5sKRj3g7DhO</HostId></Error>
```

It's easily reproducible though, this is from a 9.6GiB file. I've managed to use the latest build from Git but the problem remains.

```
Amazon Simple Storage Service File System V1.85 (commit:bdfb9ee) with OpenSSL
Copyright (C) 2010 Randy Rizun <rrizun@gmail.com>
License GPL2: GNU GPL version 2 <https://gnu.org/licenses/gpl.html>
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```

@sqlbot commented on GitHub (Dec 21, 2019):
It appears that s3fs is trying to upload 528384 = 516 × 1024 bytes per part. What do you get with a different part size, or no part size specified? Specifically, does the `<ProposedSize>` in the error change?

@cnanakos commented on GitHub (Dec 21, 2019):
I've managed to run the same test with part sizes of 16, 32, 52, 64, and 1024 MiB; it always fails at the same point. The `<ProposedSize>` doesn't change, it remains the same.

@gaul commented on GitHub (Feb 2, 2020):
Could you test again with `-o curldbg` and share the logs?

@gaul commented on GitHub (Feb 3, 2020):
Also, are you using Amazon S3 or another implementation?
@cnanakos commented on GitHub (Feb 3, 2020):
No, I'm using Wasabi instead of Amazon S3.
@gaul commented on GitHub (Feb 3, 2020):
Unfortunately Wasabi does not have any kind of developer account, so I cannot test this. You could try reaching out to them, referencing this issue and how their policies impact user compatibility. I would like to help debug this, but I don't understand how their behavior differs from Amazon S3.
@cnanakos commented on GitHub (Feb 3, 2020):
They have a 30-day trial account; that's what I used, tbh.
@gaul commented on GitHub (Feb 3, 2020):
Honestly I used the same last year to write the wiki entry:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3#wasabi
I inquired about developer accounts but they declined:
@gaul commented on GitHub (Feb 5, 2020):
I cannot reproduce these symptoms. I tested both 1.85 and master with:
Do you have any other configuration?
@cnanakos commented on GitHub (Feb 5, 2020):
No, the configuration is exactly the same as yours, but I suspect that you used cp to transfer files to the mountpoint. My test involved Veeam, which was using the s3fs mountpoint via an NFS export. I hope this helps.
@judassssss commented on GitHub (Feb 6, 2020):
Hi everyone, I am having the same problem; I think it's due to the multipart mechanism having problems via NFS. See #1243.
@cnanakos commented on GitHub (Feb 6, 2020):
Yes the problem is the same, writing directly to s3fs mountpoint has no issues at all.
@judassssss commented on GitHub (Feb 7, 2020):
You can try turning off multipart with `-o nomultipart`, which will solve that problem, but when we turn off multipart we get another problem: the local file's checksum doesn't match the one on S3. @gaul, is there any recommendation about this issue?
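On the checksum mismatch: a common cause is that S3-style stores return a composite ETag for multipart uploads (the MD5 of the concatenated part MD5s, suffixed with the part count) rather than the object's plain MD5, so comparing a local md5sum against the ETag fails even when the data is intact. A sketch of the composite calculation, assuming 5 MiB parts (the helper name is mine, not an s3fs API):

```python
import hashlib

def multipart_etag(data: bytes, part_size: int = 5 * 1024 * 1024) -> str:
    """S3-style composite ETag: MD5 of the part MD5s, plus a part count."""
    part_digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    if len(part_digests) == 1:
        # A single-part upload's ETag is just the plain MD5.
        return hashlib.md5(data).hexdigest()
    combined = hashlib.md5(b"".join(part_digests)).hexdigest()
    return f"{combined}-{len(part_digests)}"

data = b"x" * (12 * 1024 * 1024)       # 12 MiB -> parts of 5, 5, and 2 MiB
print(multipart_etag(data))            # composite value ending in "-3"
print(hashlib.md5(data).hexdigest())   # plain MD5, a different value
```

So an object uploaded in parts will carry an ETag like `…-3`, which no whole-file MD5 will ever match; this is expected behavior, not corruption.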
@gaul commented on GitHub (Mar 1, 2020):
@ggtakec This is likely a regression due to mixed uploads. I don't have a minimal reproducible case yet, but changing `write_multiple_offsets.py` to this:

Generates an error:
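The modified script is not reproduced above. As a hedged illustration only, the kind of workload the name `write_multiple_offsets.py` suggests is non-sequential writes at scattered offsets, which is also the pattern an NFS client tends to produce. A minimal stand-in (hypothetical; it writes to a temp file here, but pointing `path` at an s3fs mountpoint would attempt a reproduction):

```python
import os
import tempfile

# Hypothetical sketch: write at multiple, non-sequential offsets, the
# access pattern (similar to NFS-backed writes) that exercises s3fs's
# "mixed upload" code path.
path = os.path.join(tempfile.mkdtemp(), "testfile")
fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o644)
try:
    os.pwrite(fd, b"a" * 1024, 16 * 1024 * 1024)  # write far past EOF first
    os.pwrite(fd, b"b" * 1024, 0)                 # then backfill the start
finally:
    os.close(fd)

print(os.path.getsize(path))  # 16778240 (16 MiB + 1 KiB)
```

On a local filesystem this just creates a sparse file; on an s3fs mount, the out-of-order writes force s3fs to stitch existing object data and new data together at flush time, which is where the mixed-upload part sizing went wrong.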
@gaul commented on GitHub (Apr 20, 2020):
You should be able to work around this with `-o nomixupload`.

@gaul commented on GitHub (Apr 22, 2020):
Fixed by #1277.