mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[PR #1313] [MERGED] Fixed about ParallelMixMultipartUpload #1915
📋 Pull Request Information
Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/1313
Author: @ggtakec
Created: 6/21/2020
Status: ✅ Merged
Merged: 6/24/2020
Merged by: @gaul
Base: master ← Head: bug_fix
📝 Commits (1)
6c0b958 Fixed about ParallelMixMultipartUpload
📊 Changes
5 files changed (+240 additions, -275 deletions)
📝 src/curl.cpp (+2 -14)
📝 src/curl.h (+3 -2)
📝 src/fdcache.cpp (+203 -253)
📝 src/fdcache.h (+3 -6)
📝 test/integration-test-main.sh (+29 -0)
📄 Description
Relevant Issue (if applicable)
n/a
Details
The following bugs were found in the processing of mixed multipart upload.
Details of the bug
For example, suppose there is a 500MB file that does not yet exist in the s3fs cache.
In this case, if the user performs the following operations on the target file, the result differs for each:
- This works correctly.
- It fails with a 400 HTTP error (EIO).
- It fails with a 400 HTTP error (EIO).
- It also unnecessarily downloads data before uploading, which reduces performance.
These fatal errors are EntityTooSmall.
Cause
In the processing of mixed multipart upload, there was a mistake in the calculation of the Copy and Upload ranges.
The upload may complete normally, but it can fail depending on the write position when there is no cache file.
Fixes
Made the following fixes.
Download and upload range processing
In the case of mixed multipart upload, s3fs downloads the insufficient area in advance because of the minimum upload part size.
For this reason, it is necessary to calculate the Copy and Upload areas, but there were some bugs in this logic.
To fix this, the following functions that performed these processes were changed or deleted:
- It was rewritten significantly, and the function name (PageList::GetPageListsForMultipartUpload) and its arguments were also changed.
- It has been absorbed into PageList::GetPageListsForMultipartUpload, is no longer needed, and has been removed.
- It has been absorbed into PageList::GetPageListsForMultipartUpload, is no longer needed, and has been removed.
- It has been modified to prepare local functions that operate on fdcache_list_t, and now only calls them.
- It has changed along with the changes to GetPageListsForMultipartUpload.
- Other changes, such as calling ParallelMixMultipartUploadRequest.
Others
Impact
In mixed multipart upload, EIO (HTTP response code 400) problems may have the same cause as the one fixed by this PR.
There are also times when performance is poor, which may be caused by the same bug: performance degrades because s3fs downloads ranges that do not need to be downloaded.
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.