mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[PR #1647] [MERGED] Don't do a multipart upload with first part smaller than 5MB #2089
📋 Pull Request Information
Original PR: https://github.com/s3fs-fuse/s3fs-fuse/pull/1647
Author: @CarstenGrohmann
Created: 5/4/2021
Status: ✅ Merged
Merged: 5/27/2021
Merged by: @gaul
Base: master ← Head: noupload_on_space_shortage

📝 Commits (1)
- d003832 Ensuring multipart size even when storage is low

📊 Changes
1 file changed (+4 additions, -0 deletions)
- src/fdcache_entity.cpp (+4 -0)

📄 Description
When the temporary storage filled up, the old implementation started a multipart upload with the data accumulated so far, even if the minimum part size had not been reached. Depending on the S3 implementation, this can cause errors.

There is no real solution for a shortage of temporary storage. This change implements two mitigations. They may or may not help, depending on the speed of the incoming data versus the speed of writing data to S3.
The new implementation handles two different scenarios:

- Minimum part size not reached: emit a warning and return -ENOSPC.
- Temporary storage is between the minimum part size and the configured multipart size: permanently reduce the multipart size to the current size. This may cause the multipart size to be repeatedly, incrementally, and permanently reduced toward the minimum size.
There is no guarantee that this frees enough memory fast enough. As "Example Scenario 2" shows, the multipart size is reduced but the copy fails nevertheless. Maybe this can be solved by an additional ftruncate. Alternatively, this condition could be merged with the first one, so that a warning is emitted and -ENOSPC is returned for all requests smaller than the multipart size.
What do you think about this change?
Starting s3fs
Example Scenario 1
New debug output w/ warning:
Example Scenario 2
The multipart size is reduced from 256 MB to 166 MB:

and the write still fails a few seconds later
🔄 This issue represents a GitHub Pull Request. It cannot be merged through Gitea due to API limitations.