mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #2094] [Feature] Limit the size of maximum storage space. #1065
Originally created by @fxzxmic on GitHub (Jan 12, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2094
To avoid high bills caused by errors or mistakes, I would like to request a new feature that sets an upper limit on the storage space s3fs can use.
@ggtakec commented on GitHub (Jan 15, 2023):
@fxzxmic Thanks for the feature request.
As you said, it would be nice to have a function that limits the number of requests and sets an upper limit on storage capacity.
However, I think it is difficult to control these on the s3fs side.
For example, in the case of AWS, it would be easier to monitor the number of requests and the storage capacity through settings on the AWS S3 side.
Currently, checking the used capacity on the server (measuring the size of all objects) from the client (s3fs) side is difficult.
Could you instead enforce the restrictions you want on the server side?
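For AWS specifically, one server-side option along these lines is S3's daily `BucketSizeBytes` storage metric in CloudWatch. Below is a minimal sketch of reading it; the bucket name and the boto3 call shown in comments are illustrative assumptions, not s3fs code.

```python
def latest_bucket_size(datapoints):
    """Pick the most recent datapoint's 'Average' value from a CloudWatch
    get_metric_statistics response, or None if no datapoints were returned."""
    if not datapoints:
        return None
    return max(datapoints, key=lambda d: d["Timestamp"])["Average"]

# In real use the datapoints would come from CloudWatch's daily S3 storage
# metric, e.g. with boto3 (bucket name hypothetical):
#
#   import boto3
#   from datetime import datetime, timedelta, timezone
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(
#       Namespace="AWS/S3",
#       MetricName="BucketSizeBytes",
#       Dimensions=[{"Name": "BucketName", "Value": "my-bucket"},
#                   {"Name": "StorageType", "Value": "StandardStorage"}],
#       StartTime=datetime.now(timezone.utc) - timedelta(days=3),
#       EndTime=datetime.now(timezone.utc),
#       Period=86400,
#       Statistics=["Average"],
#   )
#   used_bytes = latest_bucket_size(resp["Datapoints"])
```

An alarm on this metric (or a bucket quota via service-side tooling) catches runaway usage without s3fs having to enumerate objects.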
@fxzxmic commented on GitHub (Jan 16, 2023):
No, that's why I want to set restrictions on the client side.
@ggtakec commented on GitHub (Jan 22, 2023):
As per my previous answer, I think it will be difficult to support this feature.
For example:
Even if this check were provided, it would still take a long time to process, depending on the objects in the bucket.
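To illustrate why this is slow: measuring a bucket from the client means paging through every object with ListObjectsV2 (1000 keys per page) and summing the sizes. A hedged sketch, with the boto3 wiring shown only in comments (bucket name hypothetical, not s3fs code):

```python
def total_bucket_bytes(pages):
    """Sum object sizes across ListObjectsV2 result pages.

    Every object in the bucket must be listed, one page of up to 1000 keys
    per request, which is why this check gets slow on large buckets."""
    return sum(obj["Size"] for page in pages for obj in page.get("Contents", []))

# Real use would feed in paginated ListObjectsV2 responses, e.g. with boto3:
#
#   import boto3
#   s3 = boto3.client("s3")
#   pages = s3.get_paginator("list_objects_v2").paginate(Bucket="my-bucket")
#   used_bytes = total_bucket_bytes(pages)
```

A bucket with millions of objects means thousands of list requests per check, so doing this on every write would dominate s3fs's runtime.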
The check itself might be possible to implement, but it is unclear how s3fs should behave once the limit is reached (forced termination? return errors for all operations? etc.).
For these reasons, I'm sorry, but I think it is realistically difficult to impose these restrictions from the client side at this time.
However, if what you want is a size limit for a single uploaded file, we could consider implementing that.
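If only a per-file limit were implemented, a natural behavior would be to reject the oversized upload with an errno such as EFBIG ("File too large"), the way a FUSE handler reports errors. The helper below is a hypothetical sketch of that check, not actual s3fs code:

```python
import errno

def check_upload_size(file_size, single_file_limit):
    """Hypothetical pre-upload check: return 0 to allow the write, or a
    negative errno (as FUSE handlers do) to reject a file that exceeds
    the configured single-file limit. None means no limit configured."""
    if single_file_limit is not None and file_size > single_file_limit:
        return -errno.EFBIG  # caller surfaces this as "File too large"
    return 0
```

This avoids the hard question above: a per-file limit needs no server-side accounting, only the size of the one file being written.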
@fxzxmic commented on GitHub (Jan 22, 2023):
I understand. If the server cannot directly report the used storage capacity, a client-side implementation would sacrifice a lot of performance.
Let me close this issue.