[GH-ISSUE #2094] [Feature] Limit the size of maximum storage space. #1065

Closed
opened 2026-03-04 01:51:04 +03:00 by kerem · 4 comments

Originally created by @fxzxmic on GitHub (Jan 12, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2094

To avoid high bills due to errors or mistakes, I request a new feature to set an upper limit on the storage space s3fs may use.

kerem closed this issue 2026-03-04 01:51:04 +03:00

@ggtakec commented on GitHub (Jan 15, 2023):

@fxzxmic Thanks for the feature request.

As you said, it would be nice if there were a function to set the number of requests and an upper limit on storage capacity.
However, I think it is difficult to control these on the s3fs side.
For example, in the case of AWS, it would be easier to monitor the number of requests and storage capacity in the settings on the AWS S3 side.
Currently, it is difficult for the client (s3fs) to check the server-side capacity (that is, to measure the size of all objects).

Could the restrictions you want to enforce be set on the server side instead?


@fxzxmic commented on GitHub (Jan 16, 2023):

> @fxzxmic Thanks for the feature request.
>
> As you said, it would be nice if there were a function to set the number of requests and an upper limit on storage capacity. However, I think it is difficult to control these on the s3fs side. For example, in the case of AWS, it would be easier to monitor the number of requests and storage capacity in the settings on the AWS S3 side. Currently, it is difficult for the client (s3fs) to check the server-side capacity (that is, to measure the size of all objects).
>
> Could the restrictions you want to enforce be set on the server side instead?

No, that's why I want to set restrictions on the client side.


@ggtakec commented on GitHub (Jan 22, 2023):

As per my previous answer, I think it will be difficult to support this feature.

Examples:

  • If you put a limit on the total size of the objects in the bucket, then when s3fs (the client) starts, it has to retrieve the size of every file (object) in the bucket.
    Even if this were implemented, it could take a long time, depending on the number of objects in the bucket.
  • If you want to limit the transfer volume and request count after s3fs starts, you have to accumulate them from the start of s3fs until it exits.
    This might be possible to implement, but deciding how s3fs should behave when the limit is reached is difficult (forced termination? fail all subsequent operations? etc.).
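To make the cost behind the first point concrete, here is a minimal sketch of the startup scan it describes: enforcing a total-size quota client-side would mean listing every object in the bucket and summing the sizes (with boto3 this would be a ListObjectsV2 paginator). The pages below are simulated, and all names are illustrative, not part of s3fs:

```python
def bucket_used_bytes(pages):
    """Sum object sizes across paginated ListObjectsV2-style responses."""
    total = 0
    for page in pages:  # one LIST request per page (at most 1000 keys each)
        for obj in page.get("Contents", []):
            total += obj["Size"]
    return total

# Simulated responses: two pages means two round trips; a bucket with
# millions of objects would need thousands of LIST requests at mount time.
fake_pages = [
    {"Contents": [{"Key": "a.bin", "Size": 1024}, {"Key": "b.bin", "Size": 2048}]},
    {"Contents": [{"Key": "c.bin", "Size": 4096}]},
]
print(bucket_used_bytes(fake_pages))  # 7168
```

Since S3's LIST API returns at most 1000 keys per request, the scan scales linearly with the number of objects, which is the delay referred to above.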

For these reasons, I'm sorry, but I think it is realistically difficult to impose restrictions from the client side at this time.

However, if what you want is a size limit for a single uploaded file, we could consider whether to implement that.
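The second point, accumulating usage since mount, is easier to sketch, but it highlights the open question raised above: what should happen at the limit? The toy class below picks one answer (reject further operations, as a caller might surface via an ENOSPC-style error); the class and its names are hypothetical and nothing in s3fs works this way:

```python
class TransferQuota:
    """Hypothetical mount-lifetime accounting of bytes and request count."""

    def __init__(self, max_bytes, max_requests):
        self.max_bytes = max_bytes
        self.max_requests = max_requests
        self.bytes_used = 0
        self.requests = 0

    def record(self, nbytes):
        """Account for one request; return False once either limit would be hit."""
        if (self.bytes_used + nbytes > self.max_bytes
                or self.requests + 1 > self.max_requests):
            return False  # caller would reject the operation, e.g. with ENOSPC
        self.bytes_used += nbytes
        self.requests += 1
        return True

quota = TransferQuota(max_bytes=10_000, max_requests=3)
print(quota.record(4_000))  # True
print(quota.record(4_000))  # True
print(quota.record(4_000))  # False: the byte limit would be exceeded
```

Note the counters reset on every remount, which is another reason a client-side quota cannot substitute for server-side monitoring.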


@fxzxmic commented on GitHub (Jan 22, 2023):

I understand. If the server cannot directly return the used storage capacity, a client-side implementation would sacrifice a lot of performance.
Let me close this issue.
