[GH-ISSUE #262] What's the minimum REST API requirement for the backend object storage? #133

Closed
opened 2026-03-04 01:42:29 +03:00 by kerem · 6 comments

Originally created by @akiradeveloper on GitHub (Sep 11, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/262

This project is quite interesting to me, so I have a question:

It doesn't seem like s3fs uses all of the S3 APIs. For example, the bucket is given, so the "PUT Bucket" API isn't required.

My guess is that it only uses restricted versions of:

  • PUT Object
  • DELETE Object
  • GET Object
  • LIST Bucket

where "restricted" means that only a subset of the parameters and headers is required within each API.

kerem closed this issue 2026-03-04 01:42:29 +03:00

@gaul commented on GitHub (Sep 11, 2015):

@akiradeveloper s3fs calls a few more APIs:

  • POST multi-part upload
  • POST multi-part copy
  • PUT single-part copy
  • HEAD Object
  • GET Bucket (used for checking credentials)

You can grep through the source for `CalcSignatureV2` to see where the RPCs are created. We should collect the full list so that users can create restricted IAM roles.
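As a sketch of what such a restricted IAM policy might look like: the mapping below from the REST calls listed above onto IAM actions is my assumption, not an audited list (multipart upload, multipart copy, and single-part copy fall under `s3:PutObject`, HEAD Object under `s3:GetObject`, and GET Bucket under `s3:ListBucket`), and `my-bucket` is a placeholder:

```shell
# Hypothetical minimal IAM policy for s3fs; action mapping and bucket
# name are assumptions for illustration, not a verified list.
cat > s3fs-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
      ],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}
EOF
# Sanity-check that the file is valid JSON before attaching it to a role.
python3 -c "import json; json.load(open('s3fs-policy.json'))" && echo OK
```

Note the split between bucket-level actions (`s3:ListBucket`) on the bucket ARN and object-level actions on the `/*` ARN, which IAM requires.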


@akiradeveloper commented on GitHub (Sep 11, 2015):

Oh, so multipart upload is required. I guess it's needed because a file can be larger than the threshold (5 MB?), and such a file isn't easy to upload as a normal PUT.

Ideally, the requirement should be as small as possible, so that only the simple GET/HEAD/PUT/LIST operations are required, since they are easier to support.

Do you have a plan to add a special mode in which s3fs assumes files are smaller than the threshold and never uses multipart uploads?


@gaul commented on GitHub (Sep 11, 2015):

@akiradeveloper You can control multipart behavior via `-o multipart_size` or disable it entirely via `-o nomultipart`. You can also emulate server-side copy via `-o nocopyapi`. Which object store are you using that does not support these operations?
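A quick sketch of how these options are passed at mount time (the bucket name and mount point are placeholders; `multipart_size` takes a value in MB per the s3fs man page):

```shell
# Raise the part size so fewer multipart requests are issued:
s3fs my-bucket /mnt/s3 -o multipart_size=64

# Disable multipart entirely; every upload becomes a single PUT Object:
s3fs my-bucket /mnt/s3 -o nomultipart

# Emulate server-side copy with GET + PUT, for stores lacking the copy API:
s3fs my-bucket /mnt/s3 -o nocopyapi
```

With `-o nomultipart` the maximum file size drops to the backend's single-PUT limit (5 GB on S3), which matches the "simple GET/HEAD/PUT/LIST only" mode asked about above.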


@akiradeveloper commented on GitHub (Sep 12, 2015):

Thanks, but I don't have a particular one yet; it was just technical curiosity. This project is nice.


@ggtakec commented on GitHub (Sep 13, 2015):

Hi,
Some additional information: if you are going to use object storage that is S3-like but does not support the full S3 API, see the man page entries for "nocopyapi", "norenameapi", and "nomultipart".
These options help s3fs connect to S3-compatible object storage.


@akiradeveloper commented on GitHub (Sep 13, 2015):

Thank you for the information.
