[GH-ISSUE #2151] Issue mounting with prefix and federation token #1098

Closed
opened 2026-03-04 01:51:21 +03:00 by kerem · 3 comments

Originally created by @nchaly on GitHub (Apr 19, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2151

Additional Information

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.91 (commit:c4f95f1) with OpenSSL

(manually built from master).

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.9

Kernel information (uname -r)

5.19.0-1022-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

PRETTY_NAME="Ubuntu 22.04.2 LTS"
NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.2 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy

Details about issue

Scenario: one of our services issues federation tokens to clients to access a specified prefix ("folder"). The credentials consist of a key ID, a secret key, and a session token, and limit access to the specified prefix for a certain amount of time.

Client mounts specified prefix with the command:

s3fs -o ro -o endpoint=eu-west-1 -o dbglevel=info -o use_session_token -o use_cache=/tmp/s3fs -o del_cache -o curldbg ${BUCKET}:${PREFIX} ${MNT_PATH}

Credentials are passed via the environment. The result is that the bucket is not mounted. Checking the s3fs log files shows the following sequence of requests (with the retry added in https://github.com/s3fs-fuse/s3fs-fuse/pull/2087):

  • request GET /
  • response 403
  • request GET ${PREFIX}
  • response 404

(detailed logs below).

Note that, using the same credentials, aws s3 ls ${BUCKET}${PREFIX} returns the list of files correctly.

So the question is: is there something wrong with my scenario or permissions?
Otherwise, does this second retry with the prefix work as expected? It does not match the S3 API methods:

  • A GetObject request expects a valid key (i.e. a full path to a file) - https://docs.aws.amazon.com/AmazonS3/latest/API/API_GetObject.html - so the 404 is natural.
  • A ListObjects request should pass the prefix as a parameter - https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html - which is why the aws cli works.
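To make the mismatch concrete, here is a small illustrative sketch (Python, purely for illustration; the bucket placeholder and prefix are taken from the logs below) of the two request shapes issued with the same credentials:

```python
# Illustrative only: contrasting the two request shapes.
bucket = "BUCKET"
prefix = "60/59/f4/2b/adcf8d6234dbeb0e/"

# s3fs's retry performs a GetObject on the prefix itself; this 404s
# unless a zero-byte "folder object" with exactly this key exists:
getobject_path = f"/{prefix}"

# `aws s3 ls` performs ListObjectsV2, passing the prefix as a query
# parameter, which only needs s3:ListBucket permission on the bucket:
listobjects_path = f"/?list-type=2&delimiter=/&prefix={prefix}"
```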

s3fs logs (grep s3fs /var/log/syslog), shortened:

init v1.91(commit:c4f95f1) with OpenSSL, credential-library(built-in)
check services.
      check a bucket.
      
> GET / HTTP/1.1
> Host: BUCKET.s3.amazonaws.com
> User-Agent: s3fs/1.91 (commit hash c4f95f1; OpenSSL)
> Authorization: **

< HTTP/1.1 403 Forbidden
< x-amz-bucket-region: eu-west-1
curl.cpp:RequestPerform(2566): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>#012<Error><Code>AccessDenied
* Re-using existing connection! (#0) with host BUCKET.s3.amazonaws.com
* Connected to BUCKET.s3.amazonaws.com (52.218.106.50) port 443 (#0)

> GET /60/59/f4/2b/adcf8d6234dbeb0e/ HTTP/1.1
> Host: BUCKET.s3.amazonaws.com
> User-Agent: s3fs/1.91 (commit hash c4f95f1; OpenSSL)
> Accept: */*
> Authorization: **


< HTTP/1.1 404 Not Found
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Wed, 19 Apr 2023 07:27:23 GMT
< Server: AmazonS3
< 
* Connection #0 to host BUCKET.s3.amazonaws.com left intact
      HTTP response code 404 was returned, returning ENOENT
curl.cpp:RequestPerform(2572): Body Text: <?xml version="1.0" encoding="UTF-8"?>#012<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>60/59/f4/2b/adcf8d6234dbeb0e/</Key>
s3fs.cpp:s3fs_check_service(4366): bucket or key not found(host=https://s3.amazonaws.com) - result of checking service.
curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
s3fs.cpp:s3fs_exit_fuseloop(4116): Exiting FUSE event loop due to errors

Just in case, permissions used for the role:

"Action": [
                "sts:AssumeRole",
                "s3:PutObjectAcl",
                "s3:PutObject",
                "s3:ListMultipartUploadParts",
                "s3:ListBucketMultipartUploads",
                "s3:ListBucket",
                "s3:GetObjectAcl",
                "s3:GetObject",
                "s3:DeleteObject",
                "s3:AbortMultipartUpload"
            ],
            "Effect": "Allow",
kerem closed this issue 2026-03-04 01:51:21 +03:00
@nchaly commented on GitHub (Apr 21, 2023):

An update: this is probably by design. Here is what the code says:

// [NOTE]
// This checks whether access to the bucket when s3fs is started.
//
// The following patterns for mount point are supported by s3fs:
// (1) Mount the bucket top
// (2) Mount the directory(folder) under the bucket. In this case, there are
//     the following cases:
//     (2A) Directories created by clients other than s3fs
//     (2B) Directory created by s3fs
//
// At first in this functoin, if user has access to the bucket, the checking
// access to the bucket succeeds and this function returns success. However,
// if user does not have access to the bucket and has permissions to the
// directory, this first check will fail.
// But if user specifies the directory for mount point, this function retries
// to check with the path containing the directory. And it will be success.
//
// In the case of (2A), the check will succeed if the bucket allows to access,
// but will fail if permissions are granted only to the directory, as it is not
// a directory recognized by s3fs. This combination is not supported by s3fs,
// so make sure user create the directory before starting s3fs.
// In case (2B), if user does not have access to bucket, the first check(to
// bucket) fails, but the retry check(with path) succeeds.
//
static int s3fs_check_service()
   ...
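The flow described in that comment can be sketched as follows (a hypothetical Python model, not the actual C++ implementation; `head_object` is a made-up stand-in for the real S3 requests):

```python
# Hypothetical model of the documented s3fs_check_service() flow.
# head_object(path) -> HTTP status code for a request against that path.
def check_service(head_object, mount_prefix=""):
    """Return True if the mount should proceed, mimicking the described logic."""
    if head_object("/") == 200:          # (1) access to the bucket top succeeds
        return True
    if mount_prefix:                     # retry with the mount path (case 2B)
        return head_object("/" + mount_prefix) == 200
    return False

# Bucket-level access denied, but an s3fs-created directory object exists (2B):
responses = {"/": 403, "/data/": 200}
assert check_service(lambda p: responses.get(p, 404), "data/") is True

# Prefix-only permissions and no "folder object" (2A) -> mount fails:
responses = {"/": 403}
assert check_service(lambda p: responses.get(p, 404), "data/") is False
```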

So it looks like it fails by design (though this is not stated explicitly).

My concern is that this approach does not look reasonable, especially in read-only scenarios. It may be that I am not really aware of the specifics of "folder object" usage -- but s3fs had previously worked fine for me in scenarios where the whole bucket is used and no directories are created by s3fs.

I've added a rather clumsy commit - https://github.com/nchaly/s3fs-fuse/commit/5670dd9dab9962e55ab1d3457ddf8025fecc7653 - which does what I need in principle.

In fact, the check "make sure remote mountpath exists and is a directory" - https://github.com/s3fs-fuse/s3fs-fuse/blob/master/src/s3fs.cpp#L4379 - could be relaxed a little, to just make sure the key does not exist.

Are there scenarios that actually require that s3 mountpath is a "folder object"?
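The relaxed check suggested above could look something like this (a hypothetical Python sketch, not the proposed C++ change; `head_object` is a made-up stand-in returning the status and whether the key is a directory object):

```python
# Hypothetical relaxed check: only reject the mount when the mount-path
# key exists but is NOT a directory, instead of requiring it to exist.
def relaxed_check(head_object, mount_prefix):
    status, is_dir = head_object("/" + mount_prefix)
    if status == 404:
        return True            # nothing there: a prefix-scoped mount is fine
    return status == 200 and is_dir

assert relaxed_check(lambda p: (404, False), "60/59/") is True   # no key: OK
assert relaxed_check(lambda p: (200, True), "60/59/") is True    # directory: OK
assert relaxed_check(lambda p: (200, False), "60/59/") is False  # plain object: reject
```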

@ggtakec commented on GitHub (May 4, 2023):

@nchaly I'm sorry for my late reply.

I believe your code is working correctly.
I made #2153 based on it.
The difference is that it takes advantage of the compat_dir option that s3fs already has.
Try using the code from that PR, specifying the compat_dir option, to mount the directory that failed.
I think it works fine.

If there are no problems, I will merge it.
Thanks in advance for your assistance.

@ggtakec commented on GitHub (May 7, 2023):

#2153 has been merged, so I'm closing this issue.
If you still have problems, please reopen.
