[GH-ISSUE #2129] should not require access to ListBucket on root to mount. #1086

Open
opened 2026-03-04 01:51:16 +03:00 by kerem · 5 comments

Originally created by @gposton on GitHub (Mar 14, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2129

Additional Information

Version of s3fs being used (s3fs --version)

1.91

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

2.9.2-11

Kernel information (uname -r)

3.10.0-1160.83.1.el7.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

How to run s3fs, if applicable

media-bucket:/flashteam /home/gposton/Desktop/Media fuse.s3fs _netdev,allow_other,iam_role=auto,uid=1001 0 0

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Mar 14 17:21:15 ip-172-16-16-34 s3fs[18627]: s3fs version 1.91(unknown) : s3fs -o rw,allow_other,iam_role=auto,uid=1001,dev,suid hotspring-production-us-west-2:/flashteam /home/pushkar/Desktop/Media
Mar 14 17:21:15 ip-172-16-16-34 s3fs[18627]: Loaded mime information from /etc/mime.types
Mar 14 17:21:15 ip-172-16-16-34 s3fs[18628]: init v1.91(commit:unknown) with OpenSSL
Mar 14 17:21:16 ip-172-16-16-34 s3fs[18628]: s3fs.cpp:s3fs_check_service(3547): Failed to connect region 'us-east-1'(default), so retry to connect region 'us-west-2'.
Mar 14 17:21:17 ip-172-16-16-34 s3fs[18628]: s3fs.cpp:s3fs_check_service(3572): Failed to connect by sigv4, so retry to connect by signature version 2.
Mar 14 17:21:17 ip-172-16-16-34 s3fs[18628]: s3fs.cpp:s3fs_check_service(3587): invalid credentials(host=https://s3-us-west-2.amazonaws.com) - result of checking service.

Details about issue

Previous versions of s3fs did not require ListBucket permission on the bucket root when using a specific sub-folder as the mount. The current version does.

Here is the IAM policy that worked with previous versions:

{
    "Statement": [
        {
            "Action": "s3:ListBucket",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::media-bucket",
            "Sid": "",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "flashteam/*"
                    ]
                }
            }
        },
        {
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::media-bucket/flashteam/*",
            "Sid": ""
        },
        {
            "Action": "s3:Delete*",
            "Effect": "Deny",
            "Resource": "arn:aws:s3:::media-bucket/flashteam/*",
            "Sid": ""
        }
    ],
    "Version": "2012-10-17"
}

With this version of s3fs, the condition on the ListBucket permission prevents a successful mount. If I remove the condition on ListBucket, s3fs successfully mounts the bucket. Further, I can then re-add the condition to the IAM policy and everything continues to work fine. It looks like this ONLY prevents the initial mount.

Our requirement is to prevent IAM access to list buckets so that users cannot use awscli (or similar tools) directly to view the directory structure of the entire bucket.

I do not believe that the permission to ListBucket on the root of the bucket should be required when mounting a bucket subfolder.
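For what it's worth, the symptom (mount fails with the condition, but everything works once mounted) is consistent with s3fs issuing a mount-time listing whose `s3:prefix` does not match the `StringLike` pattern. Here is a minimal sketch of that condition check, approximating IAM's `StringLike` matching with Python's `fnmatch`; the exact prefix s3fs sends at mount time is an assumption, not something shown in the log above.

```python
from fnmatch import fnmatchcase

# Prefix patterns allowed by the policy's StringLike condition (from the
# policy in this issue).
ALLOWED_PATTERNS = ["flashteam/*"]

def list_allowed(prefix: str) -> bool:
    """Return True if a ListBucket request with this s3:prefix would match
    the condition (approximating IAM StringLike with fnmatch; '*' matches
    any characters, including '/')."""
    return any(fnmatchcase(prefix, pat) for pat in ALLOWED_PATTERNS)

# A listing under the mounted sub-folder matches the condition:
print(list_allowed("flashteam/photos/"))   # True
# But a mount-time check that lists with the bare prefix (no trailing
# slash), or with no prefix at all, is denied:
print(list_allowed("flashteam"))           # False
print(list_allowed(""))                    # False
```

Under this model, only the bucket-level check at mount time falls outside the allowed patterns, which would explain why re-adding the condition after a successful mount causes no further errors.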


@ggtakec commented on GitHub (Mar 26, 2023):

@gposton
Since v1.91, I think there has been a fix related to this issue.
#2087 (and #2114) may have solved your problem.

If you can build and test the code on the master branch, please try it.
Thanks in advance for your assistance.


@schorfi commented on GitHub (Mar 30, 2023):

@ggtakec I tried it and it does not seem to work as expected; I have a similar use case:
s3fs mybucket:/foo/bar/ /mnt/bar/ -f -d -o passwd_file=/etc/passwd-s3fs -o use_path_request_style -o endpoint=eu-central-1 -o url=https://s3.eu-central-1.amazonaws.com

requirement: minimal access rights in this use case: list/read only on a specific (set of) prefix(es)
expectation: successful mount of the objects under that prefix, even though others are forbidden

fails with 403 - AccessDenied and/or
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message>
on https://s3.eu-central-1.amazonaws.com/mybucket/foo/bar/

the same parameters from above work if I add a policy to include root read/list permissions on that bucket, which looks like

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket"
        }
    ]
}

in addition to the other policy, which does the trick on the aws cli without the extension above, e.g.
aws s3 ls s3://mybucket/foo/bar/
and which is

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::mybucket/foo/bar/*",
                "arn:aws:s3:::mybucket/another/one/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket",
            "Condition": {
                "StringLike": {
                    "s3:prefix": [
                        "foo/bar/*",
                        "another/one/*"
                    ]
                }
            }
        }
    ]
}

What I observed so far: accessing https://s3.eu-ce..../mybucket/foo/bar/ returns 404 in both cases (success or not),
but in the success case a request
?delimiter=/&max-keys=2&prefix=foo/bar
is made, which succeeds,
whereas in the unsuccessful case it is not, and s3fs prints the S3 response message
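One possible reading of that probe (a sketch only, approximating IAM's StringLike matching with Python's fnmatch; whether IAM evaluates s3fs's probe this way is an assumption): the probe lists with prefix `foo/bar`, with no trailing slash, so a condition pattern of `foo/bar/*` cannot match it, which would explain why the unconditioned ListBucket statement makes the difference.

```python
from fnmatch import fnmatchcase

# s3fs's mount-time probe (from the observation above) lists with the
# prefix "foo/bar", i.e. no trailing slash.
probe_prefix = "foo/bar"

# A StringLike condition on "foo/bar/*" requires a literal '/' after
# "bar", so the probe's prefix does not match it:
print(fnmatchcase(probe_prefix, "foo/bar/*"))  # False

# A pattern that also allows the bare prefix would match:
print(fnmatchcase(probe_prefix, "foo/bar*"))   # True
```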


@ggtakec commented on GitHub (May 8, 2023):

@schorfi
I simplified the multiple mount-point checks.
If you can, please try to check with #2155.
Thanks in advance for your assistance.


@jmcarpenter2 commented on GitHub (Jan 12, 2024):

Hi there @ggtakec !

My team and I are also experiencing this issue. I will do my best to provide a detailed example.

Basic setup
Version of s3fs being used (s3fs --version)
1.92 (we also tried 1.93)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)
2.9.9

Kernel information (uname -r)
4.14.334-252.552.amzn2.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION_ID="22.04"
VERSION="22.04.3 LTS (Jammy Jellyfish)"
VERSION_CODENAME=jammy
ID=ubuntu
ID_LIKE=debian
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
UBUNTU_CODENAME=jammy

Toy example, based on real issue

How to run s3fs, if applicable

s3fs "david-bucket:/home/David" "/home/jovyan/work/personal" -o sigv4,iam_role=auto -o endpoint=us-east-2 -o url=https://s3.us-east-2.amazonaws.com \
    -o allow_other -o uid=1000,gid=1000 -o umask="0000" \
    -o compat_dir,complement_stat,nonempty -o dbglevel=warn -f &

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

2024-01-12T18:14:02.304Z [ERR] s3fs.cpp:list_bucket(3402): ListBucketRequest returns with error.
2024-01-12T18:14:02.304Z [ERR] s3fs.cpp:directory_empty(1259): list_bucket returns error.
2024-01-12T18:14:02.304Z [WAN] s3fs.cpp:readdir_multi_head(3299): david_stuff/ object does not have any object under it(errno=-1),
2024-01-12T18:14:02.309Z [ERR] curl.cpp:RequestPerform(2566): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>XXX</RequestId><HostId>XXX</HostId></Error>

IAM Policy

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Sid": "bucket0nav",
			"Action": [
				"s3:ListBucket"
			],
			"Effect": "Allow",
			"Resource": [
				"arn:aws:s3:::david-bucket"
			],
			"Condition": {
				"StringEquals": {
					"s3:prefix": [
						"",
						"home",
						"home/David"
					],
					"s3:delimiter": [
						"/"
					]
				}
			}
		},
		{
			"Sid": "bucket0recurse",
			"Action": [
				"s3:ListBucket"
			],
			"Effect": "Allow",
			"Resource": [
				"arn:aws:s3:::david-bucket"
			],
			"Condition": {
				"StringLike": {
					"s3:prefix": [
						"home/David/*"
					]
				}
			}
		},
		{
			"Sid": "writeaccess",
			"Action": [
				"s3:AbortMultipartUpload",
				"s3:DeleteObject",
				"s3:GetObject",
				"s3:GetObjectVersion",
				"s3:ListMultipartUploadParts",
				"s3:PutObject"
			],
			"Effect": "Allow",
			"Resource": [
				"arn:aws:s3:::david-bucket/home/David/*",
			]
		}
	]
}

We are trying to list a bucket with contents like the following:

home
|------David
           |------david_stuff
                      |-------Life and Health History Survey-responses.json
                      |-------file_2.txt
                      |-------etc..

Details
It appears that the spaces in the object key Life and Health History Survey-responses.json under the david_stuff prefix are somehow causing issues in relation to the ListBucket permissions on home/David/*, and so we get that ListBucket access-denied error, where david_stuff/ object does not have any object under it(errno=-1)

I am hoping you can help us with this. I will say that we confirmed removing the conditions from the s3:ListBucket statement does fix this, but is not an acceptable solution for us and our customers. Hoping you might know a better solution! Thank you so much in advance for looking into this.
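One hedged guess at the failure mode reported above (an illustration only, not a confirmed diagnosis of the s3fs bug): keys containing spaces must be percent-encoded in the ListObjects query string, while the IAM `s3:prefix` condition is evaluated against the decoded value, so an encoding mismatch on either side would produce exactly this kind of AccessDenied. The sketch below just shows the encoding step with Python's stdlib, using the key from the tree above.

```python
from urllib.parse import quote

# The object key from the directory tree above, containing spaces.
prefix = "home/David/david_stuff/Life and Health History Survey-responses.json"

# In the signed ListObjects query string the prefix must be
# percent-encoded (spaces become %20, slashes stay as-is), while the
# IAM s3:prefix condition is matched against the decoded form.
print(quote(prefix, safe="/"))
```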


@ggtakec commented on GitHub (Feb 3, 2024):

I merged the fixes for the bugs detected by @jmcarpenter2.
It would be helpful if someone could confirm if the problem has been resolved.
