[GH-ISSUE #2041] s3fs continues to not mount reliably #1029
Originally created by @cyb3rz3us on GitHub (Sep 28, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2041
Version of s3fs being used (s3fs --version)
s3fs-fuse.x86_64 1.91-1.el8 @epel
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Kernel information (uname -r)
4.18.0-372.9.1.el8.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Rocky Linux"
VERSION="8.6 (Green Obsidian)"
s3fs command line used, if applicable
The following command lines were attempted. Credentials were supplied via either (a) the ~/.aws/credentials file or (b) the -o passwd_file option, and the credentials carry AmazonS3FullAccess permissions.

s3fs -d [ BUCKET_NAME ] /s3-test -o url=https://s3-us-west-2.amazonaws.com
s3fs -d [ BUCKET_NAME ] /s3-test -o url="https://s3-us-west-2.amazonaws.com"
s3fs -d [ BUCKET_NAME ] /s3-test -o endpoint="us-west-2"
s3fs -d [ BUCKET_NAME ] /s3-test -o endpoint="us-west-2" -o url=https://s3-us-west-2.amazonaws.com
s3fs -d [ BUCKET_NAME ] /s3-test -o endpoint="us-west-2" -o url="https://s3-us-west-2.amazonaws.com"

/etc/fstab entry, if applicable
Not applicable
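For anyone reproducing this, here is a minimal sketch of the passwd_file variant of the mounts listed above. The bucket name, mount point, and key values are placeholders, not details from the original report:

# create the credentials file s3fs expects (ACCESS_KEY_ID:SECRET_ACCESS_KEY)
echo "AKIAXXXXXXXXXXXXXXXX:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" > ~/passwd-s3fs.TMP
chmod 600 ~/passwd-s3fs.TMP   # s3fs rejects a credentials file readable by other users

# mount a us-west-2 bucket with debug output, pinning both the region and the URL
mkdir -p /s3-test
s3fs -d my-example-bucket /s3-test \
    -o passwd_file=~/passwd-s3fs.TMP \
    -o endpoint=us-west-2 \
    -o url=https://s3-us-west-2.amazonaws.com

# unmount when done
fusermount -u /s3-test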
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
The debug output was captured using:

s3fs -d [ BUCKET_NAME ] /s3-test -o endpoint="us-west-2" -o passwd_file=~/passwd-s3fs.TMP

but the other CLI attempts result in nearly the exact same debug message.

Details about issue
Buckets in us-east-1 mount without problems; the issue only exists for buckets in all other regions, despite using the -o endpoint and/or -o url options. We can run the aws s3 command all day on the same set of buckets using the exact same user credentials without any issue.

@cyb3rz3us commented on GitHub (Sep 29, 2022):
This issue has been resolved, and the root cause is a very narrow corner case. However, I am still going to post what happened to provide a possible clue about something to watch out for in your AWS implementation.
We had an old policy in place that limited which buckets could be accessed via a given S3 endpoint. This had been implemented in such a way that only a very small subset of connection attempts would experience this issue.
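The exact policy from our environment is not reproduced here, but as a purely hypothetical illustration (bucket name invented), one common way to express "only these buckets may be reached through this endpoint" is a VPC endpoint policy like the one below. Any bucket not listed is denied at the endpoint, even for a caller whose own IAM permissions (for example AmazonS3FullAccess) would otherwise allow the request:

# Hypothetical endpoint policy for illustration only; names are made up.
cat > endpoint-policy-example.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowOnlyApprovedBuckets",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::approved-bucket",
        "arn:aws:s3:::approved-bucket/*"
      ]
    }
  ]
}
EOF

Because both the endpoint policy and the caller's IAM policy have to allow a request, a narrow allow-list like this can silently block s3fs for some buckets while the same credentials keep working everywhere else.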
So for me, the takeaway is this: when in doubt, always double-, triple-, and quadruple-check your permissions and policies.
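A rough checklist for that kind of double-checking, assuming the aws CLI is configured with the same credentials that s3fs uses (bucket name is a placeholder):

# confirm the bucket's real region matches the endpoint/url passed to s3fs
# (us-east-1 is reported as a null LocationConstraint)
aws s3api get-bucket-location --bucket my-example-bucket

# confirm the credentials can list the bucket at all
aws s3 ls s3://my-example-bucket

# look for explicit Deny statements or endpoint conditions in the bucket policy
aws s3api get-bucket-policy --bucket my-example-bucket --query Policy --output text

# if traffic flows through a VPC endpoint, review its policy as well
aws ec2 describe-vpc-endpoints --query 'VpcEndpoints[].{Id:VpcEndpointId,Policy:PolicyDocument}'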