mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1718] Auto mount fails for some buckets #885
Originally created by @JohnPTobe on GitHub (Jul 9, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1718
Additional Information
The following information is very important in helping us help you. Omitting these details may delay your support request, or it may receive no attention at all.
Keep in mind that the commands we provide for retrieving this information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
1.89
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
I couldn't find this. There was no separate fuse package, since I installed the s3fs-fuse package with yum.
Kernel information (uname -r)
3.10.0_1160.11.1.el7.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Red Hat Enterprise Linux Server 7.9
s3fs command line used, if applicable
/etc/fstab entry, if applicable
00data /00data fuse.s3fs _netdev,allow_other,url=https://s3.us-east-1.amazonaws.com,iam_role=auto,endpoint=us-east-1 0 0
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
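For reference, an invocation for capturing that debug output might look like the following (bucket name and mountpoint are placeholders; -f keeps s3fs in the foreground so messages go to the terminal):

```
# Run s3fs in the foreground with verbose s3fs and libcurl logging
s3fs mybucket /path/to/mountpoint -f -o dbglevel=info -o curldbg
```

This is a sketch of a typical debugging session, not output from the reporter's system.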
Details about issue
When I try to add an entry to fstab to auto mount an s3 bucket on boot or by updating fstab and then running mount -a as part of my user data script, s3fs fails to mount the bucket with an error "s3fs: bucket name /00data contains an illegal character".
If I SSH into the EC2 instance after it launches, the bucket isn't mounted. If I manually run "sudo mount -a", the bucket mounts without an issue. If I then reboot, the auto mount fails and I have to manually mount again with the same command. If I change the bucket name to something that starts with a letter, it works in all cases.
@gaul commented on GitHub (Jul 10, 2021):
I believe this is because names starting with digits are invalid domain names. Can you test with
-o use_path_request_style?
@JohnPTobe commented on GitHub (Jul 10, 2021):
What would the fstab entry look like for that? If I use the s3fs command directly, it works, and the fstab entry I have also works if I run mount -a manually. It's just the auto mount when rebooting, or when running the user-data script when the instance is first started, that has problems. I'm less sure the bucket name is the issue now, however. I tried a different bucket that started with a letter and it worked, which led me to think the digits were the problem, but then I made a different bucket named with only letters, copied data from the problematic bucket into it, and had the same problem. I intend to look into it on Monday, but I wonder if the actual issue is the contents of the bucket.
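To answer the question in passing: use_path_request_style is an ordinary s3fs mount option, so such an fstab entry would presumably just add it to the existing option list. An untested sketch based on the entry quoted above:

```
00data /00data fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://s3.us-east-1.amazonaws.com,iam_role=auto,endpoint=us-east-1 0 0
```

With path-style requests, s3fs addresses the bucket as s3.us-east-1.amazonaws.com/00data rather than as a subdomain, which sidesteps any hostname restrictions on bucket names beginning with digits.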
@JohnPTobe commented on GitHub (Jul 16, 2021):
I investigated this further and was unable to determine why it works with some buckets but not others. There must be some difference between how the auto mounting is done and what happens when someone manually types 'sudo mount -a' at the command prompt, as the same fstab file works when I do that but not when I reboot the system. Some buckets work all the time, but others never do. I eventually decided to punt on the issue and switched to using goofys for this bucket, which works.
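For anyone following the same workaround: goofys can also be mounted from fstab, using its own "goofys#bucket" device syntax rather than fuse.s3fs. A rough, untested sketch (bucket name and options are illustrative, not taken from the reporter's setup):

```
goofys#00data /00data fuse _netdev,allow_other 0 0
```

As with s3fs, credentials would come from the instance's IAM role or the usual AWS credential sources rather than from the fstab line itself.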