[GH-ISSUE #2412] S3FS mount failing due to use_cache argument #1187

Open
opened 2026-03-04 01:52:02 +03:00 by kerem · 1 comment
Owner

Originally created by @rushil-picturehealth on GitHub (Feb 14, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2412

Additional Information

Version of s3fs being used (s3fs --version)

V1.90

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

no fuse installed but s3fs still works

Kernel information (uname -r)

6.2.0-1017-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Ubuntu 22.04.3 LTS

How to run s3fs, if applicable

```
FSTAB_ENTRY="picturehealth-data /mounted-bucket fuse.s3fs _netdev,passwd_file=passwd-s3fs,allow_other 0 0"
echo "$FSTAB_ENTRY" | sudo tee -a /etc/fstab
```

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

Details about issue

I am using s3fs to mount an s3 bucket ('experiment-input-bucket') onto an EC2 instance via ClearML's AWS Autoscaler, specifically within the init script which looks like this:

```
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin REDACTED-ECR-URL

apt update -y
apt install s3fs -y

echo "REDACTED":"REDACTED" > passwd-s3fs
chmod 600 passwd-s3fs

mkdir -p /mounted-bucket

FSTAB_ENTRY="experiment-input-bucket /mounted-bucket fuse.s3fs _netdev,passwd_file=passwd-s3fs,allow_other 0 0"

echo "$FSTAB_ENTRY" | sudo tee -a /etc/fstab

mount -a

usermod -aG root ubuntu

chown ubuntu:ubuntu /mounted-bucket
chmod 555 /mounted-bucket

chown ubuntu:ubuntu /dev/sda1
chmod 555 /dev/sda1
```

Everything runs fine with the init script as specified above, but when I add a use_cache option to the FSTAB_ENTRY like so:

```
FSTAB_ENTRY="experiment-input-bucket /mounted-bucket fuse.s3fs _netdev,use_cache=/dev/sda1,passwd_file=passwd-s3fs,allow_other 0 0"
```

then the first run works as expected, but on subsequent runs I see this error message related to the mount:

```
docker: Error response from daemon: invalid mount config for type "bind": stat /mounted-bucket: transport endpoint is not connected.
See 'docker run --help'.
```

indicating that the s3fs bucket is disconnected from the EC2 instance.

Does anyone know how the s3fs cache could be causing this behavior? I'm almost certain the cache is the culprit, since the experiment runs successfully multiple times in a row when use_cache is left out of the FSTAB_ENTRY. For added context, the cache path is /dev/sda1, a 500 GB gp3 EBS volume attached to my autoscaled machine.
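For anyone who lands in the same state: a stale FUSE mount ("transport endpoint is not connected") usually has to be detached before it can be remounted. A minimal sketch, assuming the mount point from this issue; the function only prints the commands so the plan can be inspected without root, and you can pipe its output to `sh` to actually run it:

```shell
#!/bin/sh
# Hypothetical recovery helper for a stale s3fs mount. It prints the
# commands rather than executing them, so no root is needed to inspect it.
print_remount_plan() {
    mp="$1"
    # Detach the dead FUSE mount first; fall back to a lazy umount.
    echo "fusermount -u $mp || umount -l $mp"
    # Remount everything listed in /etc/fstab.
    echo "mount -a"
    # Confirm the mount point is live again.
    echo "findmnt $mp"
}

print_remount_plan /mounted-bucket
```

Running the printed commands as root detaches the stale mount and remounts it from /etc/fstab; checking `journalctl | grep s3fs` afterwards should show why the previous mount died.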


@ggtakec commented on GitHub (Feb 19, 2024):

Specify a local directory path, such as /tmp/cache, rather than a device such as /dev/sda1.
You can also obtain detailed information by specifying the option dbglevel, which outputs detailed error logs for s3fs.
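To make that concrete, here is a sketch of the corrected setup, assuming the EBS volume at /dev/sda1 is formatted and mounted at a hypothetical directory /mnt/s3fs-cache (the directory name and the mkfs step are illustrative, not from this thread):

```shell
#!/bin/sh
# Illustrative only. use_cache must point at a writable *directory* on a
# mounted filesystem, never at a raw device node like /dev/sda1.
#
# One-time preparation of the EBS volume (destroys any existing data):
#   sudo mkfs.ext4 /dev/sda1
#   sudo mkdir -p /mnt/s3fs-cache
#   sudo mount /dev/sda1 /mnt/s3fs-cache

# Corrected fstab entry: use_cache now names the directory, and
# dbglevel=info (as suggested above) enables more detailed s3fs logging.
FSTAB_ENTRY="experiment-input-bucket /mounted-bucket fuse.s3fs _netdev,use_cache=/mnt/s3fs-cache,passwd_file=passwd-s3fs,allow_other,dbglevel=info 0 0"
echo "$FSTAB_ENTRY"
# Append it with: echo "$FSTAB_ENTRY" | sudo tee -a /etc/fstab && sudo mount -a
```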
