mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1510] Container writing into subdirectory of s3fs mounted directory fails with Input/output error first, then succeeds #791
Originally created by @muzzah on GitHub (Jan 1, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1510
Additional Information
The following information is very important in order to help us to help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we suggest for retrieving this information are oriented toward GNU/Linux distributions, so you may need to use different ones if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.86 (commit:unknown) with GnuTLS(gcrypt)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.9-3
Kernel information (uname -r)
5.4.0-54-generic
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
s3fs command line used, if applicable
/etc/fstab entry, if applicable
Not applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
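As a sketch of how those options fit together (the bucket name and mount point below are taken from the issue text; the passwd_file path is an assumption — dbglevel, curldbg, allow_other, passwd_file, and -f are all documented s3fs options):

```shell
# Hedged sketch: run s3fs in the foreground with verbose debugging.
#   dbglevel=info  -> log level (crit/err/warn/info/dbg)
#   curldbg        -> additionally log libcurl (HTTP) traffic
#   -f             -> stay in the foreground and log to stderr
s3fs mahbucket /mnt/mahbucket -o passwd_file="${HOME}/.passwd-s3fs" \
     -o allow_other -o dbglevel=info -o curldbg -f
```

Running in the foreground this way makes it easy to correlate the container's chown attempt with the corresponding s3fs log lines.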
Details about issue
I have a directory /mnt/mahbucket which is mounted using the following command. User ID 999 is the user ID that the container itself creates internally to run mariadb.
When I start up the container for the first time, it fails.
And the s3fs log is as follows; note the last output, which is an error for a chown command that the container tries to perform.
The second time I start the container with the same command, everything seems to work. The first few lines of output from s3fs show it checking the existing directories it created during the first run, and the container continues on, creating its init files with no issues.
I'm having a hard time understanding why the first run fails and the second run succeeds. Any ideas?
@muzzah commented on GitHub (Jan 1, 2021):
Also, here is the docker file
Note that the volume being mounted is the path inside the mounted s3fs directory that is causing the problem. So docker-compose could be trying to create that directory as root, based on the output I see in the logs.
Then, when the second run kicks off, docker-compose sees the directory already there (since creation succeeded in the first run but the chown failed) and must not try to chown it again, so the second run continues.
This is all guesses but any advice on how to have the first run succeed would be very helpful.
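The create-then-chown sequence suspected above can be mimicked on a local filesystem, where both steps succeed; on the s3fs mount it would be the second step that returns the I/O error:

```shell
# Mimic docker-compose's first-run behaviour on a local path
# (assumption: /tmp is a normal local filesystem, so both steps succeed).
mkdir -p /tmp/demo/nc/db
# chown to our own uid/gid always succeeds unprivileged; on the s3fs
# mount this is the step that reportedly fails with EIO on first run.
chown "$(id -u):$(id -g)" /tmp/demo/nc/db && echo "chown ok"
```

Comparing the exit status of the chown on a local path versus inside the mount is a quick way to confirm that it is the ownership change, not the directory creation, that s3fs rejects.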
@muzzah commented on GitHub (Jan 1, 2021):
Ok, so after some more debugging: if I change the bind mount target on the host from /mnt/mahbucket/nc/db/ to the root of the mounted folder, /mnt/mahbucket, then it seems we skip the chown operation. So my suspicions about docker-compose trying to create and then chown the directory seem to be correct.
What I don't understand is: why does the chown fail on a subdirectory of the mounted S3 bucket that has the allow_other option enabled? Docker-compose seems to create the directory fine, but when it tries to change the ownership, s3fs barfs. Any way to get around this? Is this a bug, since docker-compose is running as root?
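One possible workaround, assuming the container really does run as uid/gid 999 as described earlier: mount the bucket with s3fs's uid/gid options so entries already appear owned by that user, which can make the container's chown a no-op. This is a hedged sketch under those assumptions, not a confirmed fix:

```shell
# Hypothetical remount (uid/gid 999 taken from the issue; allow_other so
# other users can traverse the mount). uid, gid, allow_other, and umask
# are documented s3fs options; fusermount -u unmounts a FUSE filesystem.
fusermount -u /mnt/mahbucket
s3fs mahbucket /mnt/mahbucket -o allow_other -o uid=999 -o gid=999 -o umask=0022
```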