mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #491] Chown mount by non-root user within container #275
Originally created by @bruceharrison1984 on GitHub (Oct 28, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/491
I'm trying to run Postgres within a Docker container, while using s3fs to hold the data directory.
Postgres @ Docker Hub
The docker file creates a new user with:
```
RUN groupadd -r postgres --gid=999 && useradd -r -g postgres --uid=999 postgres
```

Within the init script of the postgres container, it attempts to change the permissions of the data folder to the postgres user, which causes an error:

```
chmod: changing permissions of ‘/var/lib/postgresql/data’: Input/output error
```

Is it not possible to allow this kind of access from a container to an s3fs share? I've tried using `-o uid=999 -o gid=999 -o allow_other -o mp_umask=002`, but I still get the same error. The only solution I can think of would be to re-roll a postgres container that runs with root access instead of a user account.
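For reference, a minimal sketch of the kind of mount being described; the bucket name and credential file path are hypothetical. Note that s3fs stores ownership and mode bits as object metadata, so chmod/chown semantics can differ from a local filesystem:

```shell
# Hypothetical bucket name and passwd file; requires s3fs-fuse and valid credentials.
# uid/gid 999 match the postgres user created inside the container, so the
# mounted files appear owned by that user.
s3fs mybucket /var/lib/postgresql/data \
    -o passwd_file=${HOME}/.passwd-s3fs \
    -o uid=999 -o gid=999 \
    -o allow_other \
    -o mp_umask=002
```

The `allow_other` option is needed so that processes other than the mounting user (here, the container's postgres user) can access the mount at all.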
@tspicer commented on GitHub (Dec 13, 2016):
Did you have any luck with this?
@ggtakec commented on GitHub (Jan 9, 2017):
@bruceharrison1984 I'm sorry for my late reply.
Could you get s3fs's log with "-o dbglevel" (and "-o curldbg" if needed)?
We need to know what error occurred when changing the file mode; that log will help us solve this issue.
Thanks in advance for your assistance.
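The debug options mentioned above can be combined like this (bucket and mount point are hypothetical); adding `-f` keeps s3fs in the foreground so the log goes to the terminal instead of syslog:

```shell
# Hypothetical bucket/mount point. -f runs s3fs in the foreground;
# dbglevel=info raises log verbosity, curldbg adds libcurl request tracing.
s3fs mybucket /mnt/mahbucket -f \
    -o dbglevel=info \
    -o curldbg
```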
@ggtakec commented on GitHub (Mar 30, 2019):
This issue has been open for a long time.
We have released new version 1.86, which fixed several problems (bugs).
Please use the latest version.
I will close this, but if the problem persists, please reopen it or post a new issue.
@muzzah commented on GitHub (Jan 1, 2021):
I want to reopen this as I am having the same issue right now with a mariadb container.
I have a directory /mnt/mahbucket which is mounted using the following command. When I start up the container for the first time, it fails.
The s3fs log is as follows; note the last line of output, which is an error for a chown command that the container tries to run.
The second time I start the container with the same command, everything seems to work. The first few lines of output from s3fs show it checking the existing directories it created on the first run, and the container continues on, creating its init files with no issues.
I'm having a hard time understanding why the first run fails and the second run succeeds. Any ideas?
@muzzah commented on GitHub (Jan 1, 2021):
@ggtakec Just to also add, the `-o uid=999` is the user ID for the user the mariadb container creates internally. Without this, the directory is mounted as owned by root, and mariadb fails to start at all with a similar chown problem.
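A sketch of the pairing being described, with hypothetical names: mount the bucket with the container user's uid, then bind the mount point into the container as its data directory. The environment variable name follows the current mariadb image documentation and is an assumption here:

```shell
# Hypothetical bucket and container names. uid/gid 999 match the user the
# mariadb image creates internally, so the data files appear owned by it.
s3fs mahbucket /mnt/mahbucket -o uid=999 -o gid=999 -o allow_other

# Bind the s3fs mount point into the container as the mariadb data directory.
docker run -d --name mariadb \
    -v /mnt/mahbucket:/var/lib/mysql \
    -e MARIADB_ROOT_PASSWORD=example \
    mariadb
```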
@muzzah commented on GitHub (Jan 1, 2021):
Ended up creating a separate issue #1510 with more information about my setup and config