[GH-ISSUE #218] chown results in input/output error #121

Closed
opened 2026-03-04 01:42:18 +03:00 by kerem · 16 comments

Originally created by @kylegoch on GitHub (Aug 4, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/218

We are trying to chown and change the user/owner on the directory that the bucket is mounted at. We are basically having the same issue that is outlined here:
https://code.google.com/p/s3fs/issues/detail?id=192

We are running s3fs version 1.79.

What happens is that we issue:

```
chown <user>:<group> /mount/location
```

and get an input/output error.

We tried the allow_other option and that worked, but then everyone could access it. We would prefer that this one user alone have control over the folder, rather than root or everyone.

Thanks in advance.

kerem closed this issue 2026-03-04 01:42:19 +03:00

@gaul commented on GitHub (Aug 7, 2015):

Could you share the debug output via:

```
s3fs -d -d -f -o f2 -o curldbg
```

@rotten commented on GitHub (Nov 25, 2015):

Since I'm seeing the same issue (and it is a worrying security concern), I thought I'd try the debug statement you mention above. When I run it, I get this error:

```
# s3fs -d -d -f -o f2 curldbg
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [CRT] to [INF]
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [INF] to [DBG]
s3fs: missing MOUNTPOINT argument.
Usage: s3fs BUCKET:[PATH] MOUNTPOINT [OPTION]...
```

I have the file system mounted through fstab on Ubuntu 14.04 like this:

```
s3fs#mybucket /my/mount/point fuse noatime,nobootwait,allow_other,use_cache=/tmp/s3cache     0  2
```

The mount looks like this (with "ls -l"):

```
drwxrwxrwx  1 root     root         0 Jan  1  1970 /my/mount/point
```

@ggtakec commented on GitHub (Nov 29, 2015):

@rotten, please try the following command:

```
s3fs <your bucket> /my/mount/point -o allow_other,use_cache=/tmp/s3cache,dbglevel=info -f
```

This command line is the same as what @andrewgaul suggested.
You should then be able to see the detailed information that s3fs outputs.
Thanks in advance for your help.


@rotten commented on GitHub (Nov 30, 2015):

Here is what I see when I mount using dbglevel=debug and then (in another window) try to chown the mountpoint. After that I run umount from the other window.

```
root@myserver# s3fs mybucket /my/mountpoint -o allow_other,use_cache=/tmp/s3cache,dbglevel=debug -f
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [CRT] to [DBG]
[INF]     s3fs.cpp:set_moutpoint_attribute(4091): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3297): init v1.79(commit:5af6d4b) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3653): check services.
[INF]       curl.cpp:CheckBucket(2647): check a bucket.
[INF]       curl.cpp:prepare_url(4140): URL is http://s3.amazonaws.com/mybucket/
[INF]       curl.cpp:prepare_url(4172): URL changed is http://mybucket.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2069): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(99): url is http://s3.amazonaws.com
[DBG] curl.cpp:RequestPerform(1726): connecting to URL http://mybucket.s3.amazonaws.com/
[INF]       curl.cpp:RequestPerform(1755): HTTP response code 400 was returned, returing EIO.
[DBG] curl.cpp:RequestPerform(1756): Body Text: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'</Message><Region>us-west-2</Region><RequestId>xxxxxxxxxxxx</RequestId><HostId>codl7hxxxxxxxzsUK+0=</HostId></Error>
[ERR] curl.cpp:CheckBucket(2685): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'</Message><Region>us-west-2</Region><RequestId>xxxxxxxxxxxxxx</RequestId><HostId>codl7hxxxxxxxxUK+0=</HostId></Error>
[CRT] s3fs.cpp:s3fs_check_service(3674): Could not connect wrong region us-east-1, so retry to connect region us-west-2.
[INF]       curl.cpp:CheckBucket(2647): check a bucket.
[INF]       curl.cpp:prepare_url(4140): URL is http://s3-us-west-2.amazonaws.com/mybucket/
[INF]       curl.cpp:prepare_url(4172): URL changed is http://mybucket.s3-us-west-2.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2069): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(99): url is http://s3-us-west-2.amazonaws.com
[DBG] curl.cpp:RequestPerform(1726): connecting to URL http://mybucket.s3-us-west-2.amazonaws.com/
[INF]       curl.cpp:RequestPerform(1743): HTTP response code 200
[INF] s3fs.cpp:s3fs_getattr(797): [path=/]
[DBG] s3fs.cpp:check_parent_object_access(654): [path=/]
[DBG] s3fs.cpp:check_object_access(548): [path=/]
[DBG] s3fs.cpp:get_object_attribute(402): [path=/]
[DBG] fdcache.cpp:ExistOpen(1851): [path=/][fd=-1]
[DBG] fdcache.cpp:Open(1800): [path=/][size=-1][time=-1]

[DBG] s3fs.cpp:s3fs_getattr(821): [path=/] uid=0, gid=0, mode=40777
[INF] s3fs.cpp:s3fs_chown(1622): [path=/][uid=1234][gid=1234]
[ERR] s3fs.cpp:s3fs_chown(1625): Could not change owner for maount point.

[INF] s3fs.cpp:s3fs_destroy(3340): destroy
```

Could it have something to do with the region?


@ggtakec commented on GitHub (Dec 13, 2015):

@rotten Sorry for the late reply, and thank you for the debug log.
The log shows that s3fs failed to change the owner of the mount point.
s3fs cannot change the owner/group of the mount point itself. If you need to change those, you have to run s3fs with the uid/gid options.

To make progress on this issue, could you clarify the following:

  1. Who owns /my/mountpoint, and what are its permissions?
  2. Could you run s3fs with the uid/gid options?
  3. Is this log output only from the startup of s3fs? (Did you run the chown command manually?)

If you can, please set the owner/permissions of the mount point to allow access by uid 1234 / gid 1234.
Thanks in advance for your help.
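For question 1 above, the mount point's owner and permissions can be checked with stat. A minimal sketch, using a temporary directory as a stand-in for the real mount point (GNU coreutils `stat` assumed; the path and mode are examples only):

```shell
# Inspect owner and permissions the way you would for /my/mountpoint.
# A temporary directory stands in for the real mount point here.
mp=$(mktemp -d)
chmod 750 "$mp"
stat -c 'uid=%u gid=%g mode=%a' "$mp"
rmdir "$mp"
```

Running the same `stat` invocation on the actual mount point before and after mounting shows whether the uid/gid options took effect.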


@rockuw commented on GitHub (Dec 22, 2015):

@ggtakec
Hi, I tried running s3fs with the uid/gid options and it didn't work.

```
ubuntu@ip-172-31-29-108:~/s3fs-fuse$ ll /tmp/ | grep s3
drwxrwxr-x  2 ubuntu ubuntu   4096 Dec 22 07:53 s3/
ubuntu@ip-172-31-29-108:~/s3fs-fuse$ ./src/s3fs bucket-name /tmp/s3 -o passwd_file=~/.s3.key -ouid=1001,gid=1001
ubuntu@ip-172-31-29-108:~/s3fs-fuse$ ll /tmp/ | grep s3
drwx------  1 s3fs   s3fs        0 Jan  1  1970 s3/
ubuntu@ip-172-31-29-108:~/s3fs-fuse$ sudo su s3fs
s3fs@ip-172-31-29-108:/home/ubuntu/s3fs-fuse$ ls -l /tmp/ | grep s3
ls: cannot access /tmp/s3: Permission denied
d????????? ? ?      ?           ?            ? s3
ubuntu@ip-172-31-29-108:~/s3fs-fuse$ sudo ls -l /tmp/ | grep s3
ls: cannot access /tmp/s3: Permission denied
d????????? ? ?      ?           ?            ? s3
```

Both user s3fs and root cannot stat /tmp/s3. What's the problem here?


@ggtakec commented on GitHub (Jan 9, 2016):

@rockuw
Try running s3fs with the allow_other option, for example:

```
./src/s3fs bucket-name /tmp/s3 -o passwd_file=~/.s3.key -ouid=1001,gid=1001,allow_other
```

Regards,


@rockuw commented on GitHub (Jan 11, 2016):

@ggtakec
What's the rationale here? With `uid=1001,gid=1001`, user s3fs is the *owner*, not *other*.
And what if we want to allow access only to s3fs and not all other users?


@ggtakec commented on GitHub (Jan 16, 2016):

@rockuw
If the ubuntu user mounts with s3fs (FUSE) without the allow_other option, other users are not allowed to access the mount point.
So we need to use allow_other, but that sets 0777 permissions on the mount point.

If you use allow_other and want to change the permissions, you can add the mp_umask option on the command line.
This option sets the mount point permissions like a umask.

For this issue, you could run the following command:

```
./src/s3fs bucket-name /tmp/s3 -o passwd_file=~/.s3.key -ouid=1001,gid=1001,allow_other,mp_umask=002
```

Regards,
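The mp_umask arithmetic described above can be sketched with plain shell arithmetic (a minimal illustration of the umask masking, not s3fs itself):

```shell
# mp_umask masks bits out of the default 0777 mount-point mode,
# just like a shell umask masks the default file-creation mode.
printf 'mp_umask=022 -> mode %o\n' $(( 0777 & ~0022 ))   # 755: rwxr-xr-x
printf 'mp_umask=002 -> mode %o\n' $(( 0777 & ~0002 ))   # 775: rwxrwxr-x
```

So `mp_umask=002` keeps the mount point group-writable while removing write access for others, which matches the command above.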


@atrepca commented on GitHub (Aug 12, 2016):

Had the same issue on Ubuntu 16.04 LTS and s3fs-fuse version `1.79+git90-g8f11507-2`. @ggtakec's suggestion worked: mounting with `allow_other,mp_umask=022`.


@Braza commented on GitHub (Nov 12, 2016):

`mp_umask=022` worked for me too on Amazon Linux.


@ggtakec commented on GitHub (Jan 7, 2017):

I'm closing this issue.
If the problem persists, please reopen this issue or open a new one.
Thanks in advance for your help.


@faisbaig commented on GitHub (Jul 5, 2018):

The issue is only resolved when you use an fstab entry like:

```
s3bucketname /localmountpoint fuse.s3fs _netdev,uid=1002,gid=1002,allow_other,mp_umask=022,iam_role=auto 0 0
```

If you do not use the uid and gid options, mp_umask has no effect.


@yuvaraj143 commented on GitHub (Sep 11, 2018):

For Ubuntu instances, this works:

```
sudo s3fs bucket-name /tmp/s3 -o passwd_file=~/.s3.key -ouid=1001,gid=1001,allow_other,mp_umask=002
```


@thanhn1012 commented on GitHub (Oct 27, 2020):

Thank you so much!


@ricardoteix commented on GitHub (Jul 20, 2023):

I had the same problem and solved it with @yuvaraj143's example. I only needed to replace 1001 with 33, which is the uid of the www-data user on my machine (found in the /etc/passwd file).

Thank you all.
