[GH-ISSUE #333] Permission denied trying to access file in bucket #174

Closed
opened 2026-03-04 01:42:52 +03:00 by kerem · 16 comments
Owner

Originally created by @martyychang on GitHub (Jan 13, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/333

After mounting the bucket with the following command, I'm getting "permission denied" errors when trying to access files in the bucket. For example:

$ cp mount/point/anyfile.txt .
cp: cannot open ‘mount/point/anyfile.txt’ for reading: Permission denied

If I look at the permissions on a file through the mount point, I see:

$ ls -l mount/point/anyfile.txt
---------- 1 root root 5796 Nov  6 08:47 mount/point/anyfile.txt

What might be causing this symptom? The command I'm using to mount S3 is shown below.

$ s3fs -o use_path_request_style -o url=http://s3.amazonaws.com my.bucket:/myfolder mount/point/
kerem closed this issue 2026-03-04 01:42:52 +03:00

@ggtakec commented on GitHub (Jan 16, 2016):

@martyychang
Probably mount/point/anyfile.txt was uploaded by another S3 tool (s3cmd, the S3 console, etc.).

s3fs needs the x-amz-meta-* (mode, uid, gid, mtime) HTTP headers to provide filesystem-like permissions.
The "----------" permissions you saw mean that the object (anyfile.txt) has no x-amz-meta-mode header, or that no object exists for the "point" directory.

To solve the problem, you need to set the HTTP headers that s3fs requires.
You can re-upload the objects with custom headers using another tool, or set the attributes through s3fs (e.g. chmod 777 mount/point; chmod 666 mount/point/anyfile.txt).

s3fs supports objects uploaded by other tools as much as possible, but for now this issue can only be resolved in this way.

Regards,
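To illustrate the point above, here is a small, hypothetical Python helper (not part of s3fs) that, given an object's x-amz-meta-* user metadata as a plain dict (e.g. from a HEAD request), reports which of the keys s3fs consults for permissions are absent:

```python
# Hypothetical helper (not part of s3fs): list the x-amz-meta-* keys
# s3fs relies on for permissions that are missing from an object.
S3FS_META_KEYS = ("mode", "uid", "gid", "mtime")

def missing_s3fs_metadata(metadata):
    """Return the permission-related metadata keys absent from `metadata`."""
    return [k for k in S3FS_META_KEYS if k not in metadata]

# Objects uploaded by s3cmd or the S3 console typically carry none:
print(missing_s3fs_metadata({}))
# An object written through s3fs should report nothing missing:
print(missing_s3fs_metadata({"mode": "33204", "uid": "1000",
                             "gid": "1000", "mtime": "1452600000"}))
```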

<!-- gh-comment-id:172144296 -->

@timduly4 commented on GitHub (Aug 6, 2016):

Hi there,

I was having trouble with these permissions, so I thought I would give a concrete example of uploading a file with boto3 (important: version 1.4; see https://github.com/boto/boto3/issues/389#issuecomment-237610668):

import boto3  # requires boto3 >= 1.4

s3 = boto3.resource('s3')

s3.meta.client.upload_file(
    'file.txt', 'my_bucket', 's3-file.txt',
    ExtraArgs={
        'Metadata': {
            'mode': '33204',
            'uid': '1000',
            'gid': '1000',
        },
    },
)

The mode, uid, and gid numbers (33204, 1000, and 1000, respectively) are a bit mysterious to me. I was able to discern that 33204 in decimal is 100664 in octal (664 permissions plus the regular-file type bits), but I can't explain '1000'.
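For what it's worth, the mode value can be decoded with Python's standard stat module: 33204 is the decimal form of octal 100664, i.e. the regular-file type bit (S_IFREG) combined with 664 permissions:

```python
import stat

# 33204 decimal == 0o100664: a regular file with rw-rw-r-- (664)
mode = stat.S_IFREG | 0o664
print(mode)                     # 33204
print(oct(mode))                # 0o100664
print(oct(stat.S_IMODE(mode)))  # 0o664 -- just the permission bits
```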

I was able to obtain these numbers by reverse engineering: I ran chmod and chown on the s3fs-mounted file to my liking (e.g., 664 and ubuntu), then looked at the metadata via:

import boto3
from pprint import pprint

s3 = boto3.resource('s3')
object = s3.Object('my-bucket', 'my-key')
pprint(object.metadata)

Output:

{'gid': '1000', 'mode': '33204', 'uid': '1000'}

Hope this helps!

<!-- gh-comment-id:238031099 -->

@skliarie commented on GitHub (Sep 7, 2016):

There is a way to replace metadata without re-uploading the whole object:
http://stackoverflow.com/questions/269840/is-it-possible-to-change-headers-on-an-s3-object-without-downloading-the-entire

BTW, the chmod suggestion above does not work for files bigger than 5 GB, and even below that limit it takes time proportional to the file size.
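With boto3, the usual trick for this is an in-place copy with MetadataDirective='REPLACE'. A sketch, with placeholder bucket/key names and metadata values; the actual copy_object call needs AWS credentials, so it is left commented out:

```python
# Sketch of an in-place metadata rewrite via S3's copy-to-itself trick.
# Bucket, key, and metadata values below are placeholders.
def replace_metadata_kwargs(bucket, key, metadata):
    """Build boto3 copy_object arguments that rewrite an object's user
    metadata without re-uploading it (subject to the 5 GB limit of a
    single-request copy)."""
    return {
        "Bucket": bucket,
        "Key": key,
        "CopySource": {"Bucket": bucket, "Key": key},
        "Metadata": metadata,
        "MetadataDirective": "REPLACE",  # replace, rather than copy, old metadata
    }

kwargs = replace_metadata_kwargs(
    "my-bucket", "myfolder/anyfile.txt",
    {"mode": "33204", "uid": "1000", "gid": "1000"},
)
# import boto3
# boto3.client("s3").copy_object(**kwargs)  # requires AWS credentials
print(kwargs["MetadataDirective"])
```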

<!-- gh-comment-id:245313124 -->

@a2f0 commented on GitHub (Dec 11, 2016):

@ggtakec I can't get the chmod commands to work on OS X (for S3 objects), even with the bucket mounted as the AWS superuser. I get Operation not permitted. I've tried chmodding the mount point to 777 and re-mounting; still no dice. I'm mounting with:

s3fs bucket /mnt/bucket

I'd be interested in how to get this to work.

<!-- gh-comment-id:266290373 -->

@wenhongqiang commented on GitHub (Dec 26, 2016):

Has anyone figured out how to set x-amz-meta-* headers on files/directories via s3cmd? I am stuck on this.
The alternative solution, using chmod, doesn't work on new directories, so it's not a better choice.

<!-- gh-comment-id:269155708 -->

@ggtakec commented on GitHub (Jan 7, 2017):

@wenhongqiang
If you are willing to ignore existing permissions, you can specify the umask, mp_umask, gid, and uid options for s3fs.
With these options, s3fs applies that permission to all objects (files and directories) under the mount point.
Please see the man pages for s3fs and mount.

Thanks for your assistance.

<!-- gh-comment-id:271055722 -->

@wenhongqiang commented on GitHub (Jan 9, 2017):

@ggtakec
Thanks, will try it.

<!-- gh-comment-id:271199757 -->

@pelamfi commented on GitHub (Mar 26, 2017):

For those unfamiliar with how the options mentioned by @ggtakec work, here is an example: -o uid=1000,umask=077,gid=1000,... (replace 1000 with your own numeric user id).

<!-- gh-comment-id:289288610 -->

@pifou42 commented on GitHub (Apr 4, 2017):

It's working, at last! Thanks, dudes.
I used the following command line (change the bucket and mount point names, of course):
s3fs -o allow_other,uid=33,gid=33,umask=227,use_cache=/root/cache MY-BUCKET-NAME /mnt/MY-MOUNT-POINT

  • Where 33 is both my apache user and group.
  • Note that I set the umask to 227, which grants the complementary permissions, i.e. 550.
  • Btw, don't forget to create the cache directory if you intend to use one.
    Also, while most of the examples I found used a cache dir, no one recommended emptying it occasionally. I'll probably add a nightly cron job to remove "old" files soon, since I don't want the cache to fill up my whole HDD.
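The umask arithmetic in the bullets above can be checked in a couple of lines of Python: a umask clears bits, so the effective permissions are 777 with the umask bits removed:

```python
# An s3fs umask clears bits from full (777) permissions.
umask = 0o227
effective = 0o777 & ~umask
print(oct(effective))  # 0o550 -> r-xr-x--- for the uid/gid above
```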

Here are two other important things I had to figure out by myself:

  1. For mounting via fstab, the "fuse _netdev" options are necessary. Otherwise it doesn't wait for the network connection before attempting to mount, and fails miserably.
    Here's what I used :
    s3fs#MY-BUCKET-NAME /mnt/MY-MOUNT-POINT fuse _netdev,allow_other,umask=227,uid=33,gid=33,use_cache=/root/cache 0 0
  2. Secondly, if your system includes locate and/or mlocate (as my Ubuntu 14.04 does), you may want to add an exception so that it does NOT scan your bucket.
    I had to modify both /etc/updatedb.conf and /etc/cron.daily/locate, adding " /mnt/my-bucket-name" to PRUNEPATHS and " fuse.s3fs" to PRUNEFS.
    I suppose adding fuse.s3fs alone should be enough, but... no time to take risks right now :)

I'm also wondering if the allow_other option is still necessary, but I suppose it is. Gotta check this later.

<!-- gh-comment-id:291541457 -->

@gaul commented on GitHub (Apr 4, 2017):

> Also, while most the examples I found were using a cache dir, no one recommended to empty it sometimes. I'll probably put a nightly cron to remove "old" files very soon, since I don't want my HDD to have my whole HDD filled.

#554 should address this.

<!-- gh-comment-id:291580345 -->

@pkerpedjiev commented on GitHub (Aug 30, 2017):

In case anybody is wondering, you can get your uid and gid by running the id command:

me@computer:~/tmp [develop|!]$ id
uid=502(me) gid=20(staff) groups=20(staff),12(everyone),61(localaccounts),79(_appserverusr),80(admin),81(_appserveradm),98(_lpadmin),399(com.apple.access_ssh),702(com.apple.sharepoint.group.2),701(com.apple.sharepoint.group.1),33(_appstore),100(_lpoperator),204(_developer),395(com.apple.access_ftp),398(com.apple.access_screensharing)
<!-- gh-comment-id:325991914 -->

@javilumbrales commented on GitHub (Nov 16, 2017):

In case someone lands here using a Mac with the same issue, I managed to read folders and files with the command below:

s3fs BUCKET_NAME /path/to/mount/ -o passwd_file=/path/to/credentials,allow_other,uid=`id -u`,umask=0077,mp_umask=0077,use_cache=/path/to/cache/ -d -d -f
<!-- gh-comment-id:344773823 -->

@myfitment commented on GitHub (Jan 29, 2019):

I'm trying to set this up in a chrooted environment. All subdirectories are 755 and they can't be changed with chmod. What is the command to alter subdirectories to 777?

<!-- gh-comment-id:458608138 -->

@ggtakec commented on GitHub (Mar 30, 2019):

We kept this issue open for a long time.

@myfitment Probably you can use the umask and mp_umask options for s3fs.
If you need more information or want to report a bug, please open a new issue. (This issue will now be closed.)

<!-- gh-comment-id:478214797 -->

@Signant commented on GitHub (Nov 8, 2021):

Hi,
We are using s3fs v1.82 and are facing the same read/write permission issue. What can be done to resolve this? Are there any bugs in this version?

<!-- gh-comment-id:963020687 -->

@gaul commented on GitHub (Nov 8, 2021):

@Signant please stop spamming random issues with the same comments. Try upgrading to the latest s3fs. I will lock this issue now.

<!-- gh-comment-id:963041691 -->