[GH-ISSUE #1455] Cannot solve nested directory permissions issue in SFTP chroot context

Open
opened 2026-03-04 01:48:35 +03:00 by kerem · 3 comments

Originally created by @machty on GitHub (Oct 16, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1455

### Additional Information

#### Version of s3fs being used (s3fs --version)

1.87

#### Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.4

#### Kernel information (uname -r)

4.9.75-25.55.amzn1.x86_64

#### GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Amazon Linux AMI"
VERSION="2017.09"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2017.09"
PRETTY_NAME="Amazon Linux AMI 2017.09"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2017.09:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"

#### s3fs command line used, if applicable

n/a

#### /etc/fstab entry, if applicable

```
s3fs#mybucket:/userfolder/ /mnt/userfolder fuse _netdev,nonempty,retries=10,allow_other,mp_umask=0022 0 0
```

#### s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

n/a

### Details about issue

I'm posting this as an issue as an absolute last resort, as I have wasted so many hours on this and I cannot get to the bottom of it. Here is what I'm trying to do:

Users log in over SFTP, and their home folder is a mounted s3fs folder. This is working via `mp_umask=0022`, which sets the permissions to the strict values required for chroot to work.
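The permission arithmetic behind `mp_umask=0022` can be sketched like this (a minimal sketch; it assumes the mask is applied to a default of 0777, and the 0755 result is what OpenSSH's `ChrootDirectory` requires: a root-owned target that is not group- or other-writable):

```ruby
# mp_umask removes permission bits from the mount point's mode.
# Assuming a 0777 starting point, mp_umask=0022 masks off group/other
# write, leaving 0755 (rwxr-xr-x).
default_mode = 0o777
mp_umask     = 0o022
effective    = default_mode & ~mp_umask

format("%o", effective)  # => "755"
```

This is why the mount point itself passes sshd's chroot check while still being readable by the SFTP user.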

I have a process that uploads files directly to S3 via the Ruby S3 API. I am passing meta mode/uid/gid to my `put_object` request, which correctly sets the permissions on the uploaded object, but here's the crucial detail: if I'm uploading to a key whose subdirectories don't exist, e.g. "foo/bar/baz/new_file_with_correct_permissions.txt", then the subfolders will belong to `root:root`. Since [this commit](https://github.com/s3fs-fuse/s3fs-fuse/pull/894/files), the intermediate directories are given basic permissions, but part of my use case is that the SFTP user needs to be able to delete files from these folders, and those directories are missing the write permission required to delete the files.

I don't see what options I have: if I try to set uid/gid in the s3fs options, that will prevent chroot from working (which requires `root:root` ownership on the mount point directory).

Another option is that I could try to pre-create these directories via the Ruby API (or even just the command-line API), but for the life of me I cannot figure out how to upload JUST a directory to S3 with the correct meta permissions set. I see some documentation referencing uploading a key ending in a slash, e.g. `name_of_dir/`, but I can never get s3fs to recognize this as a directory. I appreciate any help I can get.


@Zahariel commented on GitHub (Jul 27, 2022):

Did this ever get addressed? Completely by coincidence, I'm doing basically the exact same thing, and running into the exact same issue: running an FTP server backed by s3fs using chroot and `mp_umask=0022`, and injecting objects into it from an outside process (a Java-based Lambda). I can forge the `x-amz-meta-mode` value on the object itself easily enough, and that works in that it gives me the permissions on the file that I expected, but this doesn't affect the intermediate directories. Any advice?
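For reference, "forging" `x-amz-meta-mode` means writing the full decimal `st_mode` value, including the file-type bits, not just the octal permission digits. A minimal sketch of that encoding (my understanding of how s3fs interprets the header; verify against your s3fs version):

```ruby
# x-amz-meta-mode holds a decimal st_mode value. The file-type bits
# (S_IFREG for files, S_IFDIR for directories) are OR-ed with the
# permission bits before converting to a decimal string.
S_IFREG = 0o100000   # regular file
S_IFDIR = 0o040000   # directory

def meta_mode(type_bits, perm_bits)
  (type_bits | perm_bits).to_s   # decimal string for the metadata value
end

meta_mode(S_IFREG, 0o644)  # => "33188"  (a 0644 file)
meta_mode(S_IFDIR, 0o755)  # => "16877"  (a 0755 directory)
```

Pairing this with `x-amz-meta-uid` / `x-amz-meta-gid` (also decimal strings) is what makes an externally uploaded object show the intended ownership through the mount.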


@machty commented on GitHub (Jul 27, 2022):

@Zahariel I ended up just having all of my upload scripts ensure the enclosing directories exist (like a `mkdir -p`), and I create the directories with `content_type: "application/x-directory"`. I couldn't find any other "automatic" solution.
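That `mkdir -p`-style workaround can be sketched roughly as follows. The key enumeration is the real logic; the `put_object` call is shown only as a commented-out illustration (hypothetical bucket name and uid/gid; assumes the aws-sdk-s3 gem and that s3fs treats zero-byte objects with `Content-Type: application/x-directory` as directories, per this thread):

```ruby
# Enumerate the "directory" keys enclosing an object key, outermost
# first -- e.g. "foo/bar/baz/file.txt" needs the directory objects
# "foo/", "foo/bar/", and "foo/bar/baz/".
def parent_dir_keys(key)
  parts = key.split("/")[0...-1]                       # drop the file name
  (1..parts.length).map { |n| parts[0, n].join("/") + "/" }
end

# Hypothetical pre-creation step before uploading the file itself,
# where `client` is an Aws::S3::Client:
#
# parent_dir_keys("foo/bar/baz/file.txt").each do |dir_key|
#   client.put_object(
#     bucket:       "mybucket",
#     key:          dir_key,                  # trailing slash, empty body
#     content_type: "application/x-directory",
#     metadata:     { "mode" => "16877",      # decimal S_IFDIR | 0755
#                     "uid"  => "1000",
#                     "gid"  => "1000" }
#   )
# end
```

Creating the directory objects outermost-first mirrors `mkdir -p`, so every level exists with the intended mode/uid/gid before the file lands.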


@Zahariel commented on GitHub (Jul 27, 2022):

The sad thing is that if creating these intermediate directories respected setgid, that would have been enough for us; the ultimate parent has setgid set, is in the right group, and the default 750 mode would have allowed what I needed, because I only really needed the FTP user to be able to read from the injected files. But... that doesn't work either. I think I'm just going to end up restructuring how I inject files to not need intermediate directories.

But it's nice to know that `Content-Type: "application/x-directory"` (and I assume also forging the `x-amz-meta-mode` metadata value) is what it takes to upload things that s3fs-fuse thinks are directories. Thanks!
