[GH-ISSUE #564] Multiple folder in s3 #319

Closed
opened 2026-03-04 01:44:22 +03:00 by kerem · 7 comments
Owner

Originally created by @orhankutlu on GitHub (Apr 14, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/564

This is not actually a bug report; I couldn't find out how to solve my problem. We have multiple clients, and they drop reports/files onto our single SFTP server. Every client has their own folder on the server and cannot access anyone else's. We also have a single S3 bucket for client uploads, and each client has a folder inside it. Let's say we have two clients, client-1 and client-2.

We have two directories on the SFTP server: /home/client-1/uploads and /home/client-2/uploads

and two folders in S3 (single bucket): client-uploads-bucket/client-1 and client-uploads-bucket/client-2

What I would like is this: when a client uploads a file to the SFTP server, it should end up in
that client's own folder in S3. E.g., when client-1 uploads a file to /home/client-1/uploads on the SFTP server, it should become available under client-uploads-bucket/client-1 in S3.

Is this possible with s3fs? If yes, how can I achieve it? Should I run the s3fs command multiple times, once per client, or is there another way?

Thanks!

kerem closed this issue 2026-03-04 01:44:22 +03:00

@ngbranitsky commented on GitHub (Apr 14, 2017):

Get rid of the sftp server.
Give each client IAM credentials to their own S3 Bucket.
Tell them to use s3fs on Linux or CloudBerry etc. on Windows and write directly to AWS S3.
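If you did go the per-client-credentials route, the usual approach is a prefix-scoped IAM policy rather than one bucket per client. A minimal sketch, assuming a shared bucket with one folder per client (the bucket and client names below are hypothetical examples, not anything from this thread), that generates such a policy document:

```shell
#!/bin/sh
# Sketch: emit an IAM policy that restricts one client to its own prefix
# in a shared bucket. Bucket and client names are hypothetical examples.
# Object actions are scoped to the client's prefix; ListBucket gets its
# own statement because the s3:prefix condition key only applies there.
make_client_policy() {
  bucket="$1"   # e.g. client-uploads-bucket
  client="$2"   # e.g. client-1
  cat <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::${bucket}/${client}/*"
    },
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::${bucket}",
      "Condition": {"StringLike": {"s3:prefix": ["${client}/*"]}}
    }
  ]
}
EOF
}

make_client_policy client-uploads-bucket client-1
```

The emitted JSON could then be attached to the client's IAM user (for example via `aws iam put-user-policy`).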

Content by Norman. Spelling by iPhone.



@orhankutlu commented on GitHub (Apr 14, 2017):

@ngbranitsky I wish it were possible. Some of our clients are really old-fashioned. At first we offered them IAM credentials, but they asked for an SFTP server and an IP address (not even a domain name) for their whitelist. They don't want to do any extra work at all. Unfortunately, it is the only way we can work with them.


@orhankutlu commented on GitHub (Apr 17, 2017):

@ggtakec Any comment?


@sqlbot commented on GitHub (Apr 17, 2017):

Let's say your bucket is mounted at /srv/s3fs/example-bucket/.

Just move each user's home directory into the bucket.

$ sudo mv -v /home/client-1/ /srv/s3fs/example-bucket/client-1/

Change the system's config for the user (assuming your sftp server uses "system" users, and doesn't have its own database) so that the user's home directory is in the bucket rather than on the hard drive.

$ sudo usermod -d /srv/s3fs/example-bucket/client-1

Don't blindly follow advice from the Internet. Test this on one user before applying it globally, of course.

I run an sftp server running ProFTPd, backed by s3fs, and this is fundamentally how I do it. One significant difference in my setup is that my ProFTPd users are not actual system users in /etc/passwd, since ProFTPd can use its own password file. I moved my real SSH server to another port, so that ProFTPd can have port 22. I have a user creation script that defines them initially with their home directory inside the s3fs mount point, and creates all the default directory structures and sets the permissions.

You only need to run one instance of s3fs for this, with -o allow_other.

Additional suggestions of configuration options for s3fs that work well with the SFTP server can be found in an answer I wrote at http://stackoverflow.com/a/23946418/1695906.
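The steps in this comment can be sketched end to end. The following is a dry-run version: the mount point, bucket name, and client list are hypothetical, and each command is printed with echo rather than executed, so nothing here actually invokes s3fs or modifies system users:

```shell
#!/bin/sh
# Dry-run sketch of the single-mount setup described above.
# MNT and BUCKET are hypothetical; the "echo" prefix prints each
# command instead of running it. Drop the echoes (and add sudo) to
# apply for real, after testing on one user first.
MNT=/srv/s3fs/example-bucket
BUCKET=example-bucket

# 1. Mount the bucket once, shared across all SFTP users.
echo s3fs "$BUCKET" "$MNT" -o allow_other

# 2. Relocate each client's home directory into the mount and point
#    the account's home directory at the new location.
for client in client-1 client-2; do
  echo mv -v "/home/${client}/" "${MNT}/${client}/"
  echo usermod -d "${MNT}/${client}" "$client"
done
```

The one-mount-plus-usermod shape is the point here: a single s3fs process with -o allow_other serves every client, and the per-client separation comes entirely from ordinary Unix home directories and permissions.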


@sqlbot commented on GitHub (Apr 17, 2017):

Correction to the previous comment. The username is missing from the usermod line.

$ sudo usermod -d /srv/s3fs/example-bucket/client-1 client-1

Depending on the SFTP server's tolerance for sanely following symlinks, you could alternately symlink the user's new home directory in the bucket back into /home/.
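The symlink variant might look like the sketch below. The paths are illustrative; temporary directories stand in for the real s3fs mount and /home so the sketch can run anywhere:

```shell
#!/bin/sh
# Illustrative symlink setup: keep the real data under the (simulated)
# s3fs mount and point the home directory at it. In a real deployment
# the two sides would be /srv/s3fs/example-bucket/client-1 and
# /home/client-1.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/mount/client-1/uploads"   # stands in for the s3fs mount
mkdir -p "$tmp/home"                     # stands in for /home

# Symlink the relocated home directory back into /home/.
ln -s "$tmp/mount/client-1" "$tmp/home/client-1"

# A file written via the home path lands under the mount path.
echo report > "$tmp/home/client-1/uploads/report.txt"
cat "$tmp/mount/client-1/uploads/report.txt"   # prints "report"
```

Whether this works in production depends, as noted above, on the SFTP server following symlinks (and on any chroot configuration, which typically does not traverse them).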


@ggtakec commented on GitHub (Apr 23, 2017):

@orhankutlu
I recommend that you work according to @sqlbot's comments.

You can see the client-1 and client-2 directories under the directory mounted by s3fs, and you can assign those directories as the home directories that the SFTP server uses.

Please note that you should specify some s3fs options to improve performance.
Also, make sure files are written (uploaded) to the client-1/client-2 directories, and everything below them, through s3fs.
If you do not write through s3fs, you may run into problems with file permissions.
If you need to verify this, you can check the file permissions below the mount point directly.
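As an illustration of the kind of options meant here (the values are examples to adapt, not recommendations from this thread; see `man s3fs` for the full list), an /etc/fstab entry for such a shared mount might look like:

```
# Example fstab sketch; bucket name, mount point, and cache path are
# assumptions. allow_other lets non-root SFTP users through the mount,
# use_cache enables a local file cache, and umask/mp_umask control the
# permissions seen through the mount.
client-uploads-bucket /srv/s3fs/client-uploads-bucket fuse.s3fs _netdev,allow_other,use_cache=/tmp/s3fs,umask=0022,mp_umask=0022 0 0
```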

@sqlbot Thanks for your kindness.

Regards,


@orhankutlu commented on GitHub (Apr 23, 2017):

@sqlbot Thanks a lot.
