mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #564] Multiple folder in s3 #319
Originally created by @orhankutlu on GitHub (Apr 14, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/564
This is not really an issue; I just couldn't figure out how to solve my problem. We have multiple clients who drop reports/files onto our single SFTP server. Each client has their own folder on the server, and they cannot access each other's. We also have a single S3 bucket for client uploads, and each client has a folder inside it. Let's say we have 2 clients, client-1 and client-2. We have 2 directories on the SFTP server, /home/client-1/uploads and /home/client-2/uploads, and 2 folders in S3 (single bucket): client-uploads-bucket/client-1 and client-uploads-bucket/client-2.
What I would like is this: when a client uploads a file to the SFTP server, I would like to have it in that client's own folder in S3. E.g. when client-1 uploads a file to the /home/client-1/uploads directory on the SFTP server, it should become available under client-uploads-bucket/client-1 in S3. Is this possible with s3fs? If yes, how can I achieve it? Should I run the s3fs command multiple times, once for each client, or what?
Thanks!
@ngbranitsky commented on GitHub (Apr 14, 2017):
Get rid of the sftp server.
Give each client IAM credentials to their own S3 Bucket.
Tell them to use s3fs on Linux or CloudBerry etc. on Windows and write directly to AWS S3.
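To make the IAM-credentials suggestion concrete, here is a hedged sketch using the AWS CLI. Note one deliberate deviation: instead of a separate bucket per client as suggested above, this scopes each client to their own prefix in the single shared bucket described in the question. The bucket name, user name, and policy name are the example values from this thread; this is a sketch of the pattern, not a vetted production policy.

```shell
#!/bin/sh
# Create an IAM user for client-1 that can only list and touch objects
# under the client-1/ prefix of the shared bucket.
aws iam create-user --user-name client-1-uploader

aws iam put-user-policy \
    --user-name client-1-uploader \
    --policy-name client-1-prefix-only \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:ListBucket",
          "Resource": "arn:aws:s3:::client-uploads-bucket",
          "Condition": {"StringLike": {"s3:prefix": ["client-1/*"]}}
        },
        {
          "Effect": "Allow",
          "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
          "Resource": "arn:aws:s3:::client-uploads-bucket/client-1/*"
        }
      ]
    }'

# Hand the resulting access key pair to the client.
aws iam create-access-key --user-name client-1-uploader
```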
Content by Norman. Spelling by iPhone.
@orhankutlu commented on GitHub (Apr 14, 2017):
@ngbranitsky I wish it were possible. Some of our clients are really old-fashioned. At first we offered them IAM credentials, but they asked for an SFTP server and an IP address (not even a domain) for their whitelist. They don't want to do any extra work at all. Unfortunately, it is the only way we can work with them.
@orhankutlu commented on GitHub (Apr 17, 2017):
@ggtakec Any comment?
@sqlbot commented on GitHub (Apr 17, 2017):
Let's say your bucket is mounted at /srv/s3fs/example-bucket/. Just move each user's home directory into the bucket:
$ sudo mv -v /home/client-1/ /srv/s3fs/example-bucket/client-1/
Then change the system's config for the user (assuming your SFTP server uses "system" users and doesn't have its own database) so that the user's home directory is in the bucket rather than on the hard drive:
$ sudo usermod -d /srv/s3fs/example-bucket/client-1
Don't follow advice from the Internet blindly. Test this on one user before applying it globally, of course.
I run an SFTP server using ProFTPd, backed by s3fs, and this is fundamentally how I do it. One significant difference in my setup is that my ProFTPd users are not actual system users in /etc/passwd, since ProFTPd can use its own password file. I moved my real SSH server to another port so that ProFTPd can have port 22. I have a user creation script that initially defines users with their home directory inside the s3fs mount point, creates all the default directory structures, and sets the permissions.
You only need to run one instance of s3fs for this, with -o allow_other. Additional suggestions for s3fs configuration options that work well with an SFTP server can be found in an answer I wrote at http://stackoverflow.com/a/23946418/1695906.
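The move-and-repoint steps above can be sketched as a small loop. The mount point and client names are the example values from this thread; the DRY_RUN guard is my own addition (it defaults to printing the commands rather than running them, so you can review before applying, per the warning above):

```shell
#!/bin/sh
# Move each SFTP user's home directory into an already-mounted s3fs bucket
# and repoint the account's home directory at the new location.
# DRY_RUN=1 (the default) only prints the commands; set DRY_RUN=0 to apply.
set -eu

MOUNT=/srv/s3fs/example-bucket

run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then echo "+ $*"; else "$@"; fi
}

for client in client-1 client-2; do
    # Move the user's home directory into the s3fs mount...
    run sudo mv -v "/home/$client/" "$MOUNT/$client/"
    # ...then point the account's home directory at the new location.
    run sudo usermod -d "$MOUNT/$client" "$client"
done
```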
@sqlbot commented on GitHub (Apr 17, 2017):
Correction to the previous comment: the username was missing from the usermod line.
$ sudo usermod -d /srv/s3fs/example-bucket/client-1 client-1
Depending on the SFTP server's tolerance for sanely following symlinks, you could alternatively symlink the user's new home directory in the bucket back into /home/.
@ggtakec commented on GitHub (Apr 23, 2017):
@orhankutlu
I recommend that you proceed according to @sqlbot's comments.
You can see the client-1/client-2 directories in the directory mounted by s3fs, and you can assign those directories as the home directories that the SFTP server uses.
Please note that you should specify some s3fs options to improve performance.
Also, be sure to write (upload) files to the client-1/client-2 directories, and to any directories below them, through s3fs. If you do not write through s3fs, you may have problems with file permissions.
If you need to verify this, you can directly check the file permissions below the mount point.
@sqlbot Thanks for your kindness.
Regards,
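For reference, a single s3fs mount serving all SFTP users might look like the sketch below. The bucket name and mount point are the example values from this thread; the specific options and their values are illustrative suggestions, so check the s3fs man page for what your version supports (and see the Stack Overflow answer linked above for more):

```shell
# One shared s3fs mount for all clients; values are illustrative.
s3fs client-uploads-bucket /srv/s3fs/client-uploads-bucket \
    -o allow_other \
    -o use_cache=/var/cache/s3fs \
    -o umask=0022
# allow_other : let non-root users (the SFTP accounts) access the mount
# use_cache   : cache object data on local disk to speed up re-reads
# umask       : default permission mask applied to files in the bucket
```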
@orhankutlu commented on GitHub (Apr 23, 2017):
@sqlbot Thank you a lot.