[GH-ISSUE #1738] Mount S3FS mount to EFS additionally and share to other Kubernetes pod #893

Open
opened 2026-03-04 01:49:43 +03:00 by kerem · 0 comments
Owner

Originally created by @s7an-it on GitHub (Aug 5, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1738

Additional Information

An SFTP pod, used by several microservices that process content and deliver the processed content back, has SFTP users chrooted into tenant-specific paths (e.g. tenant-1, tenant-2). These chrooted paths are mounted to different EFS points through the EFS provisioner.

Each tenant pod mounts an S3 bucket at /var/s3fs via s3fs. Additionally, the Kubernetes deployment mounts the per-tenant EFS shares (Kubernetes PVCs) from SFTP at the /efs/nfs path. That means when I put a file through SFTP, I see it in the s3fs pods under /efs/nfs, and a cronjob decrypts/encrypts the content and pushes it to an S3 path. More specifically, customer content is expected to be encrypted from the /efs/nfs path and put into S3 for retention, so the requirement is to place encrypted content into /var/s3fs and fetch some back; there are inbox/outbox folders for each direction. All of that works perfectly.

The problem comes from the fact that customers need to use the same SFTP endpoint to access the /var/s3fs content. S3 has inbox/outbox folders, and I see them both in S3 and in the /var/s3fs mounts locally in the s3fs pods. I tried to add a Kubernetes volume/PVC mount over /var/s3fs, which is already mounted by s3fs, and to share it over EFS with a new PVC. I also mounted the same EFS location into the user chroot in SFTP. The result is that I don't see the /var/s3fs content from the SFTP pod, or vice versa. When I go into the s3fs pods and run df -h, I only see the s3fs mount; the EFS mount doesn't show, but mount and cat /etc/mtab do show both mounts. I suspect it is something about libraries, permissions, or mounting over an already-mounted path — please advise. Also please advise if there is any other reasonable solution for this Kubernetes use case. I tried binding to another path and mounting that instead, but got the same result.
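One likely explanation (an assumption, not confirmed by the report) is mount propagation: a FUSE mount created inside one container's mount namespace defaults to private propagation, so it never becomes visible to other containers, to a PVC mounted over the same path, or to an NFS/EFS export of that directory — NFS exports file data, not foreign mount points. In Kubernetes, pods on the same node can sometimes share such a mount via a hostPath volume with mountPropagation. A minimal sketch, where the pod name, image, and host path are all hypothetical:

```yaml
# Sketch only — names, image, and paths are placeholders, not from the report.
apiVersion: v1
kind: Pod
metadata:
  name: s3fs-pod
spec:
  containers:
    - name: s3fs
      image: example/s3fs:latest      # hypothetical image that runs s3fs
      securityContext:
        privileged: true              # FUSE mounts typically need this (or /dev/fuse access)
      volumeMounts:
        - name: shared-s3fs
          mountPath: /var/s3fs
          # Propagate the FUSE mount out of this container to the host,
          # so other pods mounting the same hostPath can see it.
          mountPropagation: Bidirectional
  volumes:
    - name: shared-s3fs
      hostPath:
        path: /mnt/s3fs               # hypothetical host path
        type: DirectoryOrCreate
```

A consumer pod (e.g. the SFTP pod, if co-scheduled on the same node) would then mount the same hostPath with `mountPropagation: HostToContainer`. Note this only shares the mount between pods on one node; it does not make the FUSE mount appear through an EFS share.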

Version of s3fs being used (s3fs --version)

example: 1.86

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

example: 2.9.4

Kernel information (uname -r)

command result: uname -r

GNU/Linux Distribution, if applicable (cat /etc/os-release)

command result: cat /etc/os-release

s3fs command line used, if applicable

/etc/fstab entry, if applicable

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

if you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages
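For reference, the debug output the template asks for can be collected by running s3fs in the foreground with the debug options it mentions. The bucket name and mount point below are placeholders:

```shell
# Hypothetical bucket and mount point; -f keeps s3fs in the foreground
# so dbglevel/curldbg output goes to the terminal.
s3fs mybucket /var/s3fs \
  -o dbglevel=info \
  -o curldbg \
  -f
```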

Details about issue
