mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #1474] Encryption and Permission Issues with use_sse=kmsid (RHEL7 EC2 AWS) #777
Originally created by @robzdeloitte on GitHub (Nov 12, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1474
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.86 (commit:5614155) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse)
2.9.2
Kernel information (uname -r)
3.10.0-1127.18.2.el7.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
Red Hat Enterprise Linux Server release 7.8 (Maipo)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
n/a
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you run s3fs with the dbglevel or curldbg options, you can get detailed debug messages.
no relevant logs
Details about issue
Overall context: Our EC2 instance is attempting to put an object into an S3 bucket, which is mounted to a directory using s3fs. A Lambda function then executes the S3 copy command to move this file to a new S3 bucket. Both S3 buckets in this scenario are encrypted with a KMS master key from an external AWS account. This instance is able to successfully copy files to these buckets using:

aws s3 cp some-file.txt s3://our-bucket --sse aws:kms

However, we run into issues with the s3fs mount: we can see objects from S3 and move them from the mount directory to another location on the server, but we receive "permission denied" errors when attempting to mv or cp a file into it. The goal is to understand the appropriate s3fs equivalent of the above CLI command with aws:kms. We are facing two separate situations:
Try 1:

s3fs -o iam_role='masked-iam-role-name' -o 'nonempty' -o use_sse masked-bucket-name my-directory/

When we use -o use_sse, everything works fine - we can put objects into the bucket - but we receive an "(AccessDenied) when calling the CopyObject operation: Access Denied" message during the execution of the Lambda function. We are worried that, because we are not specific about the key being used, we are not encrypting the object properly. For this scenario we would like to confirm whether our approach to putting the encrypted object is correct.

Try 2:

s3fs -o iam_role='masked-iam-role-name' -o 'nonempty' -o use_sse=kmsid masked-bucket-name my-directory/

In this scenario we have specified the environment variable for the KMS ID, which is the full ARN of the KMS master key from the external AWS account that is used to encrypt the two buckets. When we mount using this command, we are able to pull objects from the mounted directory, but we are unable to put objects into it. We receive "permission denied" when it happens:

[root@our-server-hostname our-mounted-directory]# mv example-file-name our-mounted-directory/
mv: cannot create regular file 'example-file-name': Operation not permitted

Goal
Given our situation, we are hoping to identify which path we should be chasing and how to appropriately adjust for the correct path. Thanks in advance for any help!
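One detail worth noting in the description above: the KMS ID was supplied as the full key ARN, whereas s3fs's use_sse=kmsid option is typically given the bare key ID (the UUID portion). As a minimal sketch, with a made-up ARN standing in for the masked external-account key, the bare key ID can be extracted like this:

```shell
# Hypothetical KMS key ARN for illustration (not the reporter's real key).
kms_arn="arn:aws:kms:us-east-1:111122223333:key/12345678-90ab-cdef-1234-567890abcdef"

# The bare key ID is the last path segment of the ARN.
key_id="${kms_arn##*/}"
echo "$key_id"   # prints 12345678-90ab-cdef-1234-567890abcdef
```

The s3fs documentation also describes an inline form, use_sse=kmsid:<kms id>, which avoids relying on the environment variable; whether a full ARN is accepted there may depend on the s3fs version.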
@gaul commented on GitHub (Nov 15, 2020):
@robzdeloitte I successfully interacted with s3fs using KMS after some trial and error. My issues were using a bucket in the wrong region and using the incorrect format for the KMS ID. The latter should have the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. You can get more debugging information from s3fs by running s3fs -f -d, which should display any error logging.

@robzdeloitte commented on GitHub (Nov 16, 2020):
Thanks for your response @gaul!
Ideally we would like to get option 2 (Try 2) listed above working. The good news I see is that the IAM role was applied successfully. I ran the following command and received the output below. While it was running, I tried to copy a file, s3fs-logs-test, into the bucket, but received this error:

cp: cannot create regular file 'my-mount-directory/s3fs-logs-test': Operation not permitted

I also tried to create a file, robz123, directly in the directory, which failed.
It appears there were two primary errors:
404 Error
403 Permission Denied
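A 403 on SSE-KMS writes with a cross-account key is commonly caused by the instance role lacking kms:GenerateDataKey (and kms:Decrypt) on the external key, either in the role's IAM policy or in the key's own key policy. A minimal sketch of how to probe this directly with the AWS CLI, assuming a made-up key ARN (the actual call needs live AWS credentials, so it is left commented out):

```shell
# Hypothetical key ARN; substitute the real external-account key.
key_arn="arn:aws:kms:us-east-1:111122223333:key/12345678-90ab-cdef-1234-567890abcdef"
echo "checking role access to key ${key_arn##*/}"

# S3 calls GenerateDataKey on your behalf for SSE-KMS uploads; if the
# role cannot make this call itself, s3fs PUTs will fail with 403.
# Requires credentials for the instance role; uncomment to run:
# aws kms generate-data-key --key-id "$key_arn" --key-spec AES_256 \
#   --query KeyId --output text
```

If the uncommented call returns AccessDeniedException, the fix is on the key-policy/IAM side rather than in the s3fs options.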
Full Log Output