mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1214] AWS CloudWatch Agent overriding credentials #644
Originally created by @usernamemikem on GitHub (Dec 6, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1214
This may be an FYI, because I'm not sure it's 100% your issue.
Using:

OS (lsb_release -a):
Distributor ID: Ubuntu
Description: Ubuntu 18.04.3 LTS
Release: 18.04
Codename: bionic

Version of s3fs being used (s3fs --version):
Amazon Simple Storage Service File System V1.85 (commit:bdfb9ee) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse):
2.9.7-1ubuntu1

Kernel information (uname -r):
4.15.0-1056-aws
I installed the S3FS software to mount an S3 Bucket. I also installed the CloudWatch Agent to capture the logs on my EC2.
I created two IAM users, CloudWatch-EC2Log-Agent and SFTP-Bucket-User, both using access key IDs:
The access key ID for SFTP-Bucket-User is stored in /etc/passwd-s3fs.
The access key ID for CloudWatch-EC2Log-Agent is stored in /root/.aws/credentials.
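The split described above can be sketched as follows. This is a hypothetical setup, not taken from the issue: the key values are placeholders, and ROOT stands in for / (a temp dir here so the sketch is safe to run as-is).

```shell
# Sketch of the two separate credential stores (placeholder keys).
ROOT=$(mktemp -d)            # would be / on a real host
mkdir -p "$ROOT/etc" "$ROOT/root/.aws"

# s3fs reads ACCESS_KEY_ID:SECRET_ACCESS_KEY from its own password file
printf 'AKIAEXAMPLEKEY:examplesecret\n' > "$ROOT/etc/passwd-s3fs"
chmod 600 "$ROOT/etc/passwd-s3fs"   # s3fs rejects password files with lax permissions

# the CloudWatch agent uses the standard AWS shared credentials file
cat > "$ROOT/root/.aws/credentials" <<'EOF'
[AmazonCloudWatchAgent]
aws_access_key_id = AKIAEXAMPLEKEY2
aws_secret_access_key = examplesecret2
EOF
```

The point of the separation is that each service should only ever read its own file; the issue below is about that assumption breaking.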
I mounted the S3 Bucket first then installed the CloudWatch agent. It ran for a couple of weeks without issue until I rebooted.
After the reboot, the bucket wasn't mounted. When I tried to mount it manually with sudo mount -a, I found these errors:
s3fs.cpp:s3fs_check_service(3866): Failed to connect by sigv4, so retry to connect by signature version 2.
s3fs.cpp:s3fs_check_service(3881): invalid credentials(host=https://s3.us-east-1.amazonaws.com) - result of checking service.
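For context, a mount -a run like the one above implies an /etc/fstab entry along these lines. The bucket name and mountpoint here are assumptions, not from the issue; passwd_file is a real s3fs option that pins the mount to its own credential file:

```
# hypothetical /etc/fstab entry for the s3fs mount
s3fs#my-sftp-bucket /mnt/s3 fuse _netdev,allow_other,passwd_file=/etc/passwd-s3fs 0 0
```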
Basically, the CloudWatch user's credentials were being used for both services. Even though the credentials for the SFTP bucket user are stored in a completely different place, they were ignored or overridden.
I stopped the CloudWatch Agent and renamed the /root/.aws/credentials file. The bucket was able to mount again.
Now I'm using only one user, the SFTP bucket user, and have given it the same permissions the CloudWatch agent had. There is no longer any confusion between the credentials: the S3 bucket is mounted, and CloudWatch is receiving logs and creating alarms as expected.
Of course I was banging my head against the wall because I thought I had caused it with an update I ran, but I was able to reproduce it on a temporary EC2 instance.
@gaul commented on GitHub (Oct 10, 2020):
I believe that 0569cec3ea addresses your symptom; it is included in 1.86. Now s3fs checks the password file and environment before the ${HOME}/.aws/credentials file.
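Regardless of version, the lookup order can be made moot by naming the credential source explicitly on the mount. A hedged sketch (bucket name and mountpoint are assumptions; passwd_file and url are real s3fs options):

```
# pin s3fs to its own password file so ~/.aws/credentials is never consulted
s3fs my-sftp-bucket /mnt/s3 -o passwd_file=/etc/passwd-s3fs \
    -o url=https://s3.us-east-1.amazonaws.com
```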