mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1459] S3FS Potential Flakiness and Troubleshooting #766
Originally created by @jamiejackson on GitHub (Oct 22, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1459
Additional Information
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.87 (commit:unknown) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.2
Kernel information (uname -r)
3.10.0-1127.19.1.el7.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Details about issue
I am getting intermittent issues on a couple of servers: one is an SFTP server and the other is a web server. Both use s3fs mounts as a backend.
I have two monitors watching these services.
I suspect that S3, s3fs, or some connectivity issue is causing occasional flakiness. The problem is probably too intermittent and short-lived for me to do any live debugging, so I could use some pointers on how to prepare for an investigation.
I imagine the answer will involve enabling logging; if so, please supply the suggested log levels and any other relevant information.
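For reference, one way to capture verbose logs is to run s3fs in the foreground with its debug options enabled. This is a sketch, not taken from the issue: the bucket name, mount point, and credentials path below are placeholders, and `dbglevel`/`curldbg` are standard s3fs options for raising log verbosity and dumping libcurl activity.

```shell
# Hypothetical bucket and mount point; adjust to your setup.
# -f keeps s3fs in the foreground so messages go to stderr instead of syslog;
# -d raises the debug level; dbglevel=info logs informational messages;
# curldbg additionally dumps libcurl request/response details.
s3fs mybucket /mnt/s3 \
  -o passwd_file="${HOME}/.passwd-s3fs" \
  -o dbglevel=info \
  -o curldbg \
  -f -d 2>&1 | tee /tmp/s3fs-debug.log
```

Running in the foreground under `tee` makes it easy to leave the mount logging for days and then search the capture around the timestamps your monitors report.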
@gaul commented on GitHub (Dec 31, 2020):
What does flakiness mean? Does the s3fs process exit? If so, attaching gdb and getting a backtrace will help. Otherwise you can collect more logs with -f -d.
@jamiejackson commented on GitHub (Dec 31, 2020):
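The gdb advice above can be sketched as follows. This assumes gdb is installed and s3fs is currently running; installing the s3fs debug symbols will make the backtrace far more useful.

```shell
# Find the running s3fs process and dump a backtrace of every thread.
# -batch exits gdb after running the -ex commands, so this is safe to
# script; the output file path is arbitrary.
pid=$(pgrep -x s3fs)
gdb -p "$pid" \
    -batch \
    -ex "thread apply all bt" \
    > /tmp/s3fs-backtrace.txt 2>&1
```

Note that attaching briefly pauses the process, so avoid running this on a heavily loaded production mount unless you are actively chasing a hang or crash.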
I have an s3fs mount which is, in turn, served via HTTPD. I have an external monitor which hits one of these files' URLs. This intermittently reports failed requests.
The other use case I have is an SFTP server whose upload target folder is an S3 location (mounted on the SFTP server). I have another monitor that uploads a small file. That monitor also intermittently reports failed uploads.
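Since both monitors only report *when* a request or upload failed, a practical next step is to correlate those failure timestamps with s3fs entries in syslog. The helper below is a sketch, not part of the issue: the log-line format and the 60-second window are assumptions to adjust for your actual syslog layout.

```python
from datetime import datetime

# Sketch: correlate monitor failure timestamps with s3fs syslog lines.
# Assumed line format: "2020-10-22T14:03:05 host s3fs: <message>".

def s3fs_errors_near(failure_time, syslog_lines, window_seconds=60):
    """Return syslog lines mentioning s3fs within window_seconds
    of a monitor-reported failure."""
    matches = []
    for line in syslog_lines:
        try:
            stamp = datetime.fromisoformat(line.split(" ", 1)[0])
        except ValueError:
            continue  # skip lines that do not start with a timestamp
        near = abs((stamp - failure_time).total_seconds()) <= window_seconds
        if near and "s3fs" in line:
            matches.append(line)
    return matches

logs = [
    "2020-10-22T14:03:05 web1 s3fs: curl error: couldn't connect to host",
    "2020-10-22T14:20:00 web1 sshd: session opened",
]
failure = datetime(2020, 10, 22, 14, 3, 30)
print(s3fs_errors_near(failure, logs))  # only the s3fs curl error, 25s away
```

Feeding it the timestamps from the external monitor narrows a multi-day log down to the handful of windows where the mount actually misbehaved.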
@gaul commented on GitHub (Feb 8, 2021):
Please reopen if symptoms persist.