[GH-ISSUE #1409] failed to try resolving symlinks #745
Originally created by @guillelucero on GitHub (Sep 18, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1409
Additional Information
We are building a Docker image based on ubuntu:14.04. This Docker image runs in a Kubernetes cluster (EKS).
Client Version: v1.17.9-eks-4c6976
Server Version: v1.17.9-eks-4c6976
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.87 (commit:b5ffd41) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Compiled from GitHub
Kernel information (uname -r)
4.14.193-149.317.amzn2.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Ubuntu"
VERSION="14.04.6 LTS, Trusty Tahr"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 14.04.6 LTS"
VERSION_ID="14.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
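For reference, a hedged example of such a debug invocation (the bucket name and mount point are placeholders, not taken from this report):

```sh
# Run s3fs in the foreground with verbose debug output and curl debugging.
s3fs mybucket /mnt/s3 -f -o dbglevel=info -o curldbg

# Collect any s3fs messages from the system log.
journalctl | grep s3fs
```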
Details about issue
The containers run as a Kubernetes DaemonSet. It works fine.
During a stress test, the containers crash, showing the error above.
After the crash, no new containers launched by the DaemonSet could run.
The only way to recover our Kubernetes node is to delete it from the cluster; in that case autoscaling adds a new one.
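As a sketch of the recovery step described above, using standard kubectl commands (the node name is a placeholder, not from this report):

```sh
# Drain the affected node, then remove it so the autoscaling group replaces it.
kubectl drain <node-name> --ignore-daemonsets
kubectl delete node <node-name>
```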
@gaul commented on GitHub (Oct 10, 2020):
Could you attach gdb so we can determine the backtrace of the crashing s3fs process? Running with debug logging -d might help as well. Finally, try testing with master, which includes a few concurrency fixes.
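A minimal sketch of the gdb step suggested above (the pidof lookup assumes a single s3fs process):

```sh
# Attach gdb to the running s3fs process so a backtrace can be captured on crash.
gdb -p "$(pidof s3fs)"
# Inside gdb: `continue`, wait for the crash, then `bt` to print the backtrace.
```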
@skolenkin commented on GitHub (Oct 20, 2020):
Hello,
I have the same issue.
My environment:
EKS 1.17
EKS worker node:
Docker version:
Containers are running as a Kubernetes DaemonSet, but after a few redeploys I get the following error:
failed to try resolving symlinks in path
I was able to run the container without the shared volume:
Also, I have tried configuring a shared mount on the k8s nodes, but these steps didn't help:
The following DaemonSet config and run.sh script work, but not always:
Below is the run.sh startup script:
Dockerfile:
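The DaemonSet config, run.sh, and Dockerfile referenced above were attached to the original GitHub issue and are not reproduced here. As a generic, hypothetical sketch of the shared-mount approach mentioned (the host path, bucket name, and mount options are assumptions, not the reporter's actual configuration):

```sh
# On the Kubernetes node: turn the host directory into a shared mount so that a
# mount performed inside a privileged container propagates back to the host.
mkdir -p /mnt/s3-data
mount --bind /mnt/s3-data /mnt/s3-data
mount --make-shared /mnt/s3-data

# Inside the DaemonSet container (privileged, with Bidirectional mountPropagation),
# mount the bucket onto the same host path:
s3fs mybucket /mnt/s3-data -o iam_role=auto -o allow_other
```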
@jimdumont commented on GitHub (Nov 23, 2020):
Any progress on this issue? It is preventing an upgrade to s3fs V1.87, which has some interesting updates. Thanks!
@gaul commented on GitHub (Feb 19, 2021):
I'm not sure what the issue is here. Does s3fs crash, or do symlinks fail to resolve? You might also try testing with the latest version. I will close out this issue unless there are some steps to reproduce the symptoms.
@gaul commented on GitHub (Apr 21, 2021):
Please reopen if symptoms persist.