mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #1052] Pool full: destroy the oldest handler #577
Originally created by @cifuentesatilio on GitHub (Jun 24, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1052
Version of s3fs being used (s3fs --version)
1.85
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.7
Kernel information (uname -r)
4.15.0-1041-aws
GNU/Linux Distribution, if applicable (cat /etc/os-release)
NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"
s3fs command line used, if applicable
sudo s3fs bucket -o use_cache=/tmp -o passwd_file=/etc/passwd-s3fs -o allow_other -o uid=1000 -o gid=1000 -o mp_umask=002 -o multireq_max=5 /myMount -o dbglevel=info -f -o curldbg
/myMount is located in the root directory, not in the user's home directory.
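One option in the command line above worth double-checking is mp_umask=002, which masks permission bits off the mount point. A quick sketch of what that umask yields (this is standard umask arithmetic, not anything s3fs-specific):

```shell
# mp_umask masks bits out of the mount point's 0777 permissions.
# With mp_umask=002, "others" lose write permission:
printf '%o\n' $(( 0777 & ~0002 ))   # prints 775 (rwxrwxr-x)
```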
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
Details about issue
The result of the operations is correct, but eventually I hit the message Pool full: destroy the oldest handler and the server does not progress from there, so I press Ctrl+C and the mount is destroyed. I want to know whether I have something configured incorrectly that I can fix.
Thank you all
@cifuentesatilio commented on GitHub (Jun 30, 2019):
Hello,
I solved my issue by downgrading to version 1.84. I know it is not the best solution, but right now this version lets me continue.
If somebody has another option, I am glad to try it.
Thanks,
@lxknvlk commented on GitHub (Aug 13, 2019):
I have the same problem, stuck on Pool full: destroy the oldest handler. With 1.84 I instead get stuck on the previous line: [INF] curl.cpp:RequestPerform(2251): HTTP response code 200. I'm using an IAM role with S3 read-only permissions. Just tested: same results with a full-access IAM role.
@charsi commented on GitHub (May 16, 2021):
Encountering the same issue on s3fs 1.86 and libcurl4 7.68.0-1ubuntu2.5
@leojonathanoh commented on GitHub (Nov 5, 2021):
same issue on s3fs 1.86 and libcurl4 7.68.0-1ubuntu2.7
EDIT: I realized that if I omit -o dbglevel=info -f -o curldbg, the mount will succeed. So the debug options are the problem.
@gaul commented on GitHub (Nov 7, 2021):
Please test with the latest 1.90.
@tiberiumihai commented on GitHub (Dec 13, 2021):
Is there a mirror for s3fs v1.90 built for Ubuntu, or do I have to compile from source to test this out?
I'm using the following in my /etc/fstab to connect to digital ocean space s3 compatible bucket, and added /uploads/ symlink to apache webroot for my web app so that uploads and requests go directly to s3fs:
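The fstab entry itself did not survive the mirror. For context only, a typical s3fs entry for an S3-compatible endpoint looks roughly like the line below; the bucket name, endpoint URL, and mount point are placeholders, not the reporter's actual values:

```
my-bucket /mnt/bucket-name fuse.s3fs _netdev,allow_other,use_path_request_style,url=https://nyc3.digitaloceanspaces.com,passwd_file=/etc/passwd-s3fs 0 0
```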
When accessing the frontend of the website everything is OK, but when accessing the admin CRM, which loads content from the bucket, it suddenly stops working. The first file request on the admin side that brought s3fs down was a CSS file using a query param to bust the browser cache.
mount -a returns this: s3fs: unable to access MOUNTPOINT /mnt/bucket-name/uploads: Transport endpoint is not connected
and sudo tail -f /var/log/syslog returns this:
I'm using s3fs v1.86-1. The files are uploaded using a private ACL, but I tested with a public ACL as well and it didn't work. Any help? Thanks!
@adiroiban commented on GitHub (Apr 27, 2023):
Are you guys trying to mount over a bucket that is already mounted?
If the mounted bucket is empty s3fs will not complain and will try to remount it.
@ggtakec commented on GitHub (May 4, 2023):
@tiberiumihai
The Pool full: destroy the oldest handler message just indicates that a CURL handle is being destroyed, so I don't think it is the direct cause.
First, you need to check whether you can access /mnt/bucket-name/uploads as the Apache execution user (e.g. www-data), for example: sudo -u www-data ls /mnt/bucket-name/uploads
If it's a permission problem, change the permissions below the mount point.
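The access check and permission fix described above can be simulated on any directory. The www-data user and /mnt/bucket-name/uploads path are from the thread; the throwaway temp directory below is just an illustration of the chmod pattern:

```shell
# On the real system the check would be:
#   sudo -u www-data ls /mnt/bucket-name/uploads
# Here we demonstrate the fix pattern on a throwaway directory.
d=$(mktemp -d)
chmod 700 "$d"            # only the owner can enter: www-data would get EACCES
stat -c %a "$d"           # prints 700
chmod o+rx "$d"           # grant read+execute to others, as you would below the mount point
stat -c %a "$d"           # prints 705
rmdir "$d"
```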
And please use the latest s3fs if possible.