[GH-ISSUE #1052] Pool full: destroy the oldest handler #577

Open
opened 2026-03-04 01:46:53 +03:00 by kerem · 8 comments
Owner

Originally created by @cifuentesatilio on GitHub (Jun 24, 2019).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1052

Version of s3fs being used (s3fs --version)

1.85

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.7

Kernel information (uname -r)

4.15.0-1041-aws

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="18.04.2 LTS (Bionic Beaver)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 18.04.2 LTS"
VERSION_ID="18.04"

s3fs command line used, if applicable

sudo s3fs bucket -o use_cache=/tmp -o passwd_file=/etc/passwd-s3fs -o allow_other -o uid=1000 -o gid=1000 -o mp_umask=002 -o multireq_max=5 /myMount -o dbglevel=info -f -o curldbg

/myMount is located in the root directory, not in the user's home directory.

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

[CRT] s3fs.cpp:set_s3fs_log_level(297): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_mountpoint_attribute(4378): PROC(uid=0, gid=0) - MountPoint(uid=1000, gid=0, mode=40775)
[INF] s3fs.cpp:s3fs_init(3458): init v1.85(commit:a78d8d1) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3802): check services.
[INF]       curl.cpp:CheckBucket(3273): check a bucket.
[INF]       curl.cpp:prepare_url(4527): URL is https://s3.amazonaws.com/bucket/
[INF]       curl.cpp:prepare_url(4559): URL changed is https://bucket.s3.amazonaws.com/
[INF]       curl.cpp:insertV4Headers(2610): computing signature [GET] [/] [] []
[INF]       curl.cpp:url_to_host(101): url is https://s3.amazonaws.com
*   Trying 52.216.138.59...
* TCP_NODELAY set
* Connected to yfapp.s3.amazonaws.com (52.216.138.59) port 443 (#0)
* found 133 certificates in /etc/ssl/certs/ca-certificates.crt
* found 399 certificates in /etc/ssl/certs
* SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
*  server certificate verification OK
*  server certificate status verification SKIPPED
*  common name: *.s3.amazonaws.com (matched)
*  server certificate expiration date OK
*  server certificate activation date OK
*  certificate public key: RSA
*  certificate version: #3
*  subject: C=US,ST=Washington,L=Seattle,O=Amazon.com Inc.,CN=*.s3.amazonaws.com
*  start date: Wed, 07 Nov 2018 00:00:00 GMT
*  expire date: Fri, 07 Feb 2020 12:00:00 GMT
*  issuer: C=US,O=DigiCert Inc,OU=www.digicert.com,CN=DigiCert Baltimore CA-2 G2
*  compression: NULL
> GET / HTTP/1.1
host: yfapp.s3.amazonaws.com
User-Agent: s3fs/1.85 (commit hash a78d8d1; OpenSSL)
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=AKIA25PJ2VZ47K47EFZS/20190624/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=db9889dd68dbb0c36eed68418744c3c7fcc7e95005a6126e72d36f934aa860d7
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20190624T233321Z
< HTTP/1.1 200 OK
< x-amz-id-2: Ta21+MxQXVKEAaWtGhgexNRTLoqKwJZpH90Fhc5WaTEehPdy4lAG3+aUzfIgX7I7EYbt1+sPcKQ=
< x-amz-request-id: E0EC10B38B25564E
< Date: Mon, 24 Jun 2019 23:33:23 GMT
< x-amz-bucket-region: us-east-1
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Server: AmazonS3
<
* Connection #0 to host bucket.s3.amazonaws.com left intact
[INF]       curl.cpp:RequestPerform(2252): HTTP response code 200
[INF] curl.cpp:ReturnHandler(315): Pool full: destroy the oldest handler
[INF] s3fs.cpp:s3fs_destroy(3511): destroy

Details about issue

The operations themselves complete correctly, but at the end I get the message Pool full: destroy the oldest handler and the process does not go any further from there, so I press Ctrl+C and it just logs destroy. I want to know whether I have something configured incorrectly, and how to fix it.

Thank you all

@cifuentesatilio commented on GitHub (Jun 30, 2019):

Hello,

I worked around my issue by downgrading to version 1.84. I know it is not the best solution, but right now this version lets me continue.

If somebody has another option, I am glad to try it.

Thanks,
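For anyone who wants to pin a specific s3fs release rather than use the distro package, a build-from-source sketch (assumes the standard s3fs-fuse autotools build described in the project README and Debian/Ubuntu package names; pick whichever release tag you want to test):

```shell
# Build dependencies on Debian/Ubuntu (package names may differ per release)
sudo apt-get install -y build-essential automake autotools-dev libtool pkg-config \
    libfuse-dev libcurl4-openssl-dev libxml2-dev libssl-dev

git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse
git checkout v1.84        # or v1.90, etc. -- the release tag to test
./autogen.sh
./configure
make
sudo make install         # installs to /usr/local/bin/s3fs by default
s3fs --version            # confirm the pinned version is first on PATH
```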

@lxknvlk commented on GitHub (Aug 13, 2019):

I have the same problem: stuck on Pool full: destroy the oldest handler. But with 1.84 I get stuck on the previous line, [INF] curl.cpp:RequestPerform(2251): HTTP response code 200. I'm using an IAM role with S3 read-only permissions. Just tested: same result with a full-access IAM role.

@charsi commented on GitHub (May 16, 2021):

Encountering the same issue on s3fs 1.86 and libcurl4 7.68.0-1ubuntu2.5

@leojonathanoh commented on GitHub (Nov 5, 2021):

Same issue on s3fs 1.86 and libcurl4 7.68.0-1ubuntu2.7.

EDIT: I realized that if I omit -o dbglevel=info -f -o curldbg, the mount succeeds. So the debug options are the problem.
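Worth noting that -f is the standard FUSE foreground option, so with the debug flags s3fs never daemonizes and the terminal sits on the last log line even when the mount itself is healthy. A quick way to check whether the mount actually succeeded, from a second shell (paths taken from the command line in the issue body; assumes util-linux's mountpoint utility is available):

```shell
# Mount without the foreground/debug flags so s3fs daemonizes normally
sudo s3fs bucket /myMount -o use_cache=/tmp -o passwd_file=/etc/passwd-s3fs -o allow_other

# Verify the mount from another shell
mountpoint -q /myMount && echo "mounted" || echo "not mounted"
grep s3fs /proc/mounts
```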

@gaul commented on GitHub (Nov 7, 2021):

Please test with the latest 1.90.

@tiberiumihai commented on GitHub (Dec 13, 2021):

Is there a mirror of s3fs v1.90 built for Ubuntu, or do I have to compile from source to test this?

I'm using the following in my /etc/fstab to connect to a DigitalOcean Spaces S3-compatible bucket, and added a /uploads/ symlink in the Apache webroot for my web app so that uploads and requests go directly through s3fs:

bucket-name:/uploads /mnt/bucket-name/uploads fuse.s3fs _netdev,allow_other,nonempty,noexec,nosuid,nodev,noatime,uid=33,gid=33,mp_umask=022,umask=133,use_cache=/tmp,enable_noobj_cache,ensure_diskfree=1024,retries=8,max_stat_cache_size=1536000,multireq_max=500,parallel_count=40,nomixupload,use_path_request_style,dbglevel=info,curldbg,url=https://ams3.digitaloceanspaces.com 0 0
www-data www-data 41 Dec 12 10:00 /var/www/html/uploads -> /mnt/bucket-name/uploads

When accessing the frontend of the website everything is OK, but when accessing the admin CRM, which loads content from the bucket, it suddenly stops working. The first file request in the admin that took s3fs down was a CSS file using a query param to bust the browser cache.
mount -a returns this:
s3fs: unable to access MOUNTPOINT /mnt/bucket-name/uploads: Transport endpoint is not connected

and sudo tail -f /var/log/syslog returns this:

s3fs[2019]: < x-amz-request-id: <id>
s3fs[2019]: < content-type: application/json
s3fs[2019]: < date: Mon, 13 Dec 2021 22:15:22 GMT
s3fs[2019]: < strict-transport-security: max-age=15552000; includeSubDomains; preload
s3fs[2019]: < vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
s3fs[2019]: < 
s3fs[2019]: * Connection #0 to host ams3.digitaloceanspaces.com left intact
s3fs[2019]:       HTTP response code 200
s3fs[2019]: Pool full: destroy the oldest handler

I'm using s3fs v1.86-1. Files are uploaded with a private ACL, but I tested with a public ACL as well and it didn't work either.

Any help? Thanks!
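Not a root-cause fix, but when a FUSE mount is left in the "Transport endpoint is not connected" state, the usual recovery is to force-unmount the dead mount point and remount (paths taken from the fstab line above; assumes the fuse utilities are installed):

```shell
# Detach the dead FUSE mount; fall back to a lazy unmount if it is still busy
sudo fusermount -u /mnt/bucket-name/uploads || sudo umount -l /mnt/bucket-name/uploads

# Remount everything listed in /etc/fstab (including the fuse.s3fs entry)
sudo mount -a

# Confirm the mount point is reachable again
ls /mnt/bucket-name/uploads
```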

@adiroiban commented on GitHub (Apr 27, 2023):

Are you guys trying to mount over a bucket that is already mounted?

If the mounted bucket is empty s3fs will not complain and will try to remount it.

@ggtakec commented on GitHub (May 4, 2023):

@tiberiumihai
The Pool full: destroy the oldest handler message just indicates that a curl handle is being destroyed, so I don't think it is the direct cause.

First, check whether you can access /mnt/bucket-name/uploads as the Apache user (e.g. www-data):
(e.g. sudo -u www-data ls /mnt/bucket-name/uploads)
If it is a permission problem, change the permissions below the mount point.
And please use the latest s3fs if possible.
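To make the permission check above concrete, a small sketch (mount point and user names taken from the earlier comments; the expected ownership assumes the uid=33,gid=33 options from the fstab line, which map to www-data on Debian/Ubuntu):

```shell
# Can the Apache user traverse and list the mount point?
sudo -u www-data ls /mnt/bucket-name/uploads

# Inspect ownership and mode of the mount point itself;
# with uid=33,gid=33 it should show www-data:www-data
stat -c '%U:%G %a %n' /mnt/bucket-name/uploads
```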
