[GH-ISSUE #725] Transport endpoint is not connected #412

Closed
opened 2026-03-04 01:45:19 +03:00 by kerem · 28 comments
Owner

Originally created by @birdhackor on GitHub (Feb 26, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/725

Additional Information

The following information is very important in order to help us to help you. Omission of the following details may delay your support request or cause it to receive no attention at all.

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.79(commit:unknown) with GnuTLS(gcrypt)

Version of fuse being used (pkg-config --modversion fuse)

fusermount version: 2.9.4

System information (uname -r)

4.4.0-116-generic

Distro (cat /etc/issue)

Ubuntu 16.04.3 LTS \n \l

s3fs command line used (if applicable)

s3fs qweqwcqqvwaav s3-dd -o dbglevel=info -o curldbg

/etc/fstab entry (if applicable):

LABEL=cloudimg-rootfs   /        ext4   defaults        0 0
LABEL=UEFI      /boot/efi       vfat    defaults        0 0

s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)

If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.

s3fs[7608]: set_s3fs_log_level(250): change debug level from [CRT] to [INF]
s3fs[7608]:     PROC(uid=1000, gid=1000) - MountPoint(uid=1000, gid=1000, mode=40755)
s3fs[7611]: s3fs_init(3294): init v1.79(commit:unknown) with GnuTLS(gcrypt)
s3fs[7611]: check services.
s3fs[7611]:       check a bucket.
s3fs[7611]:       URL is http://s3.amazonaws.com/qweqwcqqvwaav/
s3fs[7611]:       URL changed is http://qweqwcqqvwaav.s3.amazonaws.com/
s3fs[7611]:       computing signature [GET] [/] [] []
s3fs[7611]:       url is http://s3.amazonaws.com
s3fs[7611]: Libgcrypt warning: missing initialization - please fix the application
s3fs[7611]: *   Trying 54.231.82.235...
s3fs[7611]: * Connected to qweqwcqqvwaav.s3.amazonaws.com (54.231.82.235) port 80 (#0)
s3fs[7611]: > GET / HTTP/1.1#015#012host: qweqwcqqvwaav.s3.amazonaws.com#015#012Accept: */*#015#012Authorization: AWS4-HMAC-SHA256 Credential=AKIAJVKLVEK4KMBXVFFA/20180226/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=fc1ec54a275b01cfa8e37caeb4cd6e47adcb9907cecd6aa4f69bbf1c4bf37f8c#015#012x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855#015#012x-amz-date: 20180226T173323Z
s3fs[7611]: < HTTP/1.1 307 Temporary Redirect
s3fs[7611]: < x-amz-bucket-region: ap-northeast-1
s3fs[7611]: < x-amz-request-id: 2BD7680E74643CA0
s3fs[7611]: < x-amz-id-2: HX9LWuQ4+9+TLRiAgwUKEdxT7TwOaS4xCUYxPGl18FEYyrYWrUkL7UnqXtj6xCIXV4x7HmbZmuM=
s3fs[7611]: < Location: http://qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com/
s3fs[7611]: < Content-Type: application/xml
s3fs[7611]: < Transfer-Encoding: chunked
s3fs[7611]: < Date: Mon, 26 Feb 2018 17:33:23 GMT
s3fs[7611]: < Server: AmazonS3
s3fs[7611]: <
s3fs[7611]: * Ignoring the response-body
s3fs[7611]: * Connection #0 to host qweqwcqqvwaav.s3.amazonaws.com left intact
s3fs[7611]: * Issue another request to this URL: 'http://qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com/'
s3fs[7611]: *   Trying 52.219.0.87...
s3fs[7611]: * Connected to qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com (52.219.0.87) port 80 (#1)
s3fs[7611]: > GET / HTTP/1.1#015#012Host: qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com#015#012Accept: */*#015#012x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855#015#012x-amz-date: 20180226T173323Z
s3fs[7611]: < HTTP/1.1 403 Forbidden
s3fs[7611]: < x-amz-bucket-region: ap-northeast-1
s3fs[7611]: < x-amz-request-id: 6100DE3E03F220E0
s3fs[7611]: < x-amz-id-2: 6H/UIt5krcqDvsFLZCk2T9QoxzurUHS1qRe45Px9E0rYHam6FDxvmGeq6lOtB7OAO3VmIyVW+8A=
s3fs[7611]: < Content-Type: application/xml
s3fs[7611]: < Transfer-Encoding: chunked
s3fs[7611]: < Date: Mon, 26 Feb 2018 17:33:23 GMT
s3fs[7611]: < Server: AmazonS3
s3fs[7611]: <
s3fs[7611]: * Connection #1 to host qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com left intact
s3fs[7611]:       HTTP response code 403 was returned, returning EPERM
s3fs[7611]: CheckBucket(2675): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>#012<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>6100DE3E03F220E0</RequestId><HostId>6H/UIt5krcqDvsFLZCk2T9QoxzurUHS1qRe45Px9E0rYHam6FDxvmGeq6lOtB7OAO3VmIyVW+8A=</HostId></Error>
s3fs[7611]: s3fs_check_service(3691): Could not connect, so retry to connect by signature version 2.
s3fs[7611]:       check a bucket.
s3fs[7611]:       URL is http://s3.amazonaws.com/qweqwcqqvwaav/
s3fs[7611]:       URL changed is http://qweqwcqqvwaav.s3.amazonaws.com/
s3fs[7611]: * Hostname qweqwcqqvwaav.s3.amazonaws.com was found in DNS cache
s3fs[7611]: *   Trying 54.231.82.235...
s3fs[7611]: * Connected to qweqwcqqvwaav.s3.amazonaws.com (54.231.82.235) port 80 (#0)
s3fs[7611]: > GET / HTTP/1.1#015#012Host: qweqwcqqvwaav.s3.amazonaws.com#015#012Accept: */*#015#012Authorization: AWS AKIAJVKLVEK4KMBXVFFA:zPCIvr0Mfm73QcX31hBibfzVm9M=#015#012Date: Mon, 26 Feb 2018 17:33:24 GMT
s3fs[7611]: < HTTP/1.1 307 Temporary Redirect
s3fs[7611]: < x-amz-bucket-region: ap-northeast-1
s3fs[7611]: < x-amz-request-id: DA469D976A8268FD
s3fs[7611]: < x-amz-id-2: digirw2cTScoagwnAU9QrDpqgI+v8vUnUsxuZHa4JWimVcGh4XN55AZ+iL06YHU1utGZktnH4R4=
s3fs[7611]: < Location: http://qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com/
s3fs[7611]: < Content-Type: application/xml
s3fs[7611]: < Transfer-Encoding: chunked
s3fs[7611]: < Date: Mon, 26 Feb 2018 17:33:24 GMT
s3fs[7611]: < Server: AmazonS3
s3fs[7611]: <
s3fs[7611]: * Ignoring the response-body
s3fs[7611]: * Connection #0 to host qweqwcqqvwaav.s3.amazonaws.com left intact
s3fs[7611]: * Issue another request to this URL: 'http://qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com/'
s3fs[7611]: * Hostname qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com was found in DNS cache
s3fs[7611]: *   Trying 52.219.0.87...
s3fs[7611]: * Connected to qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com (52.219.0.87) port 80 (#1)
s3fs[7611]: > GET / HTTP/1.1#015#012Host: qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com#015#012Accept: */*#015#012Date: Mon, 26 Feb 2018 17:33:24 GMT
s3fs[7611]: < HTTP/1.1 403 Forbidden
s3fs[7611]: < x-amz-bucket-region: ap-northeast-1
s3fs[7611]: < x-amz-request-id: A4BCB32D4097E437
s3fs[7611]: < x-amz-id-2: JwztF3XmUjgTVo+cLpTHOZJ8Wt4FNmA2bFK1nMRDjW9yVOV0cvNpzDEC6nxeKiZ8UzzjHPaRDSg=
s3fs[7611]: < Content-Type: application/xml
s3fs[7611]: < Transfer-Encoding: chunked
s3fs[7611]: < Date: Mon, 26 Feb 2018 17:33:24 GMT
s3fs[7611]: < Server: AmazonS3
s3fs[7611]: <
s3fs[7611]: * Connection #1 to host qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com left intact
s3fs[7611]:       HTTP response code 403 was returned, returning EPERM
s3fs[7611]: CheckBucket(2675): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>#012<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>A4BCB32D4097E437</RequestId><HostId>JwztF3XmUjgTVo+cLpTHOZJ8Wt4FNmA2bFK1nMRDjW9yVOV0cvNpzDEC6nxeKiZ8UzzjHPaRDSg=</HostId></Error>
s3fs[7611]: s3fs_check_service(3707): invalid credentials - result of checking service.

Details about issue

When I create a **new** bucket and mount it with s3fs, it fails and shows "Transport endpoint is not connected".

But when I use the same command line to mount an **old** bucket, it works!

My IAM user has AmazonS3FullAccess, so I don't think the failure is caused by the bucket policy.

kerem closed this issue 2026-03-04 01:45:19 +03:00

@jancallewaert commented on GitHub (Feb 28, 2018):

We have the same issue. In fact, it does work if you wait long enough: about one hour after the creation of the bucket, you can mount the S3 bucket without a problem.


@birdhackor commented on GitHub (Feb 28, 2018):

I tried to mount the same bucket again and it succeeded.

Seems I just need to wait, thanks.


@jancallewaert commented on GitHub (Feb 28, 2018):

I don't really agree with closing this. It does not make sense that you have to wait more than one hour after the creation of a bucket to be able to mount it. We create the bucket and the EC2 instances in the same CloudFormation stack, which means this keeps failing.


@blancqua commented on GitHub (Mar 1, 2018):

Same issue here. Did Amazon change something from their side?


@H6 commented on GitHub (Mar 2, 2018):

I had to explicitly provide url and endpoint when calling s3fs to make it work, e.g. for region eu-central-1. It worked a couple of days ago, and then suddenly there were lots of problems due to the redirecting of S3 bucket domains.

s3fs [...] -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1
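
Applied to this report, the same workaround would look roughly like the following. This is only a sketch: the mount point is a placeholder, and the region ap-northeast-1 is taken from the x-amz-bucket-region headers in the log above.

```
s3fs qweqwcqqvwaav /path/to/mountpoint \
    -o url="https://s3-ap-northeast-1.amazonaws.com" \
    -o endpoint=ap-northeast-1
```

With the regional endpoint pinned, s3fs never has to follow the 307 redirect from the default us-east-1 endpoint, which is exactly the hop that fails for a freshly created bucket.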


@jochenhebbrecht commented on GitHub (Mar 2, 2018):

Hi @H6,

I was also facing this issue. Just tested your solution and it works!
Thanks for sharing!

Jochen

ps: you only need the url option, the endpoint does not influence the behaviour


@sqlbot commented on GitHub (Mar 2, 2018):

@jancallewaert by default, in DNS, *.s3.amazonaws.com points to the original us-east-1 endpoint for S3, with more specific, override records created automatically by the service for each bucket when buckets are created.

Thus, once a bucket is a few minutes old, and for the rest of its life, example-bucket.s3.amazonaws.com actually points to the correct regional endpoint. But not at first.

This is the reason for the delay when you don't explicitly specify an endpoint or url to override the default behavior... the DNS record creation isn't immediate, so when you make that first request, the old default answer is cached for a few minutes, further extending the delay.

There isn't a published expectation from AWS for how long this activity requires, but it stands to reason that it is dependent on the current level of bucket creation (and perhaps deletion) traffic. It does seem to vary.

Even Amazon's own CloudFront service is impacted by the regional rerouting provided by DNS after bucket creation. If you create a new bucket outside of us-east-1 and point a CloudFront distribution to it, for the first few minutes, your requests actually end up in us-east-1, where they get redirected by S3 to the correct regional endpoint, because CloudFront is dependent on the bucket DNS entry, and it doesn't follow the redirects, itself. It returns them to the browser, and unless the objects in the bucket allow public access, the result is AccessDenied. Then, within a few minutes, it all works as expected.
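
The region information in those redirects is easy to recover mechanically. As a small illustration (a hypothetical helper, not part of s3fs), the Location header from the 307 responses in the log above can be parsed like this:

```python
import re

def region_from_location(location):
    """Extract the region from the Location header of an S3 307 redirect,
    e.g. http://bucket.s3-ap-northeast-1.amazonaws.com/ -> 'ap-northeast-1'.
    Returns None for the legacy global form bucket.s3.amazonaws.com."""
    # Regions look like xx-word(-word...)-digit; the host uses '.s3-' or '.s3.'
    m = re.search(r"\.s3[.-]([a-z]{2}(?:-[a-z]+)+-\d)\.amazonaws\.com", location)
    return m.group(1) if m else None

print(region_from_location("http://qweqwcqqvwaav.s3-ap-northeast-1.amazonaws.com/"))
# prints: ap-northeast-1
```

A client that followed this logic could retry against the correct regional endpoint directly instead of relying on the global DNS entry having propagated.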

It has been my opinion for a while now that several capabilities intended to be helpful to users have proven not to be quite as helpful as intended, because they still leave room for unexpected and ambiguous failures. s3fs tries to make some guesses so that an imprecise configuration still works, and that guesswork is not as helpful as intended.

It is my opinion that rather than try to improve our guesswork, we should require correct configuration from the start: explicit region or endpoint and signature algorithm selection, with a hard fail and explicit diagnostic logging of what S3 returns in a partial or complete misconfiguration scenario. We should probably also provide the user with the ability to specify whether we are connecting to genuine S3 or one of several "compatible" services, and remove some of the burden inside s3fs.

S3 has evolved over the years in ways that couldn't have been anticipated when this library was first created, so I certainly don't fault the s3fs developers. Nor do I fault AWS, because they have done an amazing job of adding new features and functionality without breaking the behavior of the oldest buckets in the oldest regions -- but due to security and scalability innovations in AWS, newer regions did not retain all of the behavior of older regions. s3fs tried to evolve with it, but I would suggest that hindsight has a lot of suggestions to offer.


@jochenhebbrecht commented on GitHub (Mar 3, 2018):

Hi @sqlbot,

Thanks for the thorough explanation. However, we have been using s3fs for a long time in combination with newly created S3 buckets and we never bumped into this problem. Reading your explanation doesn't give me any hint why we are now suddenly suffering from it, so something must have changed on AWS's side.

We have a support contract with AWS and I also raised a support case on their side. However, as I expected, they're not giving us any indication of what exactly changed in their S3 service. Their statement is "we're not supporting s3fs".

Conclusion for me: always provide the url when trying to mount a bucket as a filesystem.
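
In fstab terms, that conclusion might look something like the following. This is only a sketch: the bucket name, mount point, region, and extra options are placeholders, and only the url option is the actual fix being described.

```
mybucket /mnt/s3 fuse.s3fs _netdev,allow_other,url=https://s3-eu-central-1.amazonaws.com 0 0
```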

Regards,
Jochen


@cristian100 commented on GitHub (Apr 8, 2018):

Hi, I want to confirm that once I set up the url and endpoint parameters, this issue stopped happening.

Thanks @H6 for sharing this.


@cristian100 commented on GitHub (Apr 25, 2018):

Sadly, some hours later this stopped working. Initially it did work, but not for long, so adding url and/or endpoint doesn't solve the problem.


@spolischook commented on GitHub (Sep 15, 2018):

I see the following behavior related to this issue:

  1. s3fs ...
  2. ls myfolder - I get the list of files, but folders show zero size
  3. ls any number of times and it is still the same
  4. ls of a subfolder - I get the error "Software caused connection abort"
  5. now ls of myfolder gets "Transport endpoint is not connected"
  6. umount and remount with s3fs - same behavior.

@spolischook commented on GitHub (Sep 15, 2018):

Installing V1.84 solved the issue.


@fenying commented on GitHub (Dec 27, 2018):

> Install V1.84 solved the issue

Emmm, actually not. I use v1.84 on my machine (an AWS EC2 m4.large, CentOS 7 x86_64) and it usually works well. However, it often dies with this problem.

I have to check and remount it with the following shell script, run from crontab:

#!/bin/bash

# I use mount --bind to reduce the number of s3fs processes and their memory
# usage, so before remounting the s3fs mount I have to unmount all bound dirs.
MAP_DIRS=$(awk '$1 ~ /^\/www\// {print $2}' /etc/fstab)

ROOT_DIR=/www/s3fs
echo "Detecting $ROOT_DIR..."
# On a dead mount, ls fails and prints nothing, so TEST ends up empty.
TEST=$(ls "$ROOT_DIR/" 2>/dev/null)
if [ "$TEST" = "" ]; then
    echo "ERROR, Auto remounting..."

    for i in $MAP_DIRS
    do
        umount -f "$i"
    done

    umount -f "$ROOT_DIR"
    mount -a
else
    echo "OK"
fi
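
For completeness, the crontab entry driving a script like that might look as follows; the script path, interval, and log file are placeholders, not values from the original comment.

```
*/5 * * * * /usr/local/bin/check_s3fs.sh >>/var/log/s3fs-check.log 2>&1
```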

@gaul commented on GitHub (Dec 27, 2018):

Please run s3fs with: -d -d -f -o f2 -o curldbg which may reveal additional context. This is likely a separate issue from the original report so please open a new issue unless this is related to new buckets.


@gaul commented on GitHub (Apr 9, 2019):

Closing due to inactivity. Please reopen if symptoms persist.


@itsjwala commented on GitHub (Aug 4, 2019):

I have faced the same problem on v1.85; I am using nohup with s3fs.


@angristan commented on GitHub (Apr 24, 2020):

I had the same issue, @H6's comment fixed it.


@polar commented on GitHub (Sep 1, 2020):

After running with -d -d -f -o f2 -o curldbg

terminate called after throwing an instance of 'std::invalid_argument'
what(): s3fs_strtoofft
Aborted
[ec2-user@ip-*************~]$ s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.


@gaul commented on GitHub (Sep 2, 2020):

Fixed by #1285. Please test with 1.87.


@mmmmxa commented on GitHub (Oct 11, 2022):

Had the same problem on 1.85 when using s3fs in Slurm's prolog script to mount buckets on a pcluster-allocated node. H6's fix didn't work for me. For some reason, adding sleep 5 to the end of the script did the trick. I suppose this is somehow caused by s3fs creating child processes that are terminated when the script exits, or something like that. Seems like an option for someone who doesn't want to upgrade the tool.


@quanltsimple commented on GitHub (Feb 8, 2023):

I have faced the same problem on version 1.91, it happens every few days.


@gaul commented on GitHub (Feb 8, 2023):

I can't even fathom all the problems you will have using the 4-year-old 1.85. If you have this transport endpoint not connected error with 1.91 or master, try attaching `gdb` and sharing a backtrace. Please open a separate issue for this.
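A minimal way to capture such a backtrace (assuming `gdb` is installed and s3fs was built with debug symbols) might be:

```shell
# Attach non-interactively to the running s3fs process, dump
# backtraces for all threads, then detach and exit.
sudo gdb -p "$(pidof s3fs)" -batch -ex "thread apply all bt"
```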


@quanltsimple commented on GitHub (Feb 14, 2023):

It sounds silly, but creating the instance with DNS hostnames enabled took care of this for me.
I don't understand why leaving DNS hostnames disabled in the VPC causes this error in s3fs.
Anyway, everything is working perfectly now.
I have been following up for a week now, and the error has not recurred.
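For anyone hitting the same thing, DNS hostname support can also be enabled on an existing VPC with the AWS CLI (the VPC ID below is a placeholder); note that, as in this report, instances may need to be recreated afterwards:

```shell
# Enable DNS resolution and DNS hostnames on the VPC
# (vpc-0123456789abcdef0 is a placeholder). Both attributes must be
# enabled for instances to receive DNS names.
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-support '{"Value": true}'
aws ec2 modify-vpc-attribute --vpc-id vpc-0123456789abcdef0 \
    --enable-dns-hostnames '{"Value": true}'
```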


@ggtakec commented on GitHub (Feb 19, 2023):

@quanltsimple
If the problem still occurs, try specifying the curldbg option when starting s3fs and collecting the log from that run.
(Be aware that the log output can grow very large.)
s3fs uses libcurl for its communication, so if there is a problem involving DNS and hostnames, I think the logs output by curl will be useful.
It may also depend on the curl and TLS libraries you are using (OpenSSL, NSS, GnuTLS, etc.) and their environment.
If you still can't solve it, try checking the logs.
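For example (bucket and mount point below are placeholders), s3fs can be run in the foreground with info-level s3fs logging and libcurl debug output, redirecting everything to a file:

```shell
# -f keeps s3fs in the foreground so its messages go to stderr;
# dbglevel=info raises the s3fs log level and curldbg enables
# libcurl's verbose protocol trace.
s3fs mybucket /mnt/s3 -f -o dbglevel=info -o curldbg 2> /tmp/s3fs-debug.log
```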


@quanltsimple commented on GitHub (Feb 19, 2023):

@ggtakec -san, thank you for your response ❤️,
In my case, it was completely resolved after I enabled DNS hostnames in VPC.


@ggtakec commented on GitHub (Feb 19, 2023):

@quanltsimple
If there are other causes, please post a new issue and let us know.
Thanks.


@dbartenstein commented on GitHub (May 30, 2023):

> It sounds silly but actually creating the instance with the DNS hostnames included took care of this for me. I don't understand why not enabling DNS hostnames in VPC causing this error in s3fs. Anyway, everything is working perfectly now. I have been following up for a week now, and the error is not repeated.

@quanltsimple did it also happen to you that a mounted s3fs bucket suddenly was not reachable anymore? Our bucket has been running for two weeks after we had to re-mount it. How exactly did you solve the issue on your side?


@quanltsimple commented on GitHub (May 30, 2023):

> > It sounds silly but actually creating the instance with the DNS hostnames included took care of this for me. I don't understand why not enabling DNS hostnames in VPC causing this error in s3fs. Anyway, everything is working perfectly now. I have been following up for a week now, and the error is not repeated.
>
> @quanltsimple did it also happen to you that a mounted s3fs bucket suddenly was not reachable anymore? Our bucket has been running for two weeks after we had to re-mount it. How exactly did you solve the issue on your side?

Hi @dbartenstein, does your instance have a DNS name? If not, enable it in the VPC settings and recreate your instance.
That's how I handled it; it has worked fine so far.
Hope this helps.

![DNS_Enabled](https://github.com/s3fs-fuse/s3fs-fuse/assets/44834926/467e08ca-8b3a-4f88-a030-9bf34c54efc9)