mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #725] Transport endpoint is not connected #412
Originally created by @birdhackor on GitHub (Feb 26, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/725
Additional Information
The following information is very important in order to help us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Version of s3fs being used (s3fs --version)
Amazon Simple Storage Service File System V1.79(commit:unknown) with GnuTLS(gcrypt)
Version of fuse being used (pkg-config --modversion fuse)
fusermount version: 2.9.4
System information (uname -r)
4.4.0-116-generic
Distro (cat /etc/issue)
Ubuntu 16.04.3 LTS \n \l
s3fs command line used (if applicable)
/etc/fstab entry (if applicable):
s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
Details about issue
When I create a new bucket and mount it with s3fs, it fails with "Transport endpoint is not connected".
But when I use the same command line to mount an old bucket, it works!!
My IAM user uses AmazonS3FullAccess, so I don't think the failure is caused by the bucket policy.
@jancallewaert commented on GitHub (Feb 28, 2018):
We have the same issue. In fact, it does work if you wait long enough. After about one hour after the creation of the bucket, you can mount the s3 bucket without a problem.
@birdhackor commented on GitHub (Feb 28, 2018):
I tried to mount the same bucket again and it succeeded.
It seems I just needed to wait, thanks.
@jancallewaert commented on GitHub (Feb 28, 2018):
I am not really agreeing with closing this. It does not make sense that you have to wait more than one hour after the creation of the bucket to be able to mount it. We create the bucket and the ec2 instances in the same cloudformation stack, which means this keeps failing.
@blancqua commented on GitHub (Mar 1, 2018):
Same issue here. Did Amazon change something from their side?
@H6 commented on GitHub (Mar 2, 2018):
I had to explicitly provide `url` and `endpoint` to make it work when calling `s3fs`, e.g. for region `eu-central-1`. It worked a couple of days ago and then suddenly there were lots of problems due to redirecting S3 bucket domains:

s3fs [...] -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1

@jochenhebbrecht commented on GitHub (Mar 2, 2018):
Hi @H6,
I was also facing this issue. Just tested your solution and it works!
Thanks for sharing!
Jochen
ps: you only need the `url` option, the `endpoint` option does not influence the behaviour.

@sqlbot commented on GitHub (Mar 2, 2018):
@jancallewaert by default, in DNS, `*.s3.amazonaws.com` points to the original us-east-1 endpoint for S3, with more specific override records created automatically by the service for each bucket when buckets are created.

Thus, once a bucket is a few minutes old, and for the rest of its life, `example-bucket.s3.amazonaws.com` actually points to the correct regional endpoint. But not at first. This is the reason for the delay when you don't explicitly specify an endpoint or url to override the default behavior... the DNS record creation isn't immediate, so when you make that first request, the old default answer is cached for a few minutes, further extending the delay.
There isn't a published expectation from AWS for how long this activity requires, but it stands to reason that it is dependent on the current level of bucket creation (and perhaps deletion) traffic. It does seem to vary.
Even Amazon's own CloudFront service is impacted by the regional rerouting provided by DNS after bucket creation. If you create a new bucket outside of us-east-1 and point a CloudFront distribution at it, for the first few minutes your requests actually end up in us-east-1, where they get redirected by S3 to the correct regional endpoint, because CloudFront depends on the bucket DNS entry and does not follow the redirects itself. It returns them to the browser, and unless the objects in the bucket allow public access, the result is `AccessDenied`. Then, within a few minutes, it all works as expected.

It has been my opinion for a while now that several capabilities intended to be helpful to users have proven not to be quite as helpful as intended, because they still leave room for unexpected and ambiguous failures. s3fs tries to make some guesses so that an imprecise configuration still works, which is not as helpful as intended.
It is my opinion that rather than try to improve our guesswork, we should require correct configuration from the start: explicit region or endpoint and signature algorithm selection, with a hard fail and explicit diagnostic logging of what S3 returns in a partial or complete misconfiguration scenario. We should probably also provide the user with the ability to specify whether we are connecting to genuine S3 or one of several "compatible" services, and remove some of the burden inside s3fs.
S3 has evolved over the years in ways that couldn't have been anticipated when this library was first created, so I certainly don't fault the s3fs developers. Nor do I fault AWS, because they have done an amazing job of adding new features and functionality without breaking the behavior of the oldest buckets in the oldest regions -- but due to security and scalability innovations in AWS, newer regions did not retain all of the behavior of older regions. s3fs tried to evolve with it, but I would suggest that hindsight has a lot of suggestions to offer.
@jochenhebbrecht commented on GitHub (Mar 3, 2018):
Hi @sqlbot,
Thanks for the thorough explanation. However, we were using s3fs for a long time in combination with newly created S3 buckets and we never bumped into this problem. Reading your explanation doesn't give me any hint why we are now suddenly suffering from this problem. So there must be something different on AWS side.
We have a support contract with AWS and I also raised a support case on their side. However, as I was expecting, they're not giving us any indication what exactly changed on their S3 service. Their statement is 'we're not supporting s3fs'.
Conclusion for me: always provide the `url` option when trying to mount a bucket as a filesystem.

Regards,
Jochen
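A minimal sketch of this workaround, assuming a placeholder bucket `mybucket`, mountpoint `/mnt/mybucket`, and region `eu-central-1` (adjust the regional URL to your bucket's region):

```shell
# One-off mount with an explicit regional endpoint:
s3fs mybucket /mnt/mybucket -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1

# Equivalent /etc/fstab entry:
# mybucket /mnt/mybucket fuse.s3fs _netdev,url=https://s3-eu-central-1.amazonaws.com,endpoint=eu-central-1 0 0
```

With the regional URL given explicitly, s3fs never relies on the global `*.s3.amazonaws.com` DNS record, so the propagation delay after bucket creation no longer matters.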
@cristian100 commented on GitHub (Apr 8, 2018):
Hi, I want to confirm that once I setup url and endpoint parameters this issue stopped happening.
Thanks @H6 for sharing this.
@cristian100 commented on GitHub (Apr 25, 2018):
Sadly, some hours later this stopped working. Initially it did work, but not for long, so adding `url` and/or `endpoint` doesn't solve the problem.
@spolischook commented on GitHub (Sep 15, 2018):
I see the following behavior related to this issue:
@spolischook commented on GitHub (Sep 15, 2018):
Installing V1.84 solved the issue.
@fenying commented on GitHub (Dec 27, 2018):
Hmm, actually not. I use v1.84 on my machine (AWS EC2 m4.large, CentOS 7 x86_64) and it usually works well. However, it often dies with this problem.
I have to check and remount it with the following shell script, run from crontab:
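As an illustration of such a crontab check, here is a minimal sketch; the mountpoint, bucket name, regional URL, and remount options below are placeholder assumptions to adapt to your setup:

```shell
#!/bin/sh
# Hypothetical s3fs health check for cron; paths and bucket are placeholders.
MOUNTPOINT="/mnt/mybucket"
BUCKET="mybucket"

# An active s3fs mount appears in /proc/mounts with filesystem type fuse.s3fs.
is_mounted() {
    grep -qs " $1 fuse.s3fs " /proc/mounts
}

if [ -d "$MOUNTPOINT" ] && ! is_mounted "$MOUNTPOINT"; then
    # Clear any stale "Transport endpoint is not connected" handle, then remount.
    fusermount -u -z "$MOUNTPOINT" 2>/dev/null
    s3fs "$BUCKET" "$MOUNTPOINT" -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1
fi
```

Run it every few minutes, e.g. with a crontab entry like `*/5 * * * * /usr/local/bin/check_s3fs.sh`.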
@gaul commented on GitHub (Dec 27, 2018):
Please run s3fs with:
`-d -d -f -o f2 -o curldbg`

which may reveal additional context. This is likely a separate issue from the original report, so please open a new issue unless it is related to new buckets.

@gaul commented on GitHub (Apr 9, 2019):
Closing due to inactivity. Please reopen if symptoms persist.
@itsjwala commented on GitHub (Aug 4, 2019):
I have faced the same problem on v1.85, I am using nohup with s3fs.
@angristan commented on GitHub (Apr 24, 2020):
I had the same issue, @H6's comment fixed it.
@polar commented on GitHub (Sep 1, 2020):
After running with `-d -d -f -o f2 -o curldbg`:

terminate called after throwing an instance of 'std::invalid_argument'
what(): s3fs_strtoofft
Aborted
[ec2-user@ip-*************~]$ s3fs --version
Amazon Simple Storage Service File System V1.86 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
@gaul commented on GitHub (Sep 2, 2020):
Fixed by #1285. Please test with 1.87.
@mmmmxa commented on GitHub (Oct 11, 2022):
Had the same problem on 1.85 when using s3fs in Slurm's prolog script to mount buckets on a pcluster-allocated node. H6's fix didn't work for me. For some reason, adding `sleep 5` to the end of the script did the trick. I suppose this is somehow caused by s3fs creating child processes that are terminated when the script exits, or something like that. It seems like an option for someone who doesn't want to upgrade the tool.

@quanltsimple commented on GitHub (Feb 8, 2023):
I have faced the same problem on version 1.91, it happens every few days.
@gaul commented on GitHub (Feb 8, 2023):
I can't even fathom all the problems you will have using the 4-year-old 1.85. If you have this transport endpoint not connected error with 1.91 or master, try attaching `gdb` and sharing a backtrace. Please open a separate issue for this.

@quanltsimple commented on GitHub (Feb 14, 2023):
It sounds silly but actually creating the instance with the DNS hostnames included took care of this for me.
I don't understand why not enabling DNS hostnames in the VPC causes this error in s3fs.
Anyway, everything is working perfectly now.
I have been following up for a week now, and the error has not recurred.
@ggtakec commented on GitHub (Feb 19, 2023):
@quanltsimple
If the problem still occurs, try specifying the curldbg option when starting s3fs and capturing the log at that time.
(However, note that the log output can grow very large.)
s3fs uses libcurl for its communication, so if there is a problem affecting DNS and hostnames, the logs output by curl should be useful.
It may also depend on the curl and TLS libraries you are using (OpenSSL, NSS, GnuTLS, etc.) and their environment.
If you still can't solve it, try checking the logs.
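For reference, a debug invocation along these lines (bucket name, mountpoint, and region are placeholders) keeps s3fs in the foreground and captures both s3fs and libcurl output:

```shell
# Foreground run with verbose s3fs and libcurl logging; messages go to stderr.
s3fs mybucket /mnt/mybucket -f -o dbglevel=info -o curldbg \
    -o url="https://s3-eu-central-1.amazonaws.com" -o endpoint=eu-central-1 2> s3fs-debug.log
```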
@quanltsimple commented on GitHub (Feb 19, 2023):
@ggtakec -san, thank you for your response ❤️,
In my case, it was completely resolved after I enabled DNS hostnames in VPC.
@ggtakec commented on GitHub (Feb 19, 2023):
@quanltsimple
If there are other causes, please post a new issue and let us know.
Thanks.
@dbartenstein commented on GitHub (May 30, 2023):
@quanltsimple did it also happen to you that a mounted s3fs bucket suddenly was not reachable anymore? Our bucket has been running for two weeks after we had to re-mount it. How exactly did you solve the issue on your side?
@quanltsimple commented on GitHub (May 30, 2023):
Hi @dbartenstein, does your instance have a DNS name? If not, enable DNS hostnames in the VPC settings and recreate your instance.
That's how I handled it, and it has worked fine so far.
Hope this helps.