mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #602] Doesn't connect #342
Originally created by @blakemcbride on GitHub (May 16, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/602
Additional Information
The following information is very important in helping us help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Version of s3fs being used (s3fs --version)
1.82
Version of fuse being used (pkg-config --modversion fuse)
2.9.4
System information (uname -a)
Linux webserver 4.4.0-1016-aws #25-Ubuntu SMP Thu Apr 20 11:34:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Distro (cat /etc/issue)
Ubuntu 16.04.2 LTS \n \l
s3fs command line used (if applicable)
root@webserver:~# s3fs backups.arahant.com /mnt/S3 -o passwd_file=key,sigv2,-d,-d,-f
/etc/fstab entry (if applicable):
None
s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
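As the template notes, debug output can be enabled on the command line; a minimal sketch (bucket name and mount point here are placeholders, not from the thread):

```shell
# Run s3fs in the foreground (-f) with s3fs-level (dbglevel) and
# libcurl-level (curldbg) debug messages enabled.
s3fs mybucket /mnt/point -o passwd_file=key -o dbglevel=info -f -o curldbg
```

With -f the messages go to stderr instead of syslog, which makes them easier to capture for a bug report.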
@gaul commented on GitHub (May 16, 2017):
Can you run again with the -o curldbg flag and share the output?
@ggtakec commented on GitHub (May 16, 2017):
@blakemcbride
It seems to be an SSL peer certificate or remote key issue that occurred when s3fs started up and checked the bucket.
Could you check your machine's certificate file and SSL connection?
@blakemcbride commented on GitHub (May 16, 2017):
s3fs backups.arahant.com /mnt/S3 -o passwd_file=key,curldbg
displays nothing but /var/log/syslog has:
May 16 11:44:53 localhost s3fs[7501]: s3fs.cpp:s3fs_init(3378): init v1.82(commit:ae4caa9) with OpenSSL
May 16 11:44:53 localhost s3fs[7501]: * Trying 52.92.78.7...
May 16 11:44:53 localhost s3fs[7501]: * Connected to backups.arahant.com.s3.amazonaws.com (52.92.78.7) port 443 (#0)
May 16 11:44:53 localhost s3fs[7501]: * found 173 certificates in /etc/ssl/certs/ca-certificates.crt
May 16 11:44:53 localhost s3fs[7501]: * found 694 certificates in /etc/ssl/certs
May 16 11:44:53 localhost s3fs[7501]: * ALPN, offering http/1.1
May 16 11:44:53 localhost s3fs[7501]: * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
May 16 11:44:53 localhost s3fs[7501]: * server certificate verification OK
May 16 11:44:53 localhost s3fs[7501]: * server certificate status verification SKIPPED
May 16 11:44:53 localhost s3fs[7501]: * SSL: certificate subject name (*.s3.amazonaws.com) does not match target host name 'backups.arahant.com.s3.amazonaws.com'
May 16 11:44:53 localhost s3fs[7501]: * Closing connection 0
May 16 11:44:53 localhost s3fs[7501]: s3fs.cpp:s3fs_check_service(3820): unable to connect(host=https://s3.amazonaws.com) - result of checking service.
@blakemcbride commented on GitHub (May 16, 2017):
I don't know what "Could you check your box and cert file/and ssl connection?" means.
Thanks for the help!
Blake
@ggtakec commented on GitHub (May 16, 2017):
Probably the error "SSL peer certificate or SSH remote key was not OK" means:
This problem may be caused by the wildcard SSL certificate (*.s3.amazonaws.com) not being valid for a multi-level FQDN (backups.arahant.com.s3.amazonaws.com).
If anyone can confirm whether the S3 wildcard certificate is valid for such hierarchical FQDNs, I would like to know that as well.
@ggtakec commented on GitHub (May 16, 2017):
I'm sorry; my last comment may not describe the actual cause.
Do you use CloudFront?
I am not familiar with CloudFront (for CNAME/redirect), but I found the following web site:
https://simonecarletti.com/blog/2016/08/redirect-domain-https-amazon-cloudfront/
Please see: http://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/WebsiteEndpoints.html
I think that s3fs tries to connect to "backups.arahant.com.s3.amazonaws.com", which is not an endpoint for the S3 REST API.
It may be wrong, but please check it.
Regards,
@blakemcbride commented on GitHub (May 17, 2017):
I am not using CloudFront at all. I think I really have a simple setup. I created an S3 bucket with the defaults. My EC2 instances are just straight Linux boxes. I installed my own SSH keys in ~/.ssh, but besides that everything should be vanilla. I haven't knowingly messed with any permissions.
@blakemcbride commented on GitHub (May 17, 2017):
I am pretty sure my credentials are correct because I get no error message; if I try bad credentials, I do get an error message. I am really lost with this. Are others really using the latest version that I am trying?
Thanks for all the help!
Blake
@blakemcbride commented on GitHub (May 17, 2017):
Perhaps this is the issue: my EC2 instances go through a security group that only allows SSH and HTTP externally. Could this be the problem? Do I need to punch a hole through another port?
@sqlbot commented on GitHub (May 17, 2017):
This is caused by dots in the bucket name.
Confirming what @ggtakec suggested:
This problem may be caused by the wildcard SSL certificate (*.s3.amazonaws.com) not being valid for a multi-level FQDN (backups.arahant.com.s3.amazonaws.com).
This is exactly the problem. Wildcard certificates are never valid for more than one level of hostname.
From RFC-6125:
If anyone can confirm whether the S3 wildcard certificate is valid for such hierarchical FQDNs, I would like to know that as well.
It is not. The S3 wildcard cert supports single-level labels, without dots. To use SSL (TLS) with a bucket, you either need to have a bucket with no dots in the hostname, or use path-style requests, which puts the bucket name in the path portion of the URL rather than in the hostname.
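The single-label rule can be sketched with a small shell function. This is a hypothetical illustration of RFC 6125 wildcard matching, not code from s3fs or curl:

```shell
# Hypothetical helper illustrating single-label wildcard matching for the
# certificate *.s3.amazonaws.com: the "*" may cover exactly one DNS label,
# so it must never match a name containing an extra dot before the base.
matches_wildcard() {
  case "$1" in
    *.*.s3.amazonaws.com) echo "no match" ;;  # two or more labels before the base
    *.s3.amazonaws.com)   echo "match" ;;     # exactly one label before the base
    *)                    echo "no match" ;;
  esac
}

matches_wildcard mybucket.s3.amazonaws.com             # match
matches_wildcard backups.arahant.com.s3.amazonaws.com  # no match
```

A dot-free bucket name produces a single extra label and matches; a dotted bucket name produces several labels, so the wildcard certificate is rejected, which is exactly the verification failure in the syslog output above.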
The solution should be -o use_path_request_style ... and if it isn't, then we need to understand why it isn't. You should also specify the region (e.g. -o endpoint=us-west-2) if the bucket isn't in the default us-east-1 region.
@blakemcbride commented on GitHub (May 17, 2017):
Thanks a lot! That worked!! I created another bucket without dots in the name and it worked first try. Wow! This should definitely be in the FAQ! In fact, if there are dots in the name, I think s3fs should issue a warning or error message.
Thanks a lot to all for all the great help!!
Blake
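For reference, the path-style workaround discussed above would look something like this for the original command in the thread (the passwd file name is taken from that command; the region is an assumption):

```shell
# Sketch of the path-style workaround. With use_path_request_style the
# bucket name goes into the URL path instead of the hostname, so the
# wildcard certificate only has to match s3.amazonaws.com itself.
s3fs backups.arahant.com /mnt/S3 \
  -o passwd_file=key \
  -o use_path_request_style \
  -o endpoint=us-east-1   # adjust if the bucket is in another region
```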
@ggtakec commented on GitHub (May 17, 2017):
@sqlbot Thanks for your great help.
@blakemcbride I am glad that the problem was solved.
And I will add comment in wiki page about this.
Regards,
@ggtakec commented on GitHub (May 17, 2017):
Added FAQ - https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ#q-https-connecting-failed-if-bucket-name-includes-dod
Please let me know if the contents are wrong.
@mibstar commented on GitHub (Nov 21, 2017):
I've encountered this same issue on my Mac, specifically:
MacOS High Sierra 10.13.1
Amazon Simple Storage Service File System V1.82
My bucket names have dots, but no files get listed when I run the following:
$ s3fs s3.local.example.org s3 -o passwd_file=.passwd-s3fs -f -o curldbg -o use_path_request_style -o endpoint=eu-west-1
I could rename my testing bucket but it's going to be an issue further down the line.
@RML-Admin commented on GitHub (Apr 1, 2018):
I am running into a problem with buckets with dots in their names.
I can see <mount_dir> in df -h.
But, I don't see anything in mount_dir when I ls <mount_dir>. When I cd <mount_dir> and ls, I see the following error message:
s3fs version is 1.83
@lakamsani commented on GitHub (Jul 3, 2018):
This worked today on an Amazon Linux instance:
s3fs first.second.co:/directory $HOME/mountdir -o iam_role -o endpoint=us-west-2 -o use_path_request_style
with V1.83(commit:0f503ce).
@exNewbie commented on GitHub (Oct 26, 2018):
I have the same issue. My bucket has dots and web hosting enabled. I tried
s3fs sub.domain.com:/ folder -o endpoint=ap-southeast-2 -o use_path_request_style
and got
Oct 26 10:53:08 localhost s3fs[8377]: s3fs.cpp:s3fs_check_service(3791): unable to connect(host=https://s3.amazonaws.com) - result of checking service.
I'm using the latest version:
Amazon Simple Storage Service File System V1.84(commit:8929a27) with OpenSSL
Updated: referring to https://github.com/s3fs-fuse/s3fs-fuse/issues/465#issuecomment-369907790, adding -o url=http://s3-ap-southeast-2.amazonaws.com helped. -o endpoint=ap-southeast-2 might not be needed anymore.
@RML-Admin commented on GitHub (Oct 26, 2018):
@exNewbie Try
s3fs -o url=https://s3-<region>.amazonaws.com -o use_path_request_style <bucket> <mount_dir> -o passwd_file=<password file>
This worked for me with buckets with dots in their names.
@exNewbie commented on GitHub (Oct 26, 2018):
thanks @adavanisanti . I updated my comment not long after the original one 👍
@vadimeremeev commented on GitHub (Jan 16, 2019):
Any idea on how to put these parameters into /etc/fstab? Thanks
@exNewbie commented on GitHub (Jan 16, 2019):
@vadimeremeev I'm afraid it is not possible. s3fs is not an actual filesystem type, and fstab is where you specify mounts of filesystems.
You might consider using rc.local to run the s3fs command at startup.
@juliogonzalez commented on GitHub (Jan 16, 2019):
@exNewbie s3fs uses fuse, which means it can be used as a filesystem.
@vadimeremeev, an example of how to use /etc/fstab is in the main README.md file.
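Following up on the README pointer, an /etc/fstab entry combining the options from this thread might look like this (bucket name, mount point, region, and passwd file path are placeholders, not values from the thread):

```
# Hypothetical /etc/fstab entry for an s3fs mount of a dotted bucket name.
# _netdev delays the mount until the network is up.
s3fs#my.dotted.bucket /mnt/S3 fuse _netdev,allow_other,use_path_request_style,url=https://s3-us-west-2.amazonaws.com,passwd_file=/etc/passwd-s3fs 0 0
```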