[GH-ISSUE #602] Doesn't connect #342

Closed
opened 2026-03-04 01:44:32 +03:00 by kerem · 22 comments

Originally created by @blakemcbride on GitHub (May 16, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/602

Additional Information

The following information is very important in order to help us help you. Omission of these details may delay your support request or cause it to receive no attention at all.

  • Version of s3fs being used (s3fs --version)
    1.82

  • Version of fuse being used (pkg-config --modversion fuse)
    2.9.4

  • System information (uname -a)
    Linux webserver 4.4.0-1016-aws #25-Ubuntu SMP Thu Apr 20 11:34:35 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

  • Distro (cat /etc/issue)
    Ubuntu 16.04.2 LTS \n \l

  • s3fs command line used (if applicable)
    root@webserver:~# s3fs backups.arahant.com /mnt/S3 -o passwd_file=key,sigv2,-d,-d,-f

  • /etc/fstab entry (if applicable):
    None

  • s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)
If you execute s3fs with the dbglevel or curldbg options, you can get detailed debug messages.

May 16 02:21:09 localhost, s3fs[1987]: s3fs.cpp:set_s3fs_log_level(257): change debug level from [CRT] to [INF]
May 16 02:21:09 localhost, s3fs[1987]:     PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
May 16 02:21:09 localhost, s3fs[1987]: s3fs.cpp:s3fs_init(3378): init v1.82(commit:ae4caa9) with OpenSSL
May 16 02:21:09 localhost, s3fs[1987]: check services.
May 16 02:21:09 localhost, s3fs[1987]:       check a bucket.
May 16 02:21:09 localhost, s3fs[1987]:       URL is https://s3.amazonaws.com/backups.arahant.com/
May 16 02:21:09 localhost, s3fs[1987]:       URL changed is https://backups.arahant.com.s3.amazonaws.com/
May 16 02:21:10 localhost, s3fs[1987]: curl.cpp:RequestPerform(2078): ###curlCode: 51  msg: SSL peer certificate or SSH remote key was not OK
May 16 02:21:10 localhost, s3fs[1987]: curl.cpp:CheckBucket(2953): Check bucket failed, S3 response:
May 16 02:21:10 localhost, s3fs[1987]: s3fs.cpp:s3fs_check_service(3820): unable to connect(host=https://s3.amazonaws.com) - result of checking service.
May 16 02:21:10 localhost, s3fs[1987]: s3fs.cpp:s3fs_exit_fuseloop(3368): Exiting FUSE event loop due to errors
May 16 02:21:10 localhost, s3fs[1987]: destroy
May 16 02:21:10 localhost, s3fs[1987]: s3fs.cpp:s3fs_destroy(3445): Could not release curl library.

#### Details about issue

When I try to connect, it just returns and it's not connected. I did install my own ssh keys for my own use. I see the message about the remote key not being good, but I don't know how to fix it. Unfortunately, I didn't save the ssh keys that were there before I changed them to suit me. This is what I get:

root@webserver:~# s3fs backups.arahant.com /mnt/S3 -o passwd_file=key,sigv2,-d,-d,-f
FUSE library version: 2.9.4
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.23
flags=0x0003fffb
max_readahead=0x00020000
   INIT: 7.19
   flags=0x00000011
   max_readahead=0x00020000
   max_write=0x00020000
   max_background=0
   congestion_threshold=0
   unique: 1, success, outsize: 40

Of course:
root@webserver:~# ls -al /mnt/S3
total 8
drwxr-xr-x 2 root root 4096 May 14 13:18 .
drwxr-xr-x 3 root root 4096 May 14 11:38 ..

Thanks for the help!

Blake
kerem closed this issue 2026-03-04 01:44:33 +03:00

@gaul commented on GitHub (May 16, 2017):

Can you run again with the -o curldbg flag and share the output?

@ggtakec commented on GitHub (May 16, 2017):

@blakemcbride
It seems to be an SSL peer certificate or remote key issue, occurring when s3fs started up and checked the bucket.

May 16 02:21:10 localhost, s3fs[1987]: curl.cpp:RequestPerform(2078): ###curlCode: 51  msg: SSL peer certificate or SSH remote key was not OK

Could you check the cert file and SSL connection on your box?

@blakemcbride commented on GitHub (May 16, 2017):

s3fs backups.arahant.com /mnt/S3 -o passwd_file=key,curldbg

displays nothing but /var/log/syslog has:

May 16 11:44:53 localhost, s3fs[7501]: s3fs.cpp:s3fs_init(3378): init v1.82(commit:ae4caa9) with OpenSSL
May 16 11:44:53 localhost, s3fs[7501]: * Trying 52.92.78.7...
May 16 11:44:53 localhost, s3fs[7501]: * Connected to backups.arahant.com.s3.amazonaws.com (52.92.78.7) port 443 (#0)
May 16 11:44:53 localhost, s3fs[7501]: * found 173 certificates in /etc/ssl/certs/ca-certificates.crt
May 16 11:44:53 localhost, s3fs[7501]: * found 694 certificates in /etc/ssl/certs
May 16 11:44:53 localhost, s3fs[7501]: * ALPN, offering http/1.1
May 16 11:44:53 localhost, s3fs[7501]: * SSL connection using TLS1.2 / ECDHE_RSA_AES_128_GCM_SHA256
May 16 11:44:53 localhost, s3fs[7501]: * server certificate verification OK
May 16 11:44:53 localhost, s3fs[7501]: * server certificate status verification SKIPPED
May 16 11:44:53 localhost, s3fs[7501]: * SSL: certificate subject name (*.s3.amazonaws.com) does not match target host name 'backups.arahant.com.s3.amazonaws.com'
May 16 11:44:53 localhost, s3fs[7501]: * Closing connection 0
May 16 11:44:53 localhost, s3fs[7501]: s3fs.cpp:s3fs_check_service(3820): unable to connect(host=https://s3.amazonaws.com) - result of checking service.

@blakemcbride commented on GitHub (May 16, 2017):

I don't know what "Could you check the cert file and SSL connection on your box?" means.

Thanks for the help!

Blake

@ggtakec commented on GitHub (May 16, 2017):

Probably the error "SSL peer certificate or SSH remote key was not OK" means one of the following:

  • The X.509 SSL server certificate sent by the server is invalid.
  • The SSL peer certificate error occurs when validation of the trust chain (not the actual certificate) fails.

Could this problem be caused by the wildcard SSL certificate (*.s3.amazonaws.com) not being valid for the multi-level FQDN (backups.arahant.com.s3.amazonaws.com)?
If anyone can confirm whether the S3 wildcard certificate covers multi-level FQDNs, I would like to know that as well.
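
One way to check which names the certificate actually covers (a diagnostic sketch using openssl; the hostname comes from the log above):

    # Print the subject of the certificate served for the bucket's
    # virtual-hosted hostname. A wildcard like *.s3.amazonaws.com matches
    # only one label, so it cannot cover backups.arahant.com.s3.amazonaws.com.
    openssl s_client -connect backups.arahant.com.s3.amazonaws.com:443 \
        -servername backups.arahant.com.s3.amazonaws.com </dev/null 2>/dev/null \
      | openssl x509 -noout -subject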

@ggtakec commented on GitHub (May 16, 2017):

I'm sorry, my last comment may not be the cause.
Do you use CloudFront?
I am not familiar with CloudFront (for CNAME/redirect), but I found the following web site, which says:
https://simonecarletti.com/blog/2016/08/redirect-domain-https-amazon-cloudfront/

Make sure to use the web site endpoint and NOT the REST endpoint, since the redirect feature is only available in the web site endpoint as explained in the Amazon documentation. Don't use the endpoint auto-suggested by CloudFront.

Please see: http://docs.aws.amazon.com/ja_jp/AmazonS3/latest/dev/WebsiteEndpoints.html

I think s3fs tries to connect to "backups.arahant.com.s3.amazonaws.com", which is not an endpoint for the S3 REST API.
I may be wrong, but please check it.
Regards,

@blakemcbride commented on GitHub (May 17, 2017):

I am not using CloudFront at all. I think I really have a simple setup. I created an S3 bucket with the defaults. My EC2 instances are just straight Linux boxes. I installed my own SSH keys in ~/.ssh, but besides that everything should be vanilla. I haven't knowingly messed with any permissions.

@blakemcbride commented on GitHub (May 17, 2017):

I am pretty sure my credentials are correct because I get no error message; if I try bad credentials, I do get an error message. I am really lost with this. Are others really using the latest version, as I am?

Thanks for all the help!

Blake

@blakemcbride commented on GitHub (May 17, 2017):

Perhaps this is an issue. I have my EC2 instances going through a security group that only allows SSH & HTTP externally. Could this be the problem? Do I need to punch a hole for another port?
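
A quick way to test whether outbound HTTPS (port 443) is allowed from the instance (a diagnostic sketch, not s3fs-specific):

    # If this prints an HTTP status line, outbound port 443 to S3 is open;
    # s3fs talks to S3 over HTTPS by default, so 443 must be reachable.
    curl -sI https://s3.amazonaws.com/ | head -n 1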

@sqlbot commented on GitHub (May 17, 2017):

This is caused by dots in the bucket name.

Confirming what @ggtakec suggested:

Could this problem be caused by the wildcard SSL certificate (*.s3.amazonaws.com) not being valid for the multi-level FQDN (backups.arahant.com.s3.amazonaws.com)?

This is exactly the problem. Wildcard certificates are never valid for more than one level of hostname.
From RFC-6125:

The '*' (ASCII 42) wildcard character is allowed in subjectAltName
values of type dNSName, and then only as the left-most (least
significant) DNS label in that value. This wildcard matches any
left-most DNS label in the server name. That is, the subject
*.example.com matches the server names a.example.com and
b.example.com, but does not match example.com or a.b.example.com.

If anyone can confirm whether the S3 wildcard certificate covers multi-level FQDNs, I would like to know that as well.

It is not. The S3 wildcard cert supports single-level labels, without dots. To use SSL (TLS) with a bucket, you either need to have a bucket with no dots in the hostname, or use path-style requests, which puts the bucket name in the path portion of the URL rather than in the hostname.

The solution should be -o use_path_request_style... and if it isn't, then we need to understand why it isn't. You should also specify the region (e.g. -o endpoint=us-west-2) if the bucket isn't in the default us-east-1 region.
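
Putting that together for the bucket in this issue, the mount command would look something like this (a sketch built from options already mentioned in this thread):

    # Path-style requests keep the bucket name out of the TLS hostname,
    # so the *.s3.amazonaws.com wildcard certificate matches. Add
    # -o endpoint=<region> if the bucket is not in us-east-1.
    s3fs backups.arahant.com /mnt/S3 -o passwd_file=key \
        -o use_path_request_style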

@blakemcbride commented on GitHub (May 17, 2017):

Thanks a lot! That worked!! I created another bucket without dots in the name and it worked first try. Wow! This should definitely be in the FAQ! In fact, if there are dots in the name, I think s3fs should issue a warning or error message.

Thanks a lot to all for all the great help!!

Blake

@ggtakec commented on GitHub (May 17, 2017):

@sqlbot Thanks for your great help.
@blakemcbride I am glad that the problem was solved.
And I will add a comment about this to the wiki page.
Regards,

@ggtakec commented on GitHub (May 17, 2017):

Added FAQ - https://github.com/s3fs-fuse/s3fs-fuse/wiki/FAQ#q-https-connecting-failed-if-bucket-name-includes-dod
Please let me know if the contents are wrong.

@mibstar commented on GitHub (Nov 21, 2017):

I've encountered this same issue on my Mac, specifically:

MacOS High Sierra 10.13.1
Amazon Simple Storage Service File System V1.82

My bucket names have dots, but no files get listed when I run the following:

$ s3fs s3.local.example.org s3 -o passwd_file=.passwd-s3fs -f -o curldbg -o use_path_request_style -o endpoint=eu-west-1

I could rename my testing bucket but it's going to be an issue further down the line.

@RML-Admin commented on GitHub (Apr 1, 2018):

I am running into a problem with buckets with dots in their names.

s3fs -o url=http://s3.amazonaws.com -o use_path_request_style -o endpoint=us-west-2 -o bucket=a.b.cd.ef  <mount_dir>

I can see <mount_dir> in df -h.

ubuntu@ip-172-31-45-177:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
s3fs            XT     0  XT   0% <mount_dir>

But I don't see anything in <mount_dir> when I run ls <mount_dir>. When I cd into <mount_dir> and run ls, I see the following error message:

ls: error while loading shared libraries: libselinux.so.1: file too short

s3fs version is 1.83

Amazon Simple Storage Service File System V1.83(commit:0f503ce) with OpenSSL

@lakamsani commented on GitHub (Jul 3, 2018):

This worked today on an Amazon Linux instance: s3fs first.second.co:/directory $HOME/mountdir -o iam_role -o endpoint=us-west-2 -o use_path_request_style with V1.83(commit:0f503ce)

@exNewbie commented on GitHub (Oct 26, 2018):

I have the same issue. My bucket has dots and web hosting enabled. I tried

s3fs sub.domain.com:/ folder -o endpoint=ap-southeast-2 -o use_path_request_style

and got
Oct 26 10:53:08 localhost s3fs[8377]: s3fs.cpp:s3fs_check_service(3791): unable to connect(host=https://s3.amazonaws.com) - result of checking service.

I'm using the latest version
Amazon Simple Storage Service File System V1.84(commit:8929a27) with OpenSSL Copyright (C) 2010 Randy Rizun <rrizun@gmail.com> License GPL2: GNU GPL version 2 <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.

Updated: referring to https://github.com/s3fs-fuse/s3fs-fuse/issues/465#issuecomment-369907790, adding -o url=http://s3-ap-southeast-2.amazonaws.com helped.

-o endpoint=ap-southeast-2 might not be needed anymore.

@RML-Admin commented on GitHub (Oct 26, 2018):

@exNewbie Try s3fs -o url=https://s3-<region>.amazonaws.com -o use_path_request_style <bucket> <mount_dir> -o passwd_file=<password file>

This worked for me with buckets with dots in their names.

@exNewbie commented on GitHub (Oct 26, 2018):

Thanks @adavanisanti. I updated my comment not long after the original one 👍

@vadimeremeev commented on GitHub (Jan 16, 2019):

@exNewbie Try s3fs -o url=https://s3-<region>.amazonaws.com -o use_path_request_style <bucket> <mount_dir> -o passwd_file=<password file>

This worked for me with buckets with dots in their names.

Any idea on how to put these parameters into /etc/fstab? Thanks

@exNewbie commented on GitHub (Jan 16, 2019):

@vadimeremeev I'm afraid it is not possible. S3FS is not an actual type of file system. fstab is where you specify mounts of file systems.

You may consider using rc.local to run the s3fs command at startup.

@juliogonzalez commented on GitHub (Jan 16, 2019):

@exNewbie s3fs uses fuse, which means it can be used as a filesystem.

@vadimeremeev, an example on how to use /etc/fstab is at the main README.md file.

You can also mount on boot by entering the following line to /etc/fstab:

s3fs#mybucket /path/to/mountpoint fuse _netdev,allow_other 0 0
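
Combining that entry with the options discussed in this thread, an fstab line for a bucket with dots in its name might look like this (a sketch; the bucket, region, URL, and password-file path are illustrative):

    # _netdev waits for networking; the s3fs options go in the mount-options
    # field, comma-separated, with no spaces.
    s3fs#first.second.co /path/to/mountpoint fuse _netdev,allow_other,use_path_request_style,url=https://s3-us-west-2.amazonaws.com,passwd_file=/etc/passwd-s3fs 0 0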
