[GH-ISSUE #105] s3fs on Amazon VPC - transport endpoint not connected #63

Closed
opened 2026-03-04 01:41:41 +03:00 by kerem · 13 comments

Originally created by @ai6pg on GitHub (Jan 19, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/105

ubuntu@hostname:/etc$ fuse --version
The Free Unix Spectrum Emulator (Fuse) version 1.1.1.
fstab has:
s3fs#bucketname /mnt/s3 fuse rw,allow_other,_netdev,nosuid,nodev,uid=1000,gid=1000,use_cache=/home/ubuntu/cache 0 2
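For reference, a rough one-off equivalent of that fstab entry run by hand (same bucket, mount point, and options as above; this is a sketch, and exact option handling may differ between mount and s3fs):

```shell
# Rough manual equivalent of the fstab entry above.
# bucketname, /mnt/s3 and the cache path come from the entry itself.
s3fs bucketname /mnt/s3 \
  -o rw,allow_other,nosuid,nodev \
  -o uid=1000,gid=1000 \
  -o use_cache=/home/ubuntu/cache
```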

/etc/passwd-s3fs has key&passphrase

I've been able to mount my bucket key.name, but not my bucket name2.

cd: /mnt/s3: Transport endpoint is not connected

Any suggestions?
73 de AI6PG aka Peter
http://petergraceonline.com

kerem closed this issue 2026-03-04 01:41:41 +03:00

@jb-1980 commented on GitHub (Jan 20, 2015):

I was having a similar issue. It turned out that I needed to ensure that my IAM user associated with the key:secret pair in the passwd-s3fs file had the correct permissions to access S3. It may be different for your case since you mention that you have been able to mount your bucket key.name, but I thought I would share what helped me solve that error when I was seeing it.


@ggtakec commented on GitHub (Mar 4, 2015):

Hi, I'm sorry for replying late.

I saw the same problem in another issue, also about "Transport endpoint is not connected", but I could not solve it there.
If you can, please run s3fs manually (from the command line) with the "-f" and "-d" options.
s3fs will then print many logs, which should help us solve this issue.

Thanks in advance for your assistance.
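A sketch of the requested debug invocation, using the bucket name, mount point, and cache path from the fstab entry in the original report (adjust for your setup):

```shell
# Unmount any stale endpoint first, otherwise the mount point stays
# in the "Transport endpoint is not connected" state, then run s3fs
# in the foreground (-f) with debug output (-d).
fusermount -u /mnt/s3 2>/dev/null
s3fs bucketname /mnt/s3 -f -d \
  -o use_cache=/home/ubuntu/cache -o allow_other
```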


@wrstone commented on GitHub (Mar 30, 2015):

I have a similar issue. Specifically:

We have a number of Ubuntu 14.04 instances in GovCloud that are correctly mounting S3 buckets. However, I'm setting up a new instance and cannot connect to the bucket.

I've done a couple of command-line attempts to do this so that I can see error messages. First I tried it with our usual options:


```
[TEXTURAAWSGOV\bill.stone@govseesbuat01 ~]$ sudo s3fs -f -d -o url=http://s3-us-gov-west-1.amazonaws.com -o allow_other uat.project.files /java_data/s3projects
set_moutpoint_attribute(3530): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2713): init
s3fs_check_service(3070): check services.
CheckBucket(2538): check a bucket.
RequestPerform(1584): connecting to URL http://uat.project.files.s3-us-gov-west-1.amazonaws.com/
RequestPerform(1600): HTTP response code 400
RequestPerform(1614): HTTP response code 400 was returned, returing EIO.
CheckBucket(2590): Check bucket failed, S3 response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AuthorizationHeaderMalformed</Code><Message>The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-gov-west-1'</Message><Region>us-gov-west-1</Region><RequestId>D3F8A1498AE08C93</RequestId><HostId>8Z26XMSpj9XY4HjoNke+l9nK/Q03DHeHBFeYLPmZZ8hoME5dYMW9S2SKzc41jNcm</HostId></Error>
s3fs_check_service(3103): Could not connect wrong region us-east-1, so retry to connect region us-gov-west-1.
CheckBucket(2538): check a bucket.
RequestPerform(1584): connecting to URL http://uat.project.files.s3-us-gov-west-1.amazonaws.com/
RequestPerform(1600): HTTP response code 404
RequestPerform(1624): HTTP response code 404 was returned, returning ENOENT
CheckBucket(2590): Check bucket failed, S3 response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>uat.project.files.s3.amazonaws.com</BucketName><RequestId>D254DDFD35F0374B</RequestId><HostId>OW6J6Bs0eUn/zMXo+44vDMTGAyOOBuqbykWKEgjQtBrp9eGZ+qYbUHZkjim5WqVI</HostId></Error>
s3fs: bucket not found
```


So it's telling me that the URL is wrong, even though I sent it the same URL for GovCloud S3 buckets that our other servers are correctly mounting. Consequently, I tried adding an explicit endpoint:


```
[TEXTURAAWSGOV\bill.stone@govseesbuat01 ~]$ sudo s3fs -f -d -o url=http://s3-us-gov-west-1.amazonaws.com -o allow_other -o endpoint=us-gov-west-1 -o passwd_file=/etc/passwd-s3fs prd.project.files /java_data/s3projects
set_moutpoint_attribute(3530): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(2713): init
s3fs_check_service(3070): check services.
CheckBucket(2538): check a bucket.
RequestPerform(1584): connecting to URL http://prd.project.files.s3-us-gov-west-1.amazonaws.com/
RequestPerform(1600): HTTP response code 404
RequestPerform(1624): HTTP response code 404 was returned, returning ENOENT
CheckBucket(2590): Check bucket failed, S3 response:
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>prd.project.files.s3.amazonaws.com</BucketName><RequestId>E8AFB833D9BDD27B</RequestId><HostId>Tpy4XIzbzs3N4CsM2UeKUQN1YuEvGUlVGWtps2o7LrHjsLIW6kcqqgMPzIfXkc7T</HostId></Error>
s3fs: bucket not found
```


I tried mounting other buckets and got the same output. From the S3 dashboard and Cloudberry Explorer, we can see that the buckets I'm trying to mount exist.

I'm unclear how to proceed. We had a brief ticket with AWS (which was promptly dropped since S3FS is not supported). They confirm that the bucket exists and looks fine.
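As context for the logs above: S3 virtual-hosted-style URLs embed the bucket name and, outside us-east-1, the region in the hostname, and the request signature must also name the correct region; a mismatch produces the AuthorizationHeaderMalformed error seen in the first attempt. A minimal illustrative sketch of the legacy hostname scheme (not s3fs's actual code):

```shell
# Illustrative sketch of the legacy virtual-hosted-style S3 hostnames
# seen in the debug logs above -- not s3fs source code.
bucket_host() {
  local bucket="$1" region="$2"
  if [ -z "$region" ] || [ "$region" = "us-east-1" ]; then
    echo "${bucket}.s3.amazonaws.com"
  else
    echo "${bucket}.s3-${region}.amazonaws.com"
  fi
}
bucket_host uat.project.files us-gov-west-1
# → uat.project.files.s3-us-gov-west-1.amazonaws.com
```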

The reason this looks similar to the original poster's problem: attempting to access the bucket after either of the mount commands shown above (or an /etc/fstab entry) displays the error:


```
[TEXTURAAWSGOV\bill.stone@govseesbuat01 ~]$ ls -al /java_data
ls: cannot access /java_data/s3projects: Transport endpoint is not connected
total 4
drwxr-xr-x 3 root root 23 Mar 27 13:35 .
drwxr-xr-x 25 root root 4096 Mar 27 14:13 ..
?????????? ? ? ? ? ? s3projects
```


Any assistance is appreciated.


@wrstone commented on GitHub (Mar 31, 2015):

I discovered the answer to my problem: there has been a code change to S3FS that broke GovCloud redirection.

I've filed a separate issue for this here:

https://github.com/s3fs-fuse/s3fs-fuse/issues/161


@ggtakec commented on GitHub (Apr 12, 2015):

Hi all,
I updated #167 and #164 (#165); these may have solved this issue.
Please check out the master branch and verify.
Thanks in advance for your help.


@teu commented on GitHub (May 15, 2015):

Solution: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/


@chrisschaub commented on GitHub (May 15, 2015):

Thanks!


@julian1 commented on GitHub (Jul 10, 2015):

Same problem on a Debian desktop, compiling latest master. It happens with or without multireq_max=5.

```
$ s3fs -o use_cache=/tmp -o s3_key ./here
$ mount | grep s3
s3fs on ...here type fuse.s3fs (rw,nosuid,nodev,relatime,user_id=1000,group_id=1000)
$ ls ./here
ls: cannot access ./here: Transport endpoint is not connected
```

@jeyraof commented on GitHub (Dec 3, 2015):

Is this solved? Same problem here.


@ggtakec commented on GitHub (Dec 3, 2015):

@julian1 and @jeyraof I'm sorry for replying late.

To make s3fs print messages (error/warning/information), please use the dbglevel option.
I hope s3fs will then print a message saying what is wrong. If you specify only the dbglevel option, you will find the messages from s3fs in /dev/log/****.
If you need more information, you can specify the curldbg option; it prints a lot of information about the HTTP(S) communication.
If you need to run s3fs in the foreground, you can specify the -f option, so that you can see the messages on stdout/stderr.
Please see the s3fs man page.

After you get some (error/warning) messages from s3fs, please let me know.
Thanks in advance for your help.


@ggtakec commented on GitHub (Jan 17, 2016):

I'm closing this issue; if you still have a problem, please open a new issue or reopen this one.

Thanks in advance for your help.


@balamurugan99 commented on GitHub (Jul 5, 2016):

That happens when the previous mount was not unmounted properly. Try unmounting and then mounting again:

fusermount -u <mount point>

Thanks,
Bala


@amkhullar commented on GitHub (Mar 28, 2017):

I am also facing the same issue.

```
sudo /usr/bin/s3fs -f -d images.domain.com /var/www/html/images -o allow_other -o passwd_file=/etc/passwd-s3fs
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=500, gid=501, mode=42775)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://images.domain.com.s3.amazonaws.com/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
```

I added the below bucket policy:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::images.domain.com"
    },
    {
      "Sid": "Allow Public Access to All Objects",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::images.domain.com/*"
    }
  ]
}
```

Can someone tell me what the issue could be?

When we check the same S3 link in a browser, the output is:

```
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Name>images.domain.com</Name>
<Prefix/>
<Marker/>
<MaxKeys>1000</MaxKeys>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>Kurti-Square.jpg</Key>
<LastModified>2017-03-28T10:20:42.000Z</LastModified>
<ETag>"094624ef431eefeab828eae156cc79d4"</ETag>
<Size>20820</Size>
<StorageClass>STANDARD</StorageClass>
</Contents>
</ListBucketResult>
```