[GH-ISSUE #335] s3fs on Amazon VPC - transport endpoint not connected #173

Closed
opened 2026-03-04 01:42:52 +03:00 by kerem · 8 comments
Owner

Originally created by @ngbranitsky on GitHub (Jan 17, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/335

Using s3fs 1.79 on RHEL 7, I can mount the attachments folder but not the logs folder.
Using s3fs 1.71 on RHEL 6 in the same AWS VPC, I have no problem mounting the logs folder.

echo ${CUSTID}
NCDPI

RHEL 7 Server:
[NCDPI-VRPREPROD-161]# s3fs --version
Amazon Simple Storage Service File System V1.79(commit:8162d49) with OpenSSL
[NCDPI-VRPREPROD-161]# /usr/bin/s3fs -d -f ${CUSTID}:/attachments /mnt/attachments -o nonempty,allow_other,uid=306,gid=306,passwd_file="/etc/CONFIG/${CUSTID}/passwd-s3fs"
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_moutpoint_attribute(4091): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3297): init v1.79(commit:8162d49) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3653): check services.
[INF] curl.cpp:CheckBucket(2647): check a bucket.
[INF] curl.cpp:prepare_url(4140): URL is http://s3.amazonaws.com/ncdpi/attachments/
[INF] curl.cpp:prepare_url(4172): URL changed is http://ncdpi.s3.amazonaws.com/attachments/
[INF] curl.cpp:insertV4Headers(2069): computing signature [GET] [/attachments/] [] []
[INF] curl.cpp:url_to_host(99): url is http://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(1743): **HTTP response code 200**
[INF] s3fs.cpp:remote_mountpath_exists(2788): [path=/attachments]
[INF] s3fs.cpp:s3fs_getattr(797): [path=/]
Succeeds.

[NCDPI-VRPREPROD-161]# /usr/bin/s3fs -d -f ${CUSTID}:/logs /mnt/logs -o nonempty,allow_other,uid=306,gid=306,passwd_file="/etc/CONFIG/${CUSTID}/passwd-s3fs"
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_moutpoint_attribute(4091): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3297): init v1.79(commit:8162d49) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3653): check services.
[INF] curl.cpp:CheckBucket(2647): check a bucket.
[INF] curl.cpp:prepare_url(4140): URL is http://s3.amazonaws.com/ncdpi/logs/
[INF] curl.cpp:prepare_url(4172): URL changed is http://ncdpi.s3.amazonaws.com/logs/
[INF] curl.cpp:insertV4Headers(2069): computing signature [GET] [/logs/] [] []
[INF] curl.cpp:url_to_host(99): url is http://s3.amazonaws.com
[INF] curl.cpp:RequestPerform(1765): **HTTP response code 404 was returned, returning ENOENT**
[ERR] curl.cpp:CheckBucket(2685): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>logs/</Key>
<RequestId>251B7199C224EEF4</RequestId>
<HostId>/oMDKi9lmxKiXbpDzOYl3Jew85taID7glpcKSBotXRzQ4hS7pUSETgupyE88KOcsl3ei/x2tutU=</HostId>
</Error>
[CRT] s3fs.cpp:s3fs_check_service(3714): bucket not found - result of checking service.
Fails.

RHEL 6 Server:
[NCDPI-VRMASTER-166]# s3fs --version
Amazon Simple Storage Service File System 1.71
[NCDPI-VRMASTER-166]# /usr/bin/s3fs ${CUSTID}:/logs /mnt/logs -o nonempty,allow_other,uid=306,gid=306,passwd_file="/etc/CONFIG/${CUSTID}/passwd-s3fs"
Succeeds.

I'm going to try creating a VPC Endpoint for S3 in the VPC, as this was suggested as a remedy,
but I don't understand the fundamentals: why is one folder in a bucket accessible by 1.79 and
another folder in the same bucket not, while 1.71 on another server has no problem?

kerem closed this issue 2026-03-04 01:42:52 +03:00

@ggtakec commented on GitHub (Jan 18, 2016):

Could you build s3fs from the latest master branch on GitHub and test with it?
(Some recent changes in the latest code affect path-type requests.)

Thanks in advance for your assistance.

<!-- gh-comment-id:172533013 -->

@ngbranitsky commented on GitHub (Jan 18, 2016):

Code built from latest master branch.
No change in results.

The logs below contain the following 3 lines:
* About to connect() to ncdpi.s3.amazonaws.com port 80 (#0)
* Trying 54.231.49.194...
* Connected to ncdpi.s3.amazonaws.com (54.231.49.194) port 80 (#0)

These clearly indicate that s3fs is ignoring the new VPC Endpoint for S3 that I created as suggested
in this forum. The VPC Endpoint documentation clearly states that the connection to S3 will use an
internal IP address, not an external IP address, for this service: com.amazonaws.us-east-1.s3

[NCDPI-VRPREPROD-132]# /usr/local/bin/s3fs -d -f ncdpi:/logs /mnt/logs -o curldbg,nonempty,allow_other,uid=306,gid=306,passwd_file="/etc/CONFIG/${CUSTID}/passwd-s3fs"
[CRT] s3fs.cpp:set_s3fs_log_level(251): change debug level from [CRT] to [INF] 
[INF]     s3fs.cpp:set_moutpoint_attribute(4111): PROC(uid=0, gid=0) - MountPoint(uid=306, gid=306, mode=40777)
[CRT] s3fs.cpp:s3fs_init(3314): init v1.79(commit:e932583) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3673): check services.
[INF]       curl.cpp:CheckBucket(2649): check a bucket.
[INF]       curl.cpp:prepare_url(4142): URL is http://s3.amazonaws.com/ncdpi/logs/
[INF]       curl.cpp:prepare_url(4174): URL changed is http://ncdpi.s3.amazonaws.com/logs/
[INF]       curl.cpp:insertV4Headers(2070): computing signature [GET] [/logs/] [] []
[INF]       curl.cpp:url_to_host(99): url is http://s3.amazonaws.com
* About to connect() to ncdpi.s3.amazonaws.com port 80 (#0)
*   Trying 54.231.49.194...
* Connected to ncdpi.s3.amazonaws.com (54.231.49.194) port 80 (#0)
> GET /logs/ HTTP/1.1
Accept: */*
Authorization: AWS4-HMAC-SHA256 Credential=AKIAXXXX0118/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date,         Signature=176fd3f26684c70bc16eb9e9873201dab4d5196b1a8d76d71fd4ff9e842c7b38
host: ncdpi.s3.amazonaws.com
x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date: 20160118T182429Z
< HTTP/1.1 404 Not Found
< x-amz-request-id: 5C40B000169C67B0
< x-amz-id-2: V38Jgx4I6zv5/WRnyihaii/fe9p67/4xh58l9BLlLcYSvR6QTBxsM7q0kZDU2aDZTMwBV25yE+I=
< Content-Type: application/xml
< Transfer-Encoding: chunked
< Date: Mon, 18 Jan 2016 18:24:29 GMT
< Server: AmazonS3
< 
* Connection #0 to host ncdpi.s3.amazonaws.com left intact
[INF]       curl.cpp:RequestPerform(1765): HTTP response code 404 was returned, returning ENOENT
[ERR] curl.cpp:CheckBucket(2687): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>logs/</Key>
<RequestId>5C40B000169C67B0</RequestId>
<HostId>V38Jgx4I6zv5/WRnyihaii/fe9p67/4xh58l9BLlLcYSvR6QTBxsM7q0kZDU2aDZTMwBV25yE+I=</HostId>
</Error>
[CRT] s3fs.cpp:s3fs_check_service(3734): bucket not found - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3304): Exiting FUSE event loop due to errors

[INF] s3fs.cpp:s3fs_destroy(3360): destroy
<!-- gh-comment-id:172621179 -->

@ngbranitsky commented on GitHub (Jan 20, 2016):

Ping?

<!-- gh-comment-id:173277926 -->

@ngbranitsky commented on GitHub (Jan 20, 2016):

I just launched a new RHEL 7 server in a different AWS VPC.
s3fs 1.79 successfully mounted the bucket and folder onagco:/attachments.
s3fs 1.79 failed to mount the bucket and folder onagco:/logs!
Is there something about a folder named "logs" that causes a 404 Not Found error?
s3fs 1.71 has no problem mounting bucket:/logs folders.

Since this forum said that creating a VPC S3 Endpoint would solve the problem,
can someone explain how to make s3fs actually use the endpoint com.amazonaws.us-east-1.s3
instead of http://s3.amazonaws.com/ncdpi/logs/ ?

<!-- gh-comment-id:173305014 -->

@ngbranitsky commented on GitHub (Jan 21, 2016):

In an effort to get s3fs to use the new AWS S3 Endpoint, I tried adding a url= option.
s3fs always rewrites the URL, changing domain/bucket:/folder to bucket.domain/folder:
[INF] curl.cpp:prepare_url(4140): URL is http://com.amazonaws.us-east-1.s3/onagco/logs/
[INF] curl.cpp:prepare_url(4172): URL changed is http://onagco.com.amazonaws.us-east-1.s3/logs
Perhaps this syntax is incorrect for Endpoints, because the /attachments mount also fails with this url=
but succeeds without the url= parameter.
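For reference, the rewrite s3fs performs here (path-style to virtual-hosted-style) can be sketched as follows. This is a simplified illustration of the behavior, not s3fs's actual `prepare_url()` code; it also shows why passing the endpoint *service name* as url= produces an unresolvable hostname:

```python
def to_virtual_hosted(url: str, bucket: str) -> str:
    """Rewrite a path-style S3 URL (host/bucket/key) into virtual-hosted
    style (bucket.host/key), roughly as s3fs's prepare_url() does.
    Simplified sketch: assumes url looks like 'scheme://host/bucket/key...'."""
    scheme, rest = url.split("://", 1)
    host, _, path = rest.partition("/")
    if path == bucket or path.startswith(bucket + "/"):
        key = path[len(bucket):].lstrip("/")
        return f"{scheme}://{bucket}.{host}/{key}"
    return url

# The working case from the log:
print(to_virtual_hosted("http://s3.amazonaws.com/ncdpi/logs/", "ncdpi"))
# -> http://ncdpi.s3.amazonaws.com/logs/

# With the endpoint service name used as url=, the same rewrite yields a
# hostname DNS cannot resolve, hence CURLE_COULDNT_RESOLVE_HOST:
print(to_virtual_hosted("http://com.amazonaws.us-east-1.s3/onagco/logs/", "onagco"))
# -> http://onagco.com.amazonaws.us-east-1.s3/logs/
```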

[ONAGCO-VRTEST-156]# /usr/bin/s3fs -d -f ${CUSTID}:/logs /mnt/logs -o url="http://com.amazonaws.us-east-1.s3",nonempty,allow_other,uid=306,gid=306,passwd_file="/etc/CONFIG/${CUSTID}/passwd-s3fs"
[CRT] s3fs.cpp:set_s3fs_log_level(250): change debug level from [CRT] to [INF]
[INF] s3fs.cpp:set_moutpoint_attribute(4091): PROC(uid=0, gid=0) - MountPoint(uid=306, gid=306, mode=40755)
[CRT] s3fs.cpp:s3fs_init(3297): init v1.79(commit:8162d49) with OpenSSL
[INF] s3fs.cpp:s3fs_check_service(3653): check services.
[INF] curl.cpp:CheckBucket(2647): check a bucket.
[INF] curl.cpp:prepare_url(4140): URL is http://com.amazonaws.us-east-1.s3/onagco/logs/
[INF] curl.cpp:prepare_url(4172): URL changed is http://onagco.com.amazonaws.us-east-1.s3/logs/
[INF] curl.cpp:insertV4Headers(2069): computing signature [GET] [/logs/] [] []
[INF] curl.cpp:url_to_host(99): url is http://com.amazonaws.us-east-1.s3
[ERR] curl.cpp:RequestPerform(1787): ### CURLE_COULDNT_RESOLVE_HOST
[INF] curl.cpp:RequestPerform(1885): ### retrying...
[INF] curl.cpp:RemakeHandle(1548): Retry request. [type=5][url=http://onagco.com.amazonaws.us-east-1.s3/logs/][path=/logs/]
[ERR] curl.cpp:RequestPerform(1787): ### CURLE_COULDNT_RESOLVE_HOST
[INF] curl.cpp:RequestPerform(1885): ### retrying...
[INF] curl.cpp:RemakeHandle(1548): Retry request. [type=5][url=http://onagco.com.amazonaws.us-east-1.s3/logs/][path=/logs/]
[ERR] curl.cpp:RequestPerform(1787): ### CURLE_COULDNT_RESOLVE_HOST
[INF] curl.cpp:RequestPerform(1885): ### retrying...
[INF] curl.cpp:RemakeHandle(1548): Retry request. [type=5][url=http://onagco.com.amazonaws.us-east-1.s3/logs/][path=/logs/]
[ERR] curl.cpp:RequestPerform(1892): ### giving up
[ERR] curl.cpp:CheckBucket(2685): Check bucket failed, S3 response:
[CRT] s3fs.cpp:s3fs_check_service(3724): unable to connect - result of checking service.

<!-- gh-comment-id:173604414 -->

@ngbranitsky commented on GitHub (Jan 24, 2016):

It seems I misinterpreted the use of the Endpoint prefix.
First, I entered the URL backwards, in the format of the Endpoint prefix (a service name, not a hostname).
Second, I now understand that I shouldn't be specifying a URL at all:
the route table defined for the subnet should direct any traffic destined for the "public" S3 CIDR block (54.231.0.0/17) over the Endpoint connection instead.
Using the default s3.amazonaws.com URL,
I still get 200 for the "attachments" folder and 404 for the "logs" folder.
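For anyone following along, an S3 gateway endpoint is created and wired into route tables roughly like this. This is a sketch with placeholder IDs (`vpc-0abc12345`, `rtb-0abc12345` are hypothetical); verify the exact flags against current AWS CLI documentation:

```shell
# Create an S3 gateway endpoint; AWS then adds a route for the S3
# prefix list (pl-...) to the given route tables automatically.
aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0abc12345 \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0abc12345

# Confirm the route table gained a prefix-list route to the endpoint:
aws ec2 describe-route-tables --route-table-ids rtb-0abc12345
```

With the route in place, s3fs keeps using the normal s3.amazonaws.com URL; the subnet's routing, not s3fs, decides that traffic to S3 goes over the endpoint.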

BTW, the last response to this thread was 6 days ago.
Is anyone reading this thread other than me? :-(

<!-- gh-comment-id:174312536 -->

@ngbranitsky commented on GitHub (Jan 26, 2016):

AWS Support helped me resolve the issue.
It seems that s3://ncdpi/logs/ was created implicitly, by writing a file under /ncdpi/logs/,
so "logs" became a prefix only and not a key.
s3fs v1.71 apparently doesn't check whether the target folder is actually a key, so it had no problem
mounting the file system; s3fs v1.79 detects the missing key and aborts.
Creating a folder called "logs" via the AWS S3 Console resolved the problem.
Nothing appears to change in the interface, because the AWS S3 Console represents
both keys and prefixes as folders.
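The prefix-versus-key distinction can be illustrated with a small sketch (hypothetical helper names; this is not s3fs code):

```python
def key_exists(keys: set, key: str) -> bool:
    """What s3fs 1.79's bucket check effectively requires: the folder
    must exist as an actual object (a 'directory marker') in the bucket."""
    return key in keys

def prefix_implied(keys: set, prefix: str) -> bool:
    """What s3fs 1.71 tolerated: it is enough that some object's key
    merely starts with the prefix."""
    return any(k.startswith(prefix) for k in keys)

# Bucket state after uploading a file under logs/ with a tool that does
# not create directory-marker objects:
keys = {"attachments/", "attachments/report.pdf", "logs/app.log"}

print(key_exists(keys, "attachments/"))   # True  -> mounts under 1.79
print(key_exists(keys, "logs/"))          # False -> 404 NoSuchKey under 1.79
print(prefix_implied(keys, "logs/"))      # True  -> still mounts under 1.71
```

Creating the "logs" folder in the S3 Console simply adds the zero-byte `logs/` marker object, which makes `key_exists` true and lets 1.79 mount it.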

<!-- gh-comment-id:175153947 -->

@ggtakec commented on GitHub (Feb 6, 2016):

@ngbranitsky I'm sorry for replying late.
The cause of this problem is as you say.
If you try to mount a bucket with a non-existent folder name, s3fs will fail.
s3fs fails to confirm the bucket and folder path at startup, because the folder must exist as an object in S3.
s3fs needs the attributes of the folder object, because it checks access control under that folder.

Objects are often uploaded by other tools such as the S3 console and s3cmd.
Those tools can upload an object whose path includes non-existent directories.
These non-existent directory objects can be listed by s3fs, but they cannot be specified as the mount folder.

I think this behavior is a necessary consequence of enforcing access control at the mount point.

Regards,

<!-- gh-comment-id:180738786 -->