[GH-ISSUE #59] cannot mount Google Cloud storage buckets #34

Closed
opened 2026-03-04 01:41:23 +03:00 by kerem · 18 comments
Owner

Originally created by @hetzbh on GitHub (Sep 28, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/59

I'm using CentOS 6.5, FUSE 2.9.3, and s3fs version 1.78 (built from Git), everything compiled from scratch, with no old FUSE or s3fs installed.

I created a key under "Interoperable Storage Access Keys" in the Google API Console, entered it into .passwd-s3fs, and fixed the file permissions, but when trying to mount I get this:

$ s3fs -d -d mybucket google-cloud-storage/
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
s3fs: Failed to access bucket.

Have I forgotten something?

kerem closed this issue 2026-03-04 01:41:23 +03:00

@hetzbh commented on GitHub (Sep 28, 2014):

Well, I think I know the reason; see this:

s3fs_check_service(2968): check services.
CheckBucket(2367): check a bucket.
RequestPerform(1572): connecting to URL http://mybucket.s3.amazonaws.com/
RequestPerform(1705): ### CURLE_HTTP_RETURNED_ERROR

Yeah, I'm trying to connect to Google, not to Amazon. Where do I change it?


@gaul commented on GitHub (Sep 28, 2014):

You must set the URL for GCS:

s3fs bucket-name local-path -o passwd_file=passwd-path -o url=https://storage.googleapis.com

@hetzbh commented on GitHub (Sep 28, 2014):

Well, I did a few tests:

  1. A bucket with a dot in the name (abc.def) gives an SSL error on CentOS 6.5, so issue 270 seems not fixed.
  2. I created a bucket called "testslinux", and it still gives me error 403. Here is the output:

$ s3fs -d -d -f testslinux google-cloud-storage/ -o url=https://storage.googleapis.com
set_moutpoint_attribute(3379): PROC(uid=500, gid=500) - MountPoint(uid=500, gid=500, mode=40755)
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
s3fs_init(2650): init
s3fs_check_service(2968): check services.
CheckBucket(2367): check a bucket.
RequestPerform(1572): connecting to URL https://testslinux.storage.googleapis.com/
RequestPerform(1705): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1710): HTTP response code =403
s3fs: Failed to access bucket.


@gaul commented on GitHub (Sep 28, 2014):

I can successfully execute:

s3fs gaulbackup2 "${HOME}/gaulbackup" -o passwd_file=passwd -o url=https://storage.googleapis.com -d -d -f
    set_moutpoint_attribute(3379): PROC(uid=1000, gid=1000) - MountPoint(uid=1000, gid=1000, mode=40775)
FUSE library version: 2.9.2
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.22
flags=0x0000f7fb
max_readahead=0x00020000
s3fs_init(2650): init
s3fs_check_service(2968): check services.
    CheckBucket(2367): check a bucket.
    RequestPerform(1572): connecting to URL https://gaulbackup2.storage.googleapis.com/
    RequestPerform(1588): HTTP response code 200

What format does your password file have? I use something like:

GOOGXXXXXXXXXXXXXXXX:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
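As an aside, a minimal sketch of creating that credentials file (the GOOG… values below are placeholders, not real credentials; s3fs also insists on restrictive file permissions):

```shell
# Write the s3fs credentials file in ACCESS_KEY:SECRET_KEY format.
# The GOOG... values are placeholders, not real credentials.
PASSWD_FILE="${HOME}/.passwd-s3fs"
printf '%s:%s\n' 'GOOGXXXXXXXXXXXXXXXX' 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX' > "$PASSWD_FILE"
chmod 600 "$PASSWD_FILE"   # s3fs refuses credential files readable by group/others
```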

@hetzbh commented on GitHub (Sep 28, 2014):

OK, I think I'm getting near... Apparently, no matter what combination I put in the .passwd-s3fs file, I get error 403. Could it be that I forgot to set something in GCS? (I deleted the keys, recreated them under Interoperable Storage Access Keys, and put them in the .passwd-s3fs file.) Here is a part of it:

$ cat .passwd-s3fs
GOOGMVXXXXXXXXXXXXX:HzCXXXXXXXXXXXXXXXXXXX


@hetzbh commented on GitHub (Sep 28, 2014):

Ok, I'm getting close ... and frustrated...
I tried another machine of mine, also CentOS 6.5, with the same key:secret; both are compiled with s3fs-1.7.4 and FUSE 2.9.3. On one machine it works great; the other gives me 403. The only difference is the IP.

Suggestions?


@gaul commented on GitHub (Sep 28, 2014):

Try comparing the output from the working and failing machines with the -o curldbg option.


@gaul commented on GitHub (Sep 28, 2014):

Also try examining the body text with #60 when running:

s3fs bucket-name local-name -o passwd_file=passwd -o url=https://storage.googleapis.com -f -o f2

@hetzbh commented on GitHub (Sep 28, 2014):

You can remove the commit. Apparently it happens for an entirely different reason...

  • Connection #0 to host testslinux.storage.googleapis.com left intact
    RequestPerform(1587): HTTP response code 403
    RequestPerform(1606): HTTP response code 403 was returned, returning EPERM
    RequestPerform(1607): Body Text: <?xml version='1.0' encoding='UTF-8'?><Error><Code>RequestTimeTooSkewed</Code><Message>The difference between the request time and the server's time is too large.</Message><Date>Mon, 29 Sep 2014 01:38:08</Date></Error>
    s3fs: Failed to access bucket.

I guess a correct time zone and a correct clock would help a lot. After I set them both, it works perfectly (without your commit).
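For anyone hitting RequestTimeTooSkewed later: the skew can be computed from the Date in the error body. A small sketch (the skew_seconds helper is my own name, and GNU date is assumed for the -d option):

```shell
# skew_seconds SERVER_DATE LOCAL_DATE
# Print the absolute difference in seconds between two RFC-1123 timestamps
# (requires GNU date for the -d option).
skew_seconds() {
    server=$(date -u -d "$1" +%s)
    local_=$(date -u -d "$2" +%s)
    diff=$((server - local_))
    echo "${diff#-}"
}

# S3-style services reject requests whose clock skew is too large (on the
# order of 15 minutes), so a big number here explains the 403.
skew_seconds "Mon, 29 Sep 2014 01:38:08 GMT" "$(date -u -R)"
```

Keeping ntpd or chrony running (or a one-off ntpdate) keeps the clock well inside that window.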

Might I suggest that the "<Message>" part be displayed when a user uses the -d option? It would have saved a lot of guesswork...

Thanks for your help.


@gaul commented on GitHub (Sep 28, 2014):

@hetzbh I like your suggestion of emitting the Amazon code and message; I amended #60 to do so for unsuccessful calls to CheckBucket.


@ggtakec commented on GitHub (Oct 13, 2014):

Hi @andrewgaul and @hetzbh,
I merged #60; please use the latest master branch.
Thanks.


@t0d0r commented on GitHub (Oct 28, 2014):

Hi, I can't mount Google Cloud either. Maybe I'm doing something wrong. I created an access key with gsutil config and used that key in ~/.passwd-s3fs. What am I doing wrong? Here is my output:

cmd line:
s3fs catenate /mnt/s3fs -o url=https://storage.googleapis.com -d -d -f -o f2 -o curldbg -o ahbe_conf=/mnt/header

The header file contains my project-id:
x-goog-project-id 3xxxxxxxxxx9

I'm using Amazon Simple Storage Service File System V1.78 with OpenSSL, compiled from the Git source, on Ubuntu 12.04.5 LTS.

    Dump(3537): Character count list[1] = { 0 }
Additional Header list[1] = {
    *   --->    x-goog-project-id: 324xxxxxx419
}
    set_moutpoint_attribute(3379): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
FUSE library version: 2.8.6
nullpath_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56
INIT: 7.17
flags=0x0000047b
max_readahead=0x00020000
s3fs_init(2650): init
s3fs_check_service(2968): check services.
    CheckBucket(2366): check a bucket.
    prepare_url(174): URL is https://storage.googleapis.com/catenate/
    prepare_url(206): URL changed is https://catenate.storage.googleapis.com/
    RequestPerform(1571): connecting to URL https://catenate.storage.googleapis.com/
* About to connect() to catenate.storage.googleapis.com port 443 (#0)
*   Trying 173.194.112.202... * connected
* successfully set certificate verify locations:
*   CAfile: none
  CApath: /etc/ssl/certs
* SSL connection using ECDHE-ECDSA-AES128-SHA
* Server certificate:
*        subject: C=US; ST=California; L=Mountain View; O=Google Inc; CN=*.googleusercontent.com
*        start date: 2014-10-15 10:09:07 GMT
*        expire date: 2015-01-13 00:00:00 GMT
*        subjectAltName: catenate.storage.googleapis.com matched
*        issuer: C=US; O=Google Inc; CN=Google Internet Authority G2
*        SSL certificate verify ok.
> GET / HTTP/1.1
Host: catenate.storage.googleapis.com
Accept: */*
Authorization: AWS 00b49xxxx34aaae56f45d9:CgeK9xxxx4=
Date: Tue, 28 Oct 2014 08:46:30 GMT

< HTTP/1.1 500 Internal Server Error
< Content-Type: application/xml; charset=UTF-8
< Content-Length: 5196
< Date: Tue, 28 Oct 2014 08:46:31 GMT
< Expires: Tue, 28 Oct 2014 08:46:31 GMT
< Cache-Control: private, max-age=0
< Server: UploadServer ("Built on Oct 9 2014 15:35:27 (1412894127)")
< Alternate-Protocol: 443:quic,p=0.01
< 
… 
* Connection #0 to host catenate.storage.googleapis.com left intact
    RequestPerform(1587): HTTP response code 500
    RequestPerform(1593): ###HTTP response=500
    RequestPerform(1726): ### retrying...
    RemakeHandle(1393): Retry request. [type=5][url=https://catenate.storage.googleapis.com/][path=/]
RequestPerform(1733): ### giving up
CheckBucket(2400): Check bucket failed, S3 response: 
s3fs: Failed to access bucket.
* Closing connection #0


@gaul commented on GitHub (Oct 28, 2014):

@t0d0r The authorization key looks incorrect; it should have the form GOOGTS7C7FUP3AIRVJTE (GOOG prefix, exactly 20 characters). Also, have you configured your project for interoperable mode?

https://cloud.google.com/storage/docs/migrating

Finally, you should not need to set the project-id.
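A quick sanity check for that format (a sketch; check_gcs_key is my own name for it, and the 20-character rule comes from the comment above, not an official spec):

```shell
# check_gcs_key KEY
# Succeed if KEY looks like a GCS interoperable access key:
# GOOG prefix, 20 characters total.
check_gcs_key() {
    case "$1" in
        GOOG*) [ "${#1}" -eq 20 ] ;;
        *) return 1 ;;
    esac
}

check_gcs_key 'GOOGTS7C7FUP3AIRVJTE' && echo "key format looks plausible"
```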


@t0d0r commented on GitHub (Oct 29, 2014):

Thank you for your reply. I managed to mount Google Cloud, but now I have another problem: when I try to copy a 100 MB file, I get an I/O error:

root@h2251189:/mnt# cp file.100MB s3fs/
cp: closing `s3fs/file.100MB': Input/output error

The file inside the mounted directory shows size 0:

-rw-r--r-- 1 root root    0 Oct 29 22:25 file.100MB

gsutil seems to work around this problem:

root@h2251189:/mnt# gsutil cp file.100MB gs://catenate-bst-universum/
Copying file://file.100MB [Content-Type=application/octet-stream]...
Uploading   gs://catenate-bst-universum/file.100MB:              95.37 MB/95.37 MB     
Uploading   gs://catenate-bst-universum/file.100MB:              95.37 MB/95.37 MB    
Retrying upload from byte 100000000 after exception.

Here is the last fragment of the debug output from the s3fs command:

> POST /file.100MB?uploads HTTP/1.1
Host: catenate-bst-universum.storage.googleapis.com
Authorization: AWS GOOGxxxx:bExxxxxx=
Content-Type: application/octet-stream
Date: Wed, 29 Oct 2014 21:40:31 GMT
x-amz-acl:private
x-amz-meta-gid:0
x-amz-meta-mode:33188
x-amz-meta-mtime:1414618831
x-amz-meta-uid:0

* HTTP 1.0, assume close after body
< HTTP/1.0 411 Length Required
< Content-Type: text/html; charset=UTF-8
< Content-Length: 1428
< Date: Wed, 29 Oct 2014 21:40:31 GMT
< Server: GFE/2.0
< 
* Closing connection #0
    RequestPerform(1588): HTTP response code 411
    RequestPerform(1617): HTTP response code = 411, returning EIO
    RequestPerform(1618): Body Text: <!DOCTYPE html>
<html lang=en>
  <meta charset=utf-8>
  <meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
  <title>Error 411 (Length Required)!!1</title>
  <style>
    *{margin:0;padding:0}html,code{font:15px/22px arial,sans-serif}html{background:#fff;color:#222;padding:15px}body{margin:7% auto 0;max-width:390px;min-height:180px;padding:30px 0 15px}* > body{background:url(//www.google.com/images/errors/robot.png) 100% 5px no-repeat;padding-right:205px}p{margin:11px 0 22px;overflow:hidden}ins{color:#777;text-decoration:none}a img{border:0}@media screen and (max-width:772px){body{background:none;margin-top:0;max-width:none;padding-right:0}}#logo{background:url(//www.google.com/images/errors/logo_sm_2.png) no-repeat}@media only screen and (min-resolution:192dpi){#logo{background:url(//www.google.com/images/errors/logo_sm_2_hr.png) no-repeat 0% 0%/100% 100%;-moz-border-image:url(//www.google.com/images/errors/logo_sm_2_hr.png) 0}}@media only screen and (-webkit-min-device-pixel-ratio:2){#logo{background:url(//www.google.com/images/errors/logo_sm_2_hr.png) no-repeat;-webkit-background-size:100% 100%}}#logo{display:inline-block;height:55px;width:150px}
  </style>
  <a href=//www.google.com/><span id=logo aria-label=Google></span></a>
  <p><b>411.</b> <ins>That’s an error.</ins>
  <p>POST requests require a <code>Content-length</code> header.  <ins>That’s all we know.</ins>

    Close(1182): [ent->file=/file.100MB][ent->fd=5]
    Close(537): [path=/file.100MB][fd=5][refcnt=1]
   unique: 24422, error: -5 (Input/output error), outsize: 16
unique: 24423, opcode: RELEASE (18), nodeid: 2, insize: 64
release[5] flags: 0x8001
s3fs_release(2039): [path=/file.100MB][fd=5]
    DelStat(375): delete stat cache entry[path=/file.100MB]
    GetFdEntity(1127): [path=/file.100MB]
    Close(1182): [ent->file=/file.100MB][ent->fd=5]
    Close(537): [path=/file.100MB][fd=5][refcnt=0]
cp: closing `s3fs/file.100MB': Input/output error
root@h2251189:/mnt#     GetFdEntity(1127): [path=/file.100MB]
   unique: 24423, success, outsize: 16

@gaul commented on GitHub (Oct 30, 2014):

GCS does not support S3 multipart uploads; instead it supports a similar feature, resumable uploads:

https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload#resumable

s3fs-fuse will need to add specific logic to support this. Can you try specifying -o nomultipart as a workaround? Note that this will limit file sizes to 5 GB.


@t0d0r commented on GitHub (Oct 30, 2014):

-o nomultipart fixes the problem; I also managed to upload a 6.1 GB file.

-rw-r--r-- 1 root root 954M Oct 30 07:56 bigfile
-rwxr-xr-x 1 root root 6.1G Oct 30 09:28 BS_Gnade.mov
-rw-r--r-- 1 root root  96M Oct 30 13:26 file.100MB

@gaul commented on GitHub (Oct 31, 2014):

@t0d0r I added a wiki document to help future users:

https://github.com/s3fs-fuse/s3fs-fuse/wiki/Google-Cloud-Storage

@ggtakec Can you close this issue? We already resolved @hetzbh's original issue.


@ggtakec commented on GitHub (Nov 2, 2014):

@t0d0r I'm sorry for the slow reply.

@andrewgaul
Thanks very much.
