Mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 05:16:00 +03:00)
[GH-ISSUE #59] cannot mount Google Cloud storage buckets #34
Originally created by @hetzbh on GitHub (Sep 28, 2014).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/59
I'm using CentOS 6.5, FUSE 2.9.3, and s3fs 1.78 (built from Git); everything was compiled from scratch, with no old FUSE or s3fs installed.
I created a key under "Interoperable Storage Access Keys" in the Google API Console, entered it into .passwd-s3fs, fixed the file permissions, and did everything else, but when I try to mount I get this:
$ s3fs -d -d mybucket google-cloud-storage/
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
s3fs: Failed to access bucket.
Have I forgotten something?
@hetzbh commented on GitHub (Sep 28, 2014):
Well, I think I know the reason. See this:
s3fs_check_service(2968): check services.
CheckBucket(2367): check a bucket.
RequestPerform(1572): connecting to URL http://mybucket.s3.amazonaws.com/
RequestPerform(1705): ### CURLE_HTTP_RETURNED_ERROR
Yeah, I'm trying to connect to Google, not to Amazon. Where do I change it?
@gaul commented on GitHub (Sep 28, 2014):
You must set the url for GCS:
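The command itself was elided in the mirror; a minimal sketch of what such an invocation would look like, assuming the `-o url` option confirmed later in the thread (bucket name and mount point are placeholders):

```shell
# Point s3fs at the GCS interoperability endpoint instead of the
# default s3.amazonaws.com (bucket and mount point are placeholders).
s3fs mybucket /path/to/mountpoint -o url=https://storage.googleapis.com
```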
@hetzbh commented on GitHub (Sep 28, 2014):
Well, I did a few tests:
$ s3fs -d -d -f testslinux google-cloud-storage/ -o url=https://storage.googleapis.com
set_moutpoint_attribute(3379): PROC(uid=500, gid=500) - MountPoint(uid=500, gid=500, mode=40755)
FUSE library version: 2.9.3
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.13
flags=0x0000b07b
max_readahead=0x00020000
s3fs_init(2650): init
s3fs_check_service(2968): check services.
CheckBucket(2367): check a bucket.
RequestPerform(1572): connecting to URL https://testslinux.storage.googleapis.com/
RequestPerform(1705): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1710): HTTP response code =403
s3fs: Failed to access bucket.
@gaul commented on GitHub (Sep 28, 2014):
I can successfully execute:
What format does your password file have? I use something like:
@hetzbh commented on GitHub (Sep 28, 2014):
OK, I think I'm getting close... Apparently, no matter what combination I have in the .passwd-s3fs file, I get error 403. Could it be that I forgot to set something in GCS? (I deleted the keys, recreated them under Interoperable Storage Access Keys, and put them in the .passwd-s3fs file.) Here is a part of it:
$ cat .passwd-s3fs
GOOGMVXXXXXXXXXXXXX:HzCXXXXXXXXXXXXXXXXXXX
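That is the expected layout: one ACCESS_KEY:SECRET pair per line, with the file unreadable by other users (s3fs refuses credential files with loose permissions). A minimal sketch with placeholder credentials (real usage writes ~/.passwd-s3fs rather than a temp file):

```shell
# Create a credential file in ACCESS_KEY:SECRET form and restrict its
# permissions; s3fs rejects credential files readable by group/others.
passwd_file=$(mktemp)
printf 'GOOGMVXXXXXXXXXXXXX:HzCXXXXXXXXXXXXXXXXXXX\n' > "$passwd_file"
chmod 600 "$passwd_file"
stat -c '%a' "$passwd_file"   # prints 600
```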
@hetzbh commented on GitHub (Sep 28, 2014):
OK, I'm getting close... and frustrated...
I have another machine, also CentOS 6.5, using the same key:secret pair; both machines were compiled with s3fs 1.7.4 and FUSE 2.9.3. On one machine it works great, the other gives me 403. The only difference is the IP address.
Suggestions?
@gaul commented on GitHub (Sep 28, 2014):
Try comparing the output from the working and failing machines with the -o curldbg option.
@gaul commented on GitHub (Sep 28, 2014):
Also try examining the body text with #60 when running:
@hetzbh commented on GitHub (Sep 28, 2014):
You can remove the commit. Apparently it happens for an entirely different reason...
RequestPerform(1587): HTTP response code 403
RequestPerform(1606): HTTP response code 403 was returned, returning EPERM
RequestPerform(1607): Body Text:
RequestTimeTooSkewed: The difference between the request time and the server's time is too large. (Mon, 29 Sep 2014 01:38:08)
s3fs: Failed to access bucket.
I guess a correct time zone and a correct clock would help a lot. After I set both, it works perfectly (without your commit).
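A quick way to spot this kind of skew, assuming curl is available, is to compare the local UTC clock against the Date header the endpoint returns:

```shell
# Compare local UTC time with the server's clock; a large gap between
# the two triggers RequestTimeTooSkewed (HTTP 403).
date -u
curl -sI https://storage.googleapis.com/ | grep -i '^date:' || true
# If they disagree, fix the time zone and sync the clock, e.g. via NTP.
```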
Might I suggest that the error code and message be displayed when a user uses the -d option? That would have saved a lot of guesswork...
Thanks for your help.
@gaul commented on GitHub (Sep 28, 2014):
@hetzbh I like your suggestion of emitting the Amazon code and message; I amended #60 to do so for unsuccessful calls to CheckBucket.
@ggtakec commented on GitHub (Oct 13, 2014):
Hi @andrewgaul and @hetzbh,
I merged #60; please use the latest master branch.
Thanks.
@t0d0r commented on GitHub (Oct 28, 2014):
Hi, I can't mount Google Cloud Storage either. Maybe I'm doing something wrong. I created an access key with gsutil config, and with that key I created ~/.passwd-s3fs. What am I doing wrong? Here is my output:
cmd line:
s3fs catenate /mnt/s3fs -o url=https://storage.googleapis.com -d -d -f -o f2 -o curldbg -o ahbe_conf=/mnt/header
The header file contains my project-id:
x-goog-project-id 3xxxxxxxxxx9
I'm using Amazon Simple Storage Service File System V1.78, compiled from Git source with OpenSSL, on Ubuntu 12.04.5 LTS.
@gaul commented on GitHub (Oct 28, 2014):
@t0d0r The authorization key looks incorrect; it should have the form GOOGTS7C7FUP3AIRVJTE (GOOG prefix, exactly 20 characters). Also, have you configured your project for interoperable mode? https://cloud.google.com/storage/docs/migrating
Finally, you should not need to set the project-id.
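That format constraint can be checked mechanically. A small illustrative check, assuming the uppercase-alphanumeric charset of the example key above (which is gaul's sample, not a real credential):

```shell
# An interoperable GCS access key is "GOOG" followed by 16 more
# characters, 20 in total; flag anything that does not match.
key="GOOGTS7C7FUP3AIRVJTE"
if printf '%s\n' "$key" | grep -Eq '^GOOG[A-Z0-9]{16}$'; then
  echo "key format looks OK"
else
  echo "key format looks wrong"
fi
```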
@t0d0r commented on GitHub (Oct 29, 2014):
Thank you for your reply. I managed to mount Google Cloud, but now I have another problem: when I tried to copy a 100 MB file, I got an I/O error:
The file inside the mounted directory shows size 0.
gsutil seems to work around this problem:
Here is the latest fragment of the output with debugging enabled for the s3fs command:
@gaul commented on GitHub (Oct 30, 2014):
GCS does not support S3 multipart uploads; instead it supports a similar feature, resumable uploads:
https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload#resumable
s3fs-fuse will need to add specific logic to support this. Can you try specifying -o nomultipart as a workaround? Note that this will limit file sizes to 5 GB.
@t0d0r commented on GitHub (Oct 30, 2014):
-o nomultipart fixed the problem; I also managed to upload a 6.1 GB file.
@gaul commented on GitHub (Oct 31, 2014):
@t0d0r I added a wiki document to help future users:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Google-Cloud-Storage
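For reference, the workarounds collected in this thread combine into a mount invocation along these lines (bucket and paths are placeholders; `passwd_file` is the standard s3fs option for naming the credential file):

```shell
# GCS via s3fs: interoperability endpoint plus nomultipart, since GCS
# does not accept S3 multipart uploads (limits file size to 5 GB).
s3fs mybucket /path/to/mountpoint \
    -o url=https://storage.googleapis.com \
    -o passwd_file="${HOME}/.passwd-s3fs" \
    -o nomultipart
```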
@ggtakec Can you close this issue? We already resolved @hetzbh's original issue.
@ggtakec commented on GitHub (Nov 2, 2014):
@t0d0r I'm sorry for the slow reply.
@andrewgaul
Thanks very much.