mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #227] mount s3fs error 404 #125
Originally created by @novoxd on GitHub (Aug 11, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/227
s3fs installed from git, OS Ubuntu
when I use:
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -d -f
I get:
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.s3.amazonaws.com/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
when I use:
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -ouse_sse=/etc/passwd-s3fs -ourl=http://s3-eu-west-1.amazonaws.com -d -f
I get:
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.s3-eu-west-1.amazonaws.com/
RequestPerform(1483): HTTP response code 301
^Cs3fs_destroy(2628): destroy
after the 301 it hangs
the key format in ~/.passwd-s3fs is ok
@Doerge commented on GitHub (Aug 11, 2015):
301 means moved. It says in the response that it is located in the eu-central region:
try using
koshka-owncloud.s3.eu-central-1.amazonaws.com.
@novoxd commented on GitHub (Aug 11, 2015):
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -ouse_sse=/etc/passwd-s3fs -ourl=http://koshka-owncloud.s3.eu-central-1.amazonaws.com -d -f
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.koshka-owncloud.s3.eu-central-1.amazonaws.com/koshka-owncloud/.s3.eu-central-1.amazonaws.com/koshka-owncloud/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
@kahing commented on GitHub (Aug 11, 2015):
try s3.eu-central-1.amazonaws.com without the bucket:
@novoxd commented on GitHub (Aug 11, 2015):
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -ouse_sse=/etc/passwd-s3fs -ourl=http://s3.eu-central-1.amazonaws.com -d -f
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.s3.eu-central-1.amazonaws.com/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
@kahing commented on GitHub (Aug 11, 2015):
did you mean to do -ouse_sse=/etc/passwd-s3fs or -o passwd_file?
@novoxd commented on GitHub (Aug 11, 2015):
you are right, changed -ouse_sse to -opasswd_file, nothing changes
@kahing commented on GitHub (Aug 11, 2015):
Have you double checked your access keys? It should be in the format of bucketName:accessKeyId:secretAccessKey.
@novoxd commented on GitHub (Aug 11, 2015):
my ~/.passwd-s3fs (permissions 600):
koshka-owncloud:my_BUCKET_ACCESSKEY:my_BUCKET_CREDENTIAL
my_BUCKET_ACCESSKEY:my_BUCKET_CREDENTIAL
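As an aside, the two line formats shown above (a per-bucket entry and a default entry) can be sketched as a small parser. This is an illustration of the format from this thread, not s3fs's actual parsing code, and the credential values are placeholders:

```python
# Illustrative sketch (not s3fs's real parser) of the two passwd-s3fs
# line formats shown in the thread:
#   bucketName:accessKeyId:secretAccessKey   (per-bucket credentials)
#   accessKeyId:secretAccessKey              (default credentials)
def parse_passwd_s3fs_line(line):
    """Return (bucket, access_key, secret); bucket is None for a default line."""
    parts = line.strip().split(":")
    if len(parts) == 3:
        bucket, access_key, secret = parts
        return bucket, access_key, secret
    if len(parts) == 2:
        access_key, secret = parts
        return None, access_key, secret
    raise ValueError("unexpected passwd-s3fs line: %r" % line)

print(parse_passwd_s3fs_line("koshka-owncloud:AKIA_EXAMPLE:SECRET_EXAMPLE"))
# -> ('koshka-owncloud', 'AKIA_EXAMPLE', 'SECRET_EXAMPLE')
print(parse_passwd_s3fs_line("AKIA_EXAMPLE:SECRET_EXAMPLE"))
# -> (None, 'AKIA_EXAMPLE', 'SECRET_EXAMPLE')
```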
@kahing commented on GitHub (Aug 11, 2015):
could you try specifying -d -d to get more debug output?
@novoxd commented on GitHub (Aug 11, 2015):
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -ourl=http://s3.eu-central-1.amazonaws.com -d -d -f
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
FUSE library version: 2.9.2
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.22
flags=0x0000f7fb
max_readahead=0x00020000
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.s3.eu-central-1.amazonaws.com/
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
@novoxd commented on GitHub (Aug 11, 2015):
I will try to catch the HTTP request to see what's happening
@gaul commented on GitHub (Aug 11, 2015):
@novoxudonoser Can you add -o curldbg to dump the HTTP request?
@novoxd commented on GitHub (Aug 11, 2015):
s3fs koshka-owncloud /s3mnt -ouse_cache=/tmp/cache -ourl=http://s3.eu-central-1.amazonaws.com -d -d -f -o curldbg
set_moutpoint_attribute(3291): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40777)
FUSE library version: 2.9.2
nullpath_ok: 0
nopath: 0
utime_omit_ok: 0
unique: 1, opcode: INIT (26), nodeid: 0, insize: 56, pid: 0
INIT: 7.22
flags=0x0000f7fb
max_readahead=0x00020000
s3fs_init(2595): init
s3fs_check_service(2894): check services.
CheckBucket(2228): check a bucket.
RequestPerform(1467): connecting to URL http://koshka-owncloud.s3.eu-central-1.amazonaws.com/
Hostname was NOT found in DNS cache
Trying 54.231.194.50...
Connected to koshka-owncloud.s3.eu-central-1.amazonaws.com (54.231.194.50) port 80 (#0)
The requested URL returned error: 400 Bad Request
Closing connection 0
RequestPerform(1595): ### CURLE_HTTP_RETURNED_ERROR
RequestPerform(1600): HTTP response code =400
s3fs: Failed to access bucket.
@novoxd commented on GitHub (Aug 11, 2015):
that is strange, my BUCKET_CREDENTIAL is not gym3UoxBTCki7TBP0LQkbpmUvT8= , is it a crypto hash?
@gaul commented on GitHub (Aug 11, 2015):
@novoxudonoser
gym3UoxBTCki7TBP0LQkbpmUvT8= is the HMAC signature, not your secret key. Each request derives its unique signature from the key but does not include it in the request.
@novoxd commented on GitHub (Aug 11, 2015):
@andrewgaul ok, any idea what to do next?
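To illustrate the point about the signature above: an AWS Signature V2 value like the one in the log is a base64-encoded HMAC-SHA1 over a per-request string-to-sign, keyed by the secret key. The string-to-sign and the key below are made up for illustration; they are not the exact request s3fs signed:

```python
import base64
import hashlib
import hmac

# Rough sketch of an AWS Signature V2 request signature (hypothetical
# inputs; not the exact string-to-sign s3fs built for this request).
def sigv2_signature(secret_key, string_to_sign):
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# A V2 string-to-sign is built from the HTTP verb, headers and resource path.
string_to_sign = "GET\n\n\nTue, 11 Aug 2015 00:00:00 GMT\n/koshka-owncloud/"
sig = sigv2_signature("MY_SECRET_KEY_PLACEHOLDER", string_to_sign)
print(sig)  # a 28-character base64 value ending in "=", like the one in the log
```

Because the server recomputes the same HMAC from its copy of the secret key, the signature authenticates the request without ever transmitting the key itself.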
@kahing commented on GitHub (Aug 11, 2015):
double check that you are using s3fs from git. It's doing v2 auth but Frankfurt only supports v4.
@kahing commented on GitHub (Aug 11, 2015):
what does which s3fs return?
@novoxd commented on GitHub (Aug 11, 2015):
@kahing I used this to install:
@novoxd commented on GitHub (Aug 11, 2015):
@kahing
@novoxd commented on GitHub (Aug 11, 2015):
s3fs --version
Amazon Simple Storage Service File System 1.74
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
@kahing commented on GitHub (Aug 11, 2015):
you are using s3fs from /usr/local/bin/s3fs but you are installing s3fs to /usr/bin. Either remove the copy from /usr/local/bin or overwrite that with the new version.
@novoxd commented on GitHub (Aug 11, 2015):
That's it! Thank you a lot man, it's time to laugh at myself.

@novoxd commented on GitHub (Aug 11, 2015):
By the way, what is the biggest file size now? And what about metadata caching?
@kahing commented on GitHub (Aug 11, 2015):
there are max_stat_cache_size and stat_cache_expire options to control metadata caching. It's been tested with files over 5GB but I am not sure what's the biggest file size it would actually work with. Renaming large files doesn't currently work, pending https://github.com/s3fs-fuse/s3fs-fuse/pull/213
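The cache options mentioned above can be combined with the mount command used throughout this thread. The numeric values below are made-up examples for illustration, not recommendations:

```shell
# Illustrative mount using the metadata-cache options named above
# (bucket, mountpoint and URL are the ones from this thread; the
# numeric values are placeholder examples, not tuned recommendations).
s3fs koshka-owncloud /s3mnt \
  -o url=http://s3.eu-central-1.amazonaws.com \
  -o use_cache=/tmp/cache \
  -o max_stat_cache_size=100000 \
  -o stat_cache_expire=900
```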
@gaul commented on GitHub (Aug 21, 2015):
@novoxudonoser Can you close this issue?
@goelmk commented on GitHub (May 21, 2017):
Hi @kahing,
I'm facing the same issue and followed your conversation with @novoxudonoser. It was very helpful until near the end, where you concluded that the wrong s3fs binary was being picked up from another location. That's not the case for me, so my problem remains unresolved, but it seems I'm very close. If you can help me in this regard, it will be much appreciated.
I'm getting an AuthorizationHeaderMalformed error. Here is the curl response:
Look forward to your guidance and help in this regard.
Thanks,
Manoj
@gaul commented on GitHub (May 21, 2017):
@goelmk Please open a new issue and include the flags you launched s3fs with.