[GH-ISSUE #225] Cannot mount directory which hasn't been explicitly created. #122

Closed
opened 2026-03-04 01:42:18 +03:00 by kerem · 5 comments

Originally created by @Doerge on GitHub (Aug 11, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/225

I'm not sure if this is supposed to work or not but here goes:

If you create a key with slashes in the name e.g. "mydirectory/test.txt", without having explicitly created the directory "mydirectory" as an empty key, s3fs-fuse cannot mount it:

```
> /usr/bin/s3fs mybucket:/mydirectory /mymountdir
    set_moutpoint_attribute(4055): PROC(uid=0, gid=0) - MountPoint(uid=0, gid=0, mode=40755)
s3fs_init(3239): init
s3fs_check_service(3598): check services.
    CheckBucket(2537): check a bucket.
    insertV4Headers(1973): computing signature [GET] [/mydirectory/] [] []
    url_to_host(99): url is http://s3.amazonaws.com
    RequestPerform(1631): connecting to URL http://mybucket.s3.amazonaws.com/mydirectory/
    RequestPerform(1670): HTTP response code 404 was returned, returning ENOENT
CheckBucket(2575): Check bucket failed, S3 response: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>mydirectory/</Key><RequestId>47B15E4ACD932636</RequestId><HostId>3XJdNPWR2Z0/1mKzfmmm2qxv+ubLOdQHl6zxWW/1oMFlnpmtDRV1EvZKF1LlttdqjxoyTghMYl0=</HostId></Error>
s3fs: bucket not found
```

It makes sense, because the key "mydirectory" doesn't exist. However, it is shown as a directory in the S3 console, so I expected s3fs-fuse to behave the same way.

The work-around is very simple: create the directory as an empty key. I just didn't know it was supposed to be that way. Perhaps this should be added to the FAQ or to the Limitations section?
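The work-around can be sketched in Python. The only fiddly part is deriving the placeholder key name from the mount sub-path, since s3fs expects the directory object's key to end with a slash. The helper below is plain Python; the boto3 call shown in the comment is an assumption about how you might upload the placeholder, not part of s3fs.

```python
def placeholder_key(mount_subpath: str) -> str:
    """Return the zero-byte placeholder object key s3fs expects
    for a mount sub-path, e.g. '/mydirectory' -> 'mydirectory/'."""
    return mount_subpath.strip("/") + "/"

# With boto3 (assumed to be available), the placeholder would then
# be created as a zero-byte object, roughly:
#   s3.put_object(Bucket="mybucket", Key=placeholder_key("/mydirectory"))
```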

Also thanks for a great project!
Best,
Aske

kerem closed this issue 2026-03-04 01:42:19 +03:00

@ggtakec commented on GitHub (Aug 12, 2015):

Although I have not been able to test this myself, could you try the following?

Please change the directory name "mydirectory" to "mydirectory/" and retry the mount.

Your directory name may be missing the trailing "/"; s3fs operates on the assumption that it is there.
If the mount then succeeds, I will fix the code to handle the case where the directory name does not end with "/".

Thanks in advance for your help.
Takeshi


@Doerge commented on GitHub (Aug 13, 2015):

Sorry, I was a bit unclear. Here is a small Python example that will hopefully make things clearer:

```python
import boto

s3 = boto.connect_s3(<key_id>, <key_secret>)
buck = s3.create_bucket('mybucketname')

k = buck.new_key('mydirectory/test.txt')
k.set_contents_from_string('foobarbaz')
k.close()

k = buck.get_key('mydirectory/test.txt')
assert k is not None

# None of the directory-like keys exist on their own:
k = buck.get_key('mydirectory/')
assert k is None
k = buck.get_key('/mydirectory/')
assert k is None
k = buck.get_key('/mydirectory')
assert k is None
k = buck.get_key('mydirectory')
assert k is None

# Possible solution? Use prefix and check if it is empty?
ks = buck.get_all_keys(prefix='mydirectory/')
assert len(ks) > 0
```

I think the mount check should look for keys with the matching prefix, rather than for the directory itself existing as a key.
Perhaps http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketGET.html could be used with the prefix parameter and max-keys=1?
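The prefix-based check suggested here can be sketched as pure logic over a key listing. The function name and the in-memory key list are illustrative; a real implementation would issue a single List Objects request with prefix and max-keys=1 instead of fetching all keys.

```python
def directory_exists(keys, directory):
    """Return True if any object key falls under the directory prefix.

    Equivalent in spirit to a List Objects request with
    prefix='mydirectory/' and max-keys=1: the directory "exists"
    as soon as one key matches, placeholder object or not.
    """
    prefix = directory.strip("/") + "/"
    return any(k.startswith(prefix) for k in keys)
```

Note that this treats a bare `mydirectory/` placeholder and an implicit `mydirectory/test.txt` identically, which is exactly the behavior being requested.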

Thanks for your help.
Aske


@osterman commented on GitHub (Jun 23, 2016):

Related to this, I would like an option to create the folder in the bucket if it does not exist. Otherwise this has to be handled outside of s3fs (either by hand or with some other tool/library), when it seems like a trivial thing for s3fs to do.


@sqlbot commented on GitHub (Jun 23, 2016):

From the perspective of how the S3 API and console work, s3fs should simply not care whether the root folder placeholder is missing.

This would be consistent with the console behavior -- the console doesn't actually "need" it. If you create the folder `foo` in the console, you get a visible folder, and an empty object named `foo/` in the bucket if you list the keys through the API. You can then upload `bar.txt` "into" the (imaginary) folder, and you have an object named `foo/bar.txt`. Uploading the object `foo/bar.txt` through the API would have the same result -- and, starting with an empty bucket, creating the object `foo/bar.txt` through the API *also* makes that folder visible in the console, even though no `foo/` object actually exists in the bucket.

It seems to me that s3fs has no real need for the `foo/` object to exist, since it's not actually any kind of container, and listing objects with prefix `foo/` and delimiter `/` returns the same result whether the `foo/` object is present or not.

The folder placeholder shouldn't be "needed" in any meaningful sense, since it is reasonable to infer that its permissions are something sane like `drwxrwx--- root root` if the object is absent (or present with no metadata). Using `chmod` or `chown` on the mount point *should* create `foo/` when executed, so that the permissions can be stored in its metadata, just as it *should* for a subdirectory further down in the bucket that was implicitly "manifested" (since no placeholder is created in that case) by uploading objects with a matching prefix directly through the API.
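The inference described above, where every '/'-delimited prefix of an object key implies a directory whether or not a placeholder object exists, can be sketched as a small helper. The function name and the in-memory key list are illustrative, not s3fs internals:

```python
def implied_directories(keys):
    """Infer every directory implied by '/'-delimited object keys,
    whether or not a zero-byte placeholder object exists for it."""
    dirs = set()
    for key in keys:
        if key.endswith("/"):
            # an explicit placeholder object such as 'foo/' names a directory itself
            parts = key.rstrip("/").split("/")
            depth = len(parts)
        else:
            # for 'foo/bar.txt', every proper prefix ('foo/') is an implied directory
            parts = key.split("/")
            depth = len(parts) - 1
        for i in range(1, depth + 1):
            dirs.add("/".join(parts[:i]) + "/")
    return dirs
```

Under this model, `implied_directories(["foo/bar.txt"])` yields the same `foo/` directory as `implied_directories(["foo/"])`, which is exactly the console behavior described above.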


@ggtakec commented on GitHub (Jun 26, 2016):

@Doerge
I'm sorry, I misunderstood your question.

Currently, s3fs cannot use a mount point whose folder (object in S3) does not exist.
The "mydirectory/test.txt" object you created is just the single object "mydirectory/test.txt"; it does not create a "mydirectory/" (or "mydirectory") object in S3.
So s3fs fails to verify the mount point "/mydirectory", and the mount fails.

However, s3fs can create the bucket (and the sub-directory, if one is specified) at startup with the "createbucket" option.

So you can run s3fs as follows (only once):

```
s3fs mybucket:/mydirectory /mymountdir -o createbucket
```

Please try it (@osterman too).
Thanks in advance for your assistance.

@sqlbot Thanks for your help.

Regards,
