mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #378] Unable to see directories at all #196
Originally created by @mSys-mislav on GitHub (Mar 21, 2016).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/378
Hello.
I'm using s3fs version 1.79, and I can only connect to my bucket with the sigv2 option; with the default signature it doesn't work and I get this error:
[ERR] curl.cpp:RequestPerform(1828): ### CURLE_RECV_ERROR
[INF] curl.cpp:RequestPerform(1895): ### retrying...
[INF] curl.cpp:RemakeHandle(1558): Retry request. [type=5][url=https://XY/testus/][path=/]
[ERR] curl.cpp:RequestPerform(1902): ### giving up
[ERR] curl.cpp:CheckBucket(2697): Check bucket failed, S3 response:
[CRT] s3fs.cpp:s3fs_check_service(3765): unable to connect - result of checking service.
[ERR] s3fs.cpp:s3fs_exit_fuseloop(3323): Exiting FUSE event loop due to errors
However, when I connect with the following command:
s3fs testus /mnt/testus -o passwd_file=/root/.s3_passwd -o allow_other -o url=https://XY
then it works. Note that I'm able to connect whether or not I specify -o use_path_request_style.
My problem now is the following: I'm using DragonDisk to upload files/folders, but once I mount the bucket, everything shows up as a plain file:
ls -la /mnt/testus/
total 57K
drwxrwxrwx 1 root root 0 Jan 1 1970 .
drwxr-xr-x 3 root root 4.0K Mar 18 16:09 ..
---------- 1 root root 42K Mar 21 08:40 test_image_n1.jpg
---------- 1 root root 3.6K Mar 21 08:45 test_image1.png
---------- 1 root root 3.6K Mar 21 09:27 test_image2.png
---------- 1 root root 0 Mar 21 08:47 test1
---------- 1 root root 0 Mar 21 09:03 test2
---------- 1 root root 0 Mar 21 09:03 test3
---------- 1 root root 0 Mar 21 09:03 test4
Those testX entries are folders, yet they aren't flagged as directories (d---------), and some of them aren't even empty: in DragonDisk I can browse those folders and copy files into them.
What could be the problem? I've tried mounting the bucket with different uid/gid/mode/mask options, but that didn't help.
@ggtakec commented on GitHub (Mar 22, 2016):
@mislav-eu Your report was the first I had heard of the EXFULL(54) error with GnuTLS.
If s3fs does not work unless you specify use_path_request_style, please keep that option.
(Perhaps you are using storage other than Amazon S3?)
Now, about your problem: the cause is that DragonDisk does not create directory objects.
s3fs needs directory objects (a directory object is a zero-byte object that carries metadata).
s3fs aims for compatibility with other S3 tools.
However, when an object carries no permission metadata, s3fs cannot derive permissions for it and shows it as 000 (----------).
You can use the umask, uid, and gid options to work around the permissions.
If those options don't give you the expected result, please let us know the exact options you used.
Regards,
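A directory object like the one described above can be created by hand with any client that lets you set object metadata. Below is a minimal sketch of building the parameters for a boto3 put_object call; the metadata key names (mode, uid, gid, mtime, which boto3 sends as x-amz-meta-* headers) and the S_IFDIR mode encoding are assumptions based on the convention s3fs documents, not something confirmed in this thread:

```python
import time

def directory_object_params(bucket, name, mode=0o750, uid=0, gid=0):
    """Build put_object parameters for a zero-byte s3fs-style directory object.

    The metadata keys follow the x-amz-meta-* convention s3fs reads;
    the exact names and encoding are assumptions.
    """
    return {
        "Bucket": bucket,
        "Key": name.rstrip("/") + "/",       # trailing slash marks a directory
        "Body": b"",                          # zero-byte body
        "ContentType": "application/x-directory",
        "Metadata": {
            "mode": str(0o40000 | mode),      # S_IFDIR | permission bits
            "uid": str(uid),
            "gid": str(gid),
            "mtime": str(int(time.time())),
        },
    }

# e.g. boto3.client("s3").put_object(**directory_object_params("testus", "test1"))
```

Uploading such an object for each "folder" should make s3fs list it with a d flag instead of ----------.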
@mSys-mislav commented on GitHub (Mar 22, 2016):
Hi there. I'm not using Amazon S3 directly but an S3-compatible service from our provider, with Amazon-style credentials and buckets.
I couldn't find any option in DragonDisk to create directory objects (metadata). As mentioned, I'm able to connect and mount the bucket whether or not I specify -o use_path_request_style.
I've also tried umask 0002/0022/0077/0007 and nothing changed; setting gid/uid didn't help either. This is all with files uploaded via DragonDisk.
The directory objects you mentioned also don't work with:
http://s3browser.com/
but everything seems to work just fine with:
http://www.cloudberrylab.com/free-amazon-s3-explorer-cloudfront-IAM.aspx
$ ls -la /mnt/testus/
total 11K
drwxrwx--- 1 www-data www-data 0 Jan 1 1970 .
drwxr-xr-x 3 root root 4.0K Mar 22 08:45 ..
-rwxrwx--- 1 www-data www-data 3.6K Mar 22 09:52 test-image.png
drwxrwx--- 1 www-data www-data 0 Mar 22 09:52 test10
drwxrwx--- 1 www-data www-data 0 Mar 22 09:52 test11
-rwxrwx--- 1 www-data www-data 823 Mar 22 09:58 error.txt
The bucket was mounted with:
$ s3fs testus /mnt/testus -o passwd_file=/XY/XY/XY/.s3_passwd -ogid=XY,uid=XY -o umask=0007 -o use_path_request_style -o allow_other -o sigv2 -o url=https://XY
Do you have some other clients you would recommend for uploading files/folders?
@ggtakec commented on GitHub (Mar 22, 2016):
@mislav-eu I'm sorry, but I'm not familiar with other tools.
I think the S3 console (AWS) and s3cmd are the basics, and DragonDisk, TNT Drive, etc. are used as GUI tools.
However, you should probably use a tool that can set meta information (HTTP headers), so that it works well together with s3fs.
In addition, a tool that creates directories as objects would be better.
I hope that someone recommends some tools.
Regards,
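Whether a client "creates directories as objects" can be checked from a raw key listing: an explicit directory object appears as its own zero-byte key ending in "/", while a folder created only implicitly exists as nothing but a shared key prefix. A hypothetical sketch of that distinction (the function name and key layout are illustrative, not from any tool mentioned here):

```python
def classify_keys(keys):
    """Split a flat S3 key listing into explicit directory objects and
    implicit prefixes (folders that exist only because files live under them)."""
    explicit = {k for k in keys if k.endswith("/")}
    implicit = set()
    for k in keys:
        parents = k.rstrip("/").split("/")[:-1]   # each ancestor "folder"
        for i in range(1, len(parents) + 1):
            prefix = "/".join(parents[:i]) + "/"
            if prefix not in explicit:
                implicit.add(prefix)
    return explicit, implicit

# "test10/" was stored as a real object; "test1" exists only as a prefix
explicit, implicit = classify_keys(["test10/", "test10/a.png", "test1/b.jpg"])
```

Folders in the implicit set are the ones s3fs would show as ---------- files; a tool that uploads the explicit form is the one that cooperates with s3fs.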
@ggtakec commented on GitHub (Mar 30, 2019):
We have kept this issue open for a long time.
I will close this, but if the problem persists, please reopen or post a new issue.