[GH-ISSUE #741] Unable to delete directories with spaces #424

Closed
opened 2026-03-04 01:45:27 +03:00 by kerem · 5 comments

Originally created by @krtrego on GitHub (Apr 2, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/741

When I try to delete directories with spaces in them, I get a permission denied error. For example:

```
mkdir "test directory"
rm -rf "test directory"
```

Gives the error `rm: cannot remove 'test directory': Permission denied`.

Similarly,

```
cd "test directory"
ls
```

returns the error `ls: reading directory '.': Operation not permitted`.

`ls -la` confirms I have the proper permissions, and the issue doesn't exist on other folders in the same directory.

```
touch "test file"
rm "test file"
```

Completes successfully with no errors.

I otherwise have no problems deleting directories that do not have spaces in them.

#### Version of s3fs being used (`s3fs --version`)

1.83

#### Version of fuse being used (`pkg-config --modversion fuse`)

2.9.4

#### System information (`uname -r`)

4.4.0-116-generic

#### Distro (`cat /etc/issue`)

Ubuntu 16.04.2 LTS

#### s3fs command line used (if applicable)

see fstab

#### /etc/fstab entry (if applicable)

```
s3fs#winbuild /mnt/ceph fuse _netdev,allow_other,url=http://customcephendpoint.com,use_path_request_style,umask=0002,uid=112,gid=116 0 0
```

#### s3fs syslog messages (`grep s3fs /var/log/syslog`, or s3fs outputs)

/var/log/syslog:

```
Apr 2 08:17:37 jk-build-master s3fs[14909]: init v1.83(commit:unknown) with OpenSSL
```

To recreate the issue:

```ShellSession
$ mkdir "test folder"
$ rm -rf "test folder"
rm: cannot remove 'test folder': Permission denied
```

kerem closed this issue 2026-03-04 01:45:27 +03:00

@gaul commented on GitHub (Apr 17, 2018):

This succeeds for me:

```ShellSession
$ mkdir "test directory"
$ rm -rf "test directory"
```

Could you try to mount with the flags `-d -d -f -o f2`, which may reveal more detail?


@deejamin commented on GitHub (Oct 16, 2018):

We are also having the same, or a very similar, issue. If needed we can open a separate request.

We are using the following combination:

- s3fs version: 1.84
- OS: RHEL 7.4
- FUSE: 2.9.2

We are not talking to Amazon's S3. We have s3fs talking to an on-prem Ceph cluster (version 10.2.9, Jewel).

Our fstab mount line looks like:

```
s3fs#web /mnt/ceph_buckets/web fuse passwd_file=/special/place/web.bucket.creds,_netdev,url=https://sub-domain.ourdomain.edu/,use_path_request_style,nocopyapi,default_acl=public-read,dbglevel=debug,uid=local_unix_user,gid=local_unix_user,users,allow_other 0 0
```

We can create folders with spaces, but we can't view their contents or delete them. We can, however, create files within the directory and edit them, even though we can't see them in a directory listing. We have another PHP S3 SDK that we use to talk directly to Ceph's S3 (for testing and other operations), and we can use it to list the contents and remove the directories with spaces in their names.

When turning on debug, we see the following in `/var/log/messages` when trying to remove the "test test" directory:

```
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test][flags=231424]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:  [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      [tpath=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL is https://sub-domain.ourdomain.edu/web?delimiter=/&max-keys=1000&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL changed is https://sub-domain.ourdomain.edu/web/?delimiter=/&max-keys=1000&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=test%20test/] []
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      url is https://sub-domain.ourdomain.edu
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: curl.cpp:RequestPerform(2078): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx0000000000000000003d7-005bc5fd27-148d9-default</RequestId><HostId>148d9-default-default</HostId></Error>
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:list_bucket(2513): ListBucketRequest returns with error.
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:s3fs_readdir(2442): list_bucket returns error(-1).
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test][flags=231424]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:  [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      [tpath=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL is https://sub-domain.ourdomain.edu/web?delimiter=/&max-keys=1000&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL changed is https://sub-domain.ourdomain.edu/web/?delimiter=/&max-keys=1000&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=test%20test/] []
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      url is https://sub-domain.ourdomain.edu
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: curl.cpp:RequestPerform(2078): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx0000000000000000003c2-005bc5fd27-148e8-default</RequestId><HostId>148e8-default-default</HostId></Error>
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:list_bucket(2513): ListBucketRequest returns with error.
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:s3fs_readdir(2442): list_bucket returns error(-1).
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:  [path=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      [tpath=/test test]
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL is https://sub-domain.ourdomain.edu/web?delimiter=/&max-keys=2&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      URL changed is https://sub-domain.ourdomain.edu/web/?delimiter=/&max-keys=2&prefix=test%20test/
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=test%20test/] []
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]:      url is https://sub-domain.ourdomain.edu
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: curl.cpp:RequestPerform(2078): HTTP response code 403, returning EPERM. Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>SignatureDoesNotMatch</Code><RequestId>tx000000000000000003d8f-005bc5fd27-3775-default</RequestId><HostId>3775-default-default</HostId></Error>
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:list_bucket(2513): ListBucketRequest returns with error.
Oct 16 10:00:55 s3fsserverhostname s3fs[3747]: s3fs.cpp:directory_empty(1115): list_bucket returns error.
```
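The 403s above carry `SignatureDoesNotMatch`, which means the client and the server computed different HMAC signatures for the same request. As a minimal sketch of why a space in a key name can trigger this (the secret key, date, and string-to-sign values below are invented, and a real AWS Signature V2 string-to-sign contains additional fields): if the two sides disagree about whether the key appears as `test test` or `test%20test` when building the string-to-sign, the HMAC-SHA1 signatures necessarily diverge:

```shell
# Illustration only: two string-to-sign values differing solely in how the
# space in the key is encoded yield different HMAC-SHA1 signatures.
secret="EXAMPLESECRETKEY"
sts_raw='GET\n\n\nTue, 16 Oct 2018 14:00:55 GMT\n/web/test test/'
sts_enc='GET\n\n\nTue, 16 Oct 2018 14:00:55 GMT\n/web/test%20test/'
sig_raw=$(printf '%b' "$sts_raw" | openssl dgst -sha1 -hmac "$secret" -binary | base64)
sig_enc=$(printf '%b' "$sts_enc" | openssl dgst -sha1 -hmac "$secret" -binary | base64)
echo "raw:     $sig_raw"
echo "encoded: $sig_enc"
[ "$sig_raw" != "$sig_enc" ] && echo "signatures differ -> server would return SignatureDoesNotMatch"
```

Whether the mismatch here lies in s3fs or in the Ceph radosgw's handling of percent-encoded keys is not established by this log alone.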

If there are any other details we can provide, we'll be happy to. Please let us know if anything jumps out from the debug logs.

Thanks,
Majeed


@krtrego commented on GitHub (Oct 16, 2018):

I no longer have s3fs configured, so I can no longer help except to add the outcome of my issue: our IT department came to us complaining that we were making 30 Ceph requests per second, all resulting in 404s. Our use case allowed for the `s3 sync` command, so we ended up doing that.

Our server logs looked something like:

```
2018-05-09 06:25:04.467722 7fdd73c94700 1 civetweb: 0x7fddb4003f90: 192.168.61.14 - - [09/May/2018:06:25:04 -0700] "HEAD /path/to/resource HTTP/1.1" 404 0 - s3fs/1.83 (commit hash unknown; OpenSSL)
```

I do not know for sure that these 404s were caused by the above problem, but it seemed likely at the time.

@deejamin you may want to check the server logs and see if the same issue has come up on your end.
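Checking the gateway logs for this pattern can be sketched as follows (the log path is a guess that varies by deployment; the sample line below is the one quoted above, written to a temp file so the filter can be demonstrated):

```shell
# Hypothetical check: count 404 responses attributed to s3fs in the
# radosgw/civetweb access log.
cat > /tmp/sample_rgw.log <<'EOF'
2018-05-09 06:25:04.467722 7fdd73c94700 1 civetweb: 0x7fddb4003f90: 192.168.61.14 - - [09/May/2018:06:25:04 -0700] "HEAD /path/to/resource HTTP/1.1" 404 0 - s3fs/1.83 (commit hash unknown; OpenSSL)
EOF
# Filter to requests from the s3fs user agent, then count 404 status codes.
count=$(grep 's3fs/' /tmp/sample_rgw.log | grep -c '" 404 ')
echo "s3fs 404s: $count"
```

Against a real log, replace `/tmp/sample_rgw.log` with the actual gateway log path.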


@gaul commented on GitHub (Jan 24, 2019):

@deejamin I wonder if the s3fs permissions are wrong? Can you test against master or try mounting with `-o mp_umask=027`?
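Applied to the fstab entry at the top of this issue, the suggestion would look roughly like this as a direct invocation (bucket, mountpoint, endpoint, and the other options are copied from the original report; only `mp_umask` is new, and this is a sketch rather than a verified command):

```shell
# Sketch: the reporter's fstab mount expressed on the command line,
# with the suggested mp_umask option added.
s3fs winbuild /mnt/ceph \
    -o url=http://customcephendpoint.com \
    -o use_path_request_style \
    -o umask=0002,uid=112,gid=116 \
    -o mp_umask=027 \
    -o allow_other
```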

@krtrego Sorry to hear that s3fs did not work for your use case. I wonder if you had `updatedb` running against the s3fs mountpoint, which can cause these symptoms: https://stackoverflow.com/questions/52372095/tracking-down-s3-costs-s3fs


@gaul commented on GitHub (Apr 9, 2019):

Closing due to inactivity. Please reopen if symptoms persist.
