mirror of https://github.com/s3fs-fuse/s3fs-fuse.git (synced 2026-04-25 05:16:00 +03:00)
[GH-ISSUE #741] Unable to delete directories with spaces #424
Originally created by @krtrego on GitHub (Apr 2, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/741
When I try to delete a directory with spaces in its name, I get a permission-denied error. For example:
Gives the error "rm: cannot remove 'test directory': Permission denied"
Similarly,
returns the error "ls: reading directory '.': Operation not permitted"
ls -la confirms I have the proper permissions, and the issue does not occur for other folders in the same directory.
Completes successfully with no errors.
I otherwise have no problems deleting directories that do not have spaces in them.
Version of s3fs being used (s3fs --version): 1.83
Version of fuse being used (pkg-config --modversion fuse): 2.9.4
System information (uname -r): 4.4.0-116-generic
Distro (cat /etc/issue): Ubuntu 16.04.2 LTS
s3fs command line used (if applicable): see fstab
/etc/fstab entry (if applicable):
s3fs#winbuild /mnt/ceph fuse _netdev,allow_other,url=http://customcephendpoint.com,use_path_request_style,umask=0002,uid=112,gid=116 0 0
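For reference, an fstab entry of this form corresponds roughly to a direct invocation like the following (a sketch derived from the entry above; the bucket, mountpoint, and endpoint come from the report, and the options follow s3fs's standard -o syntax):

```shell
# Roughly equivalent command-line mount for the fstab entry above:
# bucket "winbuild" mounted at /mnt/ceph against an on-prem Ceph endpoint.
s3fs winbuild /mnt/ceph \
  -o url=http://customcephendpoint.com \
  -o use_path_request_style \
  -o allow_other \
  -o umask=0002,uid=112,gid=116
```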
s3fs syslog messages (grep s3fs /var/log/syslog, or s3fs outputs)
/var/log/syslog:
Apr 2 08:17:37 jk-build-master s3fs[14909]: init v1.83(commit:unknown) with OpenSSL
To recreate the issue:
mkdir "test folder"
rm -rf "test folder"
rm: cannot remove 'test folder': Permission denied
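One plausible failure mode for space-containing names (an assumption, not confirmed anywhere in this thread) is incorrect percent-encoding of object keys: in S3 path-style requests, a key such as "test folder/" must travel on the wire as "test%20folder/". A quick way to see the expected encoding:

```shell
# Spaces in S3 object keys must be percent-encoded in the request path.
# This one-liner shows the encoded form of the key "test folder/":
python3 -c 'import urllib.parse; print(urllib.parse.quote("test folder/"))'
# prints: test%20folder/
```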
@gaul commented on GitHub (Apr 17, 2018):
This succeeds for me:
Could you try mounting with the flags -d -d -f -o f2, which may reveal more detail?
@deejamin commented on GitHub (Oct 16, 2018):
We are also having the same, or a very similar, issue. If needed, we can open a separate request.
We are using the following combination:
s3fs version: 1.84
OS: rhel 7.4
Fuse: 2.9.2
We are not talking to amazon's s3. We have s3fs talking to an on-prem ceph cluster (version 10.2.9 jewel)
Our fstab mount line looks like:
We can create folders with spaces, but we cannot view their contents or delete them. We can, however, create files within the directory and edit them, even though we cannot see them in a directory listing. We have another PHP S3 SDK that we use to talk directly to Ceph's S3 API (for testing and other operations), and with it we can list contents and remove the directories with spaces in their names.
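Listing the problematic prefix directly through the S3 API, as the PHP SDK does, bypasses s3fs's own path handling. A sketch of the same check with the AWS CLI (bucket name and endpoint here are placeholders, not values from this report):

```shell
# Hypothetical illustration: list a space-containing prefix directly
# against the Ceph RGW S3 endpoint, without going through s3fs.
aws s3api list-objects-v2 \
  --bucket my-bucket \
  --prefix "test test/" \
  --endpoint-url http://ceph.example.com
```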
When we turn on debugging, we see the following in /var/log/messages when trying to remove the "test test" directory:
If there are any other details we can provide, we'll be happy to. Please let us know if anything jumps out from the debug logs.
Thanks,
Majeed
@krtrego commented on GitHub (Oct 16, 2018):
I no longer have s3fs configured, so I can no longer help except to add the outcome of my issue: our IT department came to us complaining that we were making 30 Ceph requests per second, all resulting in 404s. Our use case allowed for the 's3 sync' command, so we ended up using that instead.
Our server logs looked something like:
2018-05-09 06:25:04.467722 7fdd73c94700 1 civetweb: 0x7fddb4003f90: 192.168.61.14 - - [09/May/2018:06:25:04 -0700] "HEAD /path/to/resource HTTP/1.1" 404 0 - s3fs/1.83 (commit hash unknown; OpenSSL)
I do not know for sure that these 404s were caused by the above problem, but it seemed likely at the time.
@deejamin you may check the server logs and see if the same issue has come up on your end.
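The 's3 sync' workaround mentioned above might look like the following (a sketch: the bucket and endpoint are taken from the reporter's fstab entry, while the local and remote paths are placeholders):

```shell
# Sync a local directory to the bucket directly, bypassing s3fs.
# --endpoint-url points the AWS CLI at a non-AWS (e.g. Ceph RGW) endpoint.
aws s3 sync /local/build/output s3://winbuild/output \
  --endpoint-url http://customcephendpoint.com
```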
@gaul commented on GitHub (Jan 24, 2019):
@deejamin I wonder if the s3fs permissions are wrong? Can you test against master, or try mounting with -o mp_umask=027?
@krtrego Sorry to hear that s3fs did not work for your use case. I wonder if you had updatedb running against the s3fs mountpoint, which can cause these symptoms: https://stackoverflow.com/questions/52372095/tracking-down-s3-costs-s3fs
@gaul commented on GitHub (Apr 9, 2019):
Closing due to inactivity. Please reopen if symptoms persist.
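As a follow-up to the updatedb hypothesis raised above: a common mitigation (an assumption, not confirmed in this thread) is to exclude the s3fs mount from mlocate's nightly crawl, which otherwise issues a request per directory entry.

```shell
# Possible mitigation: stop updatedb from crawling the s3fs mount.
# In /etc/updatedb.conf, add the mountpoint and/or the filesystem type
# to the existing prune lists, e.g.:
#   PRUNEPATHS="... /mnt/ceph"
#   PRUNEFS="... fuse.s3fs"
```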