[GH-ISSUE #2025] s3fs does not work properly when bucket versioning is enabled #1020

Open
opened 2026-03-04 01:50:41 +03:00 by kerem · 21 comments
Owner

Originally created by @garenchan on GitHub (Aug 26, 2022).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2025

Additional Information

The following information is essential for us to help you. Omitting these details may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve this information are oriented toward GNU/Linux distributions, so you may need to use alternatives if you run s3fs on macOS or BSD.

Version of s3fs being used (s3fs --version)

Amazon Simple Storage Service File System V1.91 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

Name : fuse
Version : 2.9.2
Release : 11.el7
Architecture: x86_64
Install Date: Tue, Jul 12 2022 09:39:58
Group : System Environment/Base
Size : 223297
License : GPL+
Signature : RSA/SHA256, Mon, Nov 12 2018 22:25:34, Key ID 24c6a8a7f4a80eb5
Source RPM : fuse-2.9.2-11.el7.src.rpm
Build Date : Wed, Oct 31 2018 05:32:35
Build Host : x86-01.bsys.centos.org
Relocations : (not relocatable)
Packager : CentOS BuildSystem http://bugs.centos.org
Vendor : CentOS
URL : https://github.com/libfuse/libfuse
Summary : File System in Userspace (FUSE) utilities
Description :
With FUSE it is possible to implement a fully functional filesystem in a
userspace program. This package contains the FUSE userspace tools to
mount a FUSE filesystem.

Kernel information (uname -r)

3.10.0-1127.el7.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"

s3fs command line used, if applicable

s3fs test-bucket /mnt/test2/ -f -o passwd_file=/etc/s3cred,use_path_request_style,url=http://10.134.80.223:9001,dbglevel=info

/etc/fstab entry, if applicable

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.

2022-08-26T09:15:27.741Z [INF]   s3fs.cpp:list_bucket(2707): [path=/]
2022-08-26T09:15:27.741Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/]
2022-08-26T09:15:27.741Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=1000&prefix=
2022-08-26T09:15:27.741Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=1000&prefix=
2022-08-26T09:15:27.741Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=] []
2022-08-26T09:15:27.741Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:27.744Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:27.745Z [INF]   s3fs.cpp:readdir_multi_head(2579): [path=/][list=0]
2022-08-26T09:15:27.745Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/][bpath=parent-dir/][save=/parent-dir/][sseckeypos=18446744073709551615]
2022-08-26T09:15:27.745Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/
2022-08-26T09:15:27.745Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/
2022-08-26T09:15:27.745Z [INF]       curl_multi.cpp:Request(297): [count=1]
2022-08-26T09:15:27.745Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/] [] []
2022-08-26T09:15:27.745Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:27.747Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:27.748Z [INF]       cache.cpp:AddStat(343): add stat cache entry[path=/parent-dir/]
2022-08-26T09:15:27.748Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/parent-dir]
2022-08-26T09:15:27.936Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/]
2022-08-26T09:15:28.945Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/]
2022-08-26T09:15:29.863Z [INF] s3fs.cpp:s3fs_opendir(2514): [path=/][flags=0x18800]
2022-08-26T09:15:29.863Z [INF] s3fs.cpp:s3fs_readdir(2663): [path=/]
2022-08-26T09:15:29.863Z [INF]   s3fs.cpp:list_bucket(2707): [path=/]
2022-08-26T09:15:29.863Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/]
2022-08-26T09:15:29.863Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=1000&prefix=
2022-08-26T09:15:29.863Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=1000&prefix=
2022-08-26T09:15:29.863Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=] []
2022-08-26T09:15:29.863Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:29.866Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:29.866Z [INF]   s3fs.cpp:readdir_multi_head(2579): [path=/][list=0]
2022-08-26T09:15:29.866Z [INF]       curl_multi.cpp:Request(297): [count=0]
2022-08-26T09:15:29.866Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/parent-dir]
2022-08-26T09:15:29.954Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/]
2022-08-26T09:15:30.195Z [INF] s3fs.cpp:s3fs_opendir(2514): [path=/parent-dir][flags=0x38800]
2022-08-26T09:15:30.196Z [INF] s3fs.cpp:s3fs_readdir(2663): [path=/parent-dir]
2022-08-26T09:15:30.196Z [INF]   s3fs.cpp:list_bucket(2707): [path=/parent-dir]
2022-08-26T09:15:30.196Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/parent-dir]
2022-08-26T09:15:30.196Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=1000&prefix=parent-dir/
2022-08-26T09:15:30.196Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=1000&prefix=parent-dir/
2022-08-26T09:15:30.196Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=parent-dir/] []
2022-08-26T09:15:30.196Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.198Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:30.198Z [INF]   s3fs.cpp:readdir_multi_head(2579): [path=/parent-dir/][list=0]
2022-08-26T09:15:30.198Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir/][bpath=sub-dir/][save=/parent-dir/sub-dir/][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.198Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.198Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.198Z [INF]       curl_multi.cpp:Request(297): [count=1]
2022-08-26T09:15:30.198Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir/] [] []
2022-08-26T09:15:30.198Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.200Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.200Z [INF]     s3fs.cpp:readdir_multi_head(2649): Could not find /parent-dir/sub-dir file in stat cache.
2022-08-26T09:15:30.200Z [INF] s3fs.cpp:s3fs_opendir(2514): [path=/parent-dir][flags=0x38800]
2022-08-26T09:15:30.200Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/parent-dir]
2022-08-26T09:15:30.200Z [INF] s3fs.cpp:s3fs_readdir(2663): [path=/parent-dir]
2022-08-26T09:15:30.200Z [INF]   s3fs.cpp:list_bucket(2707): [path=/parent-dir]
2022-08-26T09:15:30.200Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/parent-dir]
2022-08-26T09:15:30.200Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=1000&prefix=parent-dir/
2022-08-26T09:15:30.200Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=1000&prefix=parent-dir/
2022-08-26T09:15:30.200Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=1000&prefix=parent-dir/] []
2022-08-26T09:15:30.200Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.202Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:30.202Z [INF]   s3fs.cpp:readdir_multi_head(2579): [path=/parent-dir/][list=0]
2022-08-26T09:15:30.202Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir/][bpath=sub-dir/][save=/parent-dir/sub-dir/][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.202Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.202Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.202Z [INF]       curl_multi.cpp:Request(297): [count=1]
2022-08-26T09:15:30.202Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir/] [] []
2022-08-26T09:15:30.202Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.203Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.203Z [INF]     s3fs.cpp:readdir_multi_head(2649): Could not find /parent-dir/sub-dir file in stat cache.
2022-08-26T09:15:30.204Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/parent-dir/sub-dir]
2022-08-26T09:15:30.204Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir]
2022-08-26T09:15:30.204Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.204Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir
2022-08-26T09:15:30.204Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir
2022-08-26T09:15:30.204Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir] [] []
2022-08-26T09:15:30.204Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.205Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.205Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir/]
2022-08-26T09:15:30.205Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir/][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.205Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.205Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.205Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir/] [] []
2022-08-26T09:15:30.205Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.206Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.206Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir_$folder$]
2022-08-26T09:15:30.206Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir_$folder$][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.206Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir_%24folder%24
2022-08-26T09:15:30.206Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir_%24folder%24
2022-08-26T09:15:30.206Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir_$folder$] [] []
2022-08-26T09:15:30.206Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.207Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.207Z [INF]   s3fs.cpp:list_bucket(2707): [path=/parent-dir/sub-dir]
2022-08-26T09:15:30.207Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/parent-dir/sub-dir]
2022-08-26T09:15:30.207Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/
2022-08-26T09:15:30.207Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/
2022-08-26T09:15:30.207Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/] []
2022-08-26T09:15:30.207Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.209Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:30.209Z [INF] s3fs.cpp:s3fs_getattr(763): [path=/parent-dir/sub-dir]
2022-08-26T09:15:30.209Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir]
2022-08-26T09:15:30.209Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.209Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir
2022-08-26T09:15:30.209Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir
2022-08-26T09:15:30.209Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir] [] []
2022-08-26T09:15:30.209Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.210Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.210Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir/]
2022-08-26T09:15:30.210Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir/][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.210Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.210Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir/
2022-08-26T09:15:30.210Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir/] [] []
2022-08-26T09:15:30.210Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.211Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.211Z [INF]       curl.cpp:HeadRequest(3100): [tpath=/parent-dir/sub-dir_$folder$]
2022-08-26T09:15:30.211Z [INF]       curl.cpp:PreHeadRequest(3060): [tpath=/parent-dir/sub-dir_$folder$][bpath=][save=][sseckeypos=18446744073709551615]
2022-08-26T09:15:30.211Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir_%24folder%24
2022-08-26T09:15:30.211Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/parent-dir/sub-dir_%24folder%24
2022-08-26T09:15:30.211Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [HEAD] [/parent-dir/sub-dir_$folder$] [] []
2022-08-26T09:15:30.211Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.212Z [INF]       curl.cpp:RequestPerform(2376): HTTP response code 404 was returned, returning ENOENT
2022-08-26T09:15:30.212Z [INF]   s3fs.cpp:list_bucket(2707): [path=/parent-dir/sub-dir]
2022-08-26T09:15:30.212Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/parent-dir/sub-dir]
2022-08-26T09:15:30.212Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/
2022-08-26T09:15:30.212Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/
2022-08-26T09:15:30.212Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=parent-dir/sub-dir/] []
2022-08-26T09:15:30.212Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.214Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2022-08-26T09:15:30.214Z [INF] s3fs.cpp:s3fs_rmdir(1051): [path=/parent-dir]
2022-08-26T09:15:30.214Z [INF]   s3fs.cpp:list_bucket(2707): [path=/parent-dir]
2022-08-26T09:15:30.214Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/parent-dir]
2022-08-26T09:15:30.214Z [INF]       curl_util.cpp:prepare_url(255): URL is http://10.134.80.223:9001/test-bucket?delimiter=/&max-keys=2&prefix=parent-dir/
2022-08-26T09:15:30.214Z [INF]       curl_util.cpp:prepare_url(288): URL changed is http://10.134.80.223:9001/test-bucket/?delimiter=/&max-keys=2&prefix=parent-dir/
2022-08-26T09:15:30.214Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=parent-dir/] []
2022-08-26T09:15:30.214Z [INF]       curl_util.cpp:url_to_host(332): url is http://10.134.80.223:9001
2022-08-26T09:15:30.216Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200

Details about issue

I deployed a MinIO cluster at http://10.134.80.223:9001 and created a bucket test-bucket with versioning enabled.
I then mounted the bucket on the directory /mnt/test2/ via s3fs.
I created a directory parent-dir/sub-dir in /mnt/test2/.
Finally, when I tried to force-delete the directory parent-dir, the command failed with a message that the directory is not empty.

[root@demo test2]# pwd
/mnt/test2
[root@demo test2]# mkdir parent-dir
[root@demo test2]# mkdir parent-dir/sub-dir
[root@demo test2]# rm -rf parent-dir
rm: cannot remove 'parent-dir': Directory not empty
[root@demo test2]# ls -lsh parent-dir/
ls: cannot access parent-dir/sub-dir: No such file or directory
total 0
? ?????????? ? ? ? ?            ? sub-dir

I also wonder why we can still see the deleted directory in listings even though we cannot access it.
I look forward to your help.
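The log above shows the mismatch directly: the delimiter listing for prefix=parent-dir/ still reports sub-dir/, while every HEAD on the marker object (/parent-dir/sub-dir, /parent-dir/sub-dir/, /parent-dir/sub-dir_$folder$) returns 404. The following toy model (not s3fs source; the bucket state is a hypothetical simplification of what the versioned store appears to be returning) sketches how that mismatch yields exactly the symptoms observed: `ls` shows the entry, `stat` on it fails, and `rmdir` sees a non-empty directory.

```python
# Toy model of the listing-vs-HEAD mismatch seen in the s3fs debug log.
# Hypothetical bucket state: the directory marker key still appears in
# delimiter listings, but a HEAD on it returns 404 (as under versioning
# in this report).
keys_in_listing = {"parent-dir/", "parent-dir/sub-dir/"}
headable_keys = {"parent-dir/"}  # HEAD on parent-dir/sub-dir/ -> 404

def readdir(prefix):
    """Entries a delimiter-style listing would report under prefix."""
    return sorted(k for k in keys_in_listing
                  if k.startswith(prefix) and k != prefix)

def stat(key):
    """HEAD-based stat: None mimics the 404/ENOENT in the log."""
    return key if key in headable_keys else None

def rmdir(prefix):
    """rmdir check as in the log: list with the prefix; any child blocks it."""
    return "ENOTEMPTY" if readdir(prefix) else "OK"

print(readdir("parent-dir/"))       # listing still shows sub-dir/
print(stat("parent-dir/sub-dir/"))  # None: "No such file or directory"
print(rmdir("parent-dir/"))         # ENOTEMPTY: "Directory not empty"
```

Under these assumptions the three outcomes reproduce the transcript: the entry is visible, unstat-able, and blocks removal.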

minio cluster `http://10.134.80.223:9001` and created a bucket `test-bucket` with versioning enabled. I then mount the bucket to directory `/mnt/test2/` through `S3fs`. I create a directory `parent-dir/sub-dir` in `/mnt/test2/`. Finally, I failed to force the deletion of the directory `parent-dir`, prompting that the directory is not empty. ``` [root@demo test2]# pwd /mnt/test2 [root@demo test2]# mkdir parent-dir [root@demo test2]# mkdir parent-dir/sub-dir [root@demo test2]# rm -rf parent-dir rm: 无法删除"parent-dir": 目录非空 [root@demo test2]# ls -lsh parent-dir/ ls: 无法访问parent-dir/sub-dir: 没有那个文件或目录 总用量 0 ? ?????????? ? ? ? ? ? sub-dir ``` I also wonder why we can see deleted directories even though we can't access them. I look forward to your help.

@ggtakec commented on GitHub (Aug 27, 2022):

It seems that the cause is that `sub-dir` could not be deleted when `parent-dir` was deleted.
After the failed delete, the `sub-dir` entry you see does not appear to be an object named `sub-dir/` that s3fs recognizes.

At this time:

  • how does this object look on the minio cluster side?
    (Which is the object sub-dir or sub-dir/?)
  • And does that object have attributes such as x-amz-****?

If you run s3fs with the compat_dir option, will there be any changes?

(Although it's not a definitive answer,) I think the reason for this is that the object `sub-dir` already existed before `parent-dir/sub-dir` was created.
Please delete the target object once from the MinIO side, not from s3fs, and try the same thing again.
I hope you don't get the same error.


@garenchan commented on GitHub (Aug 27, 2022):

I think the problem might be caused by the `directory_empty` function.

When bucket versioning is enabled, the `ListBucket` API returns deleted subdirectories. As shown below, `parent/sub/` has actually been deleted, so the `directory_empty` function mistakenly determines that the `parent` directory is not empty.

If we try to get the attributes of the `parent/sub/` directory, we get a 404 error.

```xml
<ListBucketResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
    <Name>test</Name>
    <Prefix>parent/</Prefix>
    <Marker></Marker>
    <MaxKeys>1000</MaxKeys>
    <Delimiter>/</Delimiter>
    <IsTruncated>false</IsTruncated>
    <Contents>
        <Key>parent/</Key>
        <LastModified>2022-08-26T13:13:08.712Z</LastModified>
        <ETag>&#34;d41d8cd98f00b204e9800998ecf8427e&#34;</ETag>
        <Size>0</Size>
        <Owner>
            <ID>02d6176db174dc93cb1b899f7c6078f08654445fe8cf1b6ce98d8855f66bdbf4</ID>
            <DisplayName>minio</DisplayName>
        </Owner>
        <StorageClass>STANDARD</StorageClass>
    </Contents>
    <CommonPrefixes>
        <Prefix>parent/sub/</Prefix>
    </CommonPrefixes>
</ListBucketResult>
```
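To make the failure mode concrete, here is a minimal sketch (hypothetical names; the real check lives in s3fs's `directory_empty`/`list_bucket`) of an emptiness test that trusts the listing alone, which is fooled by a stale `CommonPrefixes` entry:

```cpp
#include <string>
#include <vector>

// Hypothetical, simplified model of one ListBucket response page.
struct ListResult {
    std::vector<std::string> contents;        // <Key> entries
    std::vector<std::string> common_prefixes; // <CommonPrefixes><Prefix> entries
};

// A directory_empty-style check that trusts the listing alone:
// any key or prefix other than the directory object itself means "not empty".
bool directory_empty_naive(const std::string& dir, const ListResult& r)
{
    for (const auto& key : r.contents) {
        if (key != dir) return false;   // some other object under the directory
    }
    return r.common_prefixes.empty();   // any sub-prefix counts as content
}
```

With the response shown above (`Contents` = `parent/`, `CommonPrefixes` = `parent/sub/`), `directory_empty_naive("parent/", r)` reports the directory as non-empty even though `parent/sub/` only survives as a versioned listing entry, so `rmdir` fails.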

@ggtakec commented on GitHub (Aug 27, 2022):

I think this looks like an issue with MinIO's versioning and ListBucket API.
We should check whether MinIO's ListBucket returns deleted objects.
If the ListBucket result contains deleted objects, this issue will occur and we need to find a way around it.


@garenchan commented on GitHub (Aug 27, 2022):

I think it might make sense to return deleted subdirectories. If the ListBucket API does not return deleted subdirectories, we may not be able to retrieve or restore deleted objects in them.

I have tested rclone mount and it worked well.

Do you have any plans to deal with this problem?


@ggtakec commented on GitHub (Aug 28, 2022):

A ListBucket result that includes deleted (past-versioned) objects is problematic for s3fs.
If the ListBucket response behaves as if an object exists even though it has been deleted, s3fs currently cannot handle it.

Deleting files/directories (objects) via FUSE is split into a sequence of system calls into s3fs.
The directory deletion is called last, after the objects under that directory have been deleted.
At that point, the directory deletion fails because s3fs finds that the objects it deleted just before still appear to exist.

If MinIO attached information such as a deletion marker to each object in the response, s3fs might be able to deal with this by excluding those objects.
However, that processing would be quite special-cased for us.

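The sequence above can be modeled with a toy versioned "bucket" where delete only adds a marker, and where listing does not filter marked keys (the MinIO behavior being discussed; all names here are illustrative, not s3fs or MinIO code):

```cpp
#include <map>
#include <set>
#include <string>

// Toy model of a versioned bucket: deleting a key only records a delete
// marker, and (the problematic part) listing does not filter marked keys.
struct VersionedBucket {
    std::map<std::string, bool> objects; // key -> has delete marker

    void put(const std::string& k) { objects[k] = false; }
    void del(const std::string& k) { objects[k] = true; } // adds delete marker

    // List keys under `prefix`, ignoring delete markers (the bug):
    // a key deleted just before still shows up in the next listing.
    std::set<std::string> list_under(const std::string& prefix) const {
        std::set<std::string> out;
        for (const auto& [key, deleted] : objects) {
            (void)deleted; // delete markers are NOT filtered out here
            if (key.rfind(prefix, 0) == 0 && key.size() > prefix.size())
                out.insert(key);
        }
        return out;
    }
};
```

With this model, `rm -r` first deletes `parent/sub/`, then `rmdir("parent")` lists under `parent/` and still sees `parent/sub/`, so the final rmdir fails with "directory not empty".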

@garenchan commented on GitHub (Aug 28, 2022):

@ggtakec.

I tried to fix the problem and it worked fine after testing.
If you have time, you can take a look at the code. Although it is rough, I hope it can help you.
https://github.com/garenchan/s3fs-fuse/pull/1


@creeew commented on GitHub (Aug 30, 2022):

Hi @ggtakec, we have the same issue. Deleted directories still exist when using MinIO with versioning, which really bothers us. We would really appreciate you solving this problem.


@ggtakec commented on GitHub (Aug 30, 2022):

@garenchan
I checked your modified code and understood it.
(Please correct me if I'm wrong)

My understanding is that MinIO lists objects that should have been deleted in the ListBucket response, but HEAD requests for those objects result in an error.
(Is this understanding correct?)

About your modified code:
I think the stat caches for deleted directory objects should not need to be deleted again.
This behavior seems unnecessary, as s3fs removes an object's information from the stat cache immediately after deleting it.
Also, I think not calling filler on directory objects will cause other problems.
You work around this problem by doing almost the same thing as readdir inside directory_empty, but I think this will create new performance problems.

I believe this issue's root problem is that the ListBucket response lists objects that have been deleted.
Therefore, s3fs cannot distinguish whether an object that should have been deleted merely lingers in the listing or actually exists.
(Aggressively treating an object as non-existent just because the HEAD request returns ENOENT may affect compatibility with other distributed object storages and with old s3fs.)

Is there a way, in the MinIO specification, to prevent deleted objects from being included in the ListBucket response when versioning is enabled?
(e.g. options, parameters, etc.)
Without resolving this root cause, fixing the s3fs side will create new compatibility problems with other distributed object stores.


@garenchan commented on GitHub (Aug 31, 2022):

@ggtakec

> My understanding is that MinIO lists objects that should have been deleted in the ListBucket response, but HEAD requests for those objects result in an error. (Is this understanding correct?)

Yes.

My code changes include the following:

  1. In the directory_empty function, we need to check whether the CommonPrefixes entries actually exist. In general, we may only need to call the get_object_attribute function once more, so the impact on performance may be small.
  2. In the s3fs_readdir function, we do not use the stat cache for CommonPrefixes, because they might have been deleted. In addition, filler should not be applied to objects that are not found.

I may have missed something else.

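Point 1 above can be sketched as follows (hypothetical signatures; `head_object` stands in for s3fs's `get_object_attribute` and returns false on HTTP 404):

```cpp
#include <functional>
#include <string>
#include <vector>

// Stricter emptiness check: a prefix listed by ListBucket may be a stale
// versioned entry, so confirm each one with a HEAD request before treating
// it as real content. head_object(path) -> true if the object exists,
// false if the server answered 404.
bool directory_empty_strict(const std::vector<std::string>& common_prefixes,
                            const std::function<bool(const std::string&)>& head_object)
{
    for (const auto& prefix : common_prefixes) {
        if (head_object(prefix))
            return false; // at least one live child exists
    }
    return true; // every listed prefix 404s: treat the directory as empty
}
```

The cost is one extra HEAD per listed prefix, which is where the performance concern raised in the thread comes from.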

@ggtakec commented on GitHub (Aug 31, 2022):

> Because they might have been deleted. In addition, filler should not be applied to not found objects.

Objects whose HEAD request fails may still need to be shown.
This is the case where s3fs treats an object stored flat in the bucket as a path containing directories, based on its object name.
There are cases where an intermediate directory that appears to exist does not actually exist as an object.
(This happens, for example, when objects are uploaded using only the API.)
s3fs will have to call filler to handle these objects as well.

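The case described above can be sketched like this (illustrative helper, not s3fs code): directory entries are synthesized from flat key names, so an entry like `a` can be implied by the key `a/b.txt` even though a HEAD on `a/` would return 404 — which is why filler cannot simply skip every entry whose HEAD fails:

```cpp
#include <set>
#include <string>
#include <vector>

// Derive the immediate children of `dir` implied by flat object keys.
// e.g. the key "a/b.txt" implies a child "a" of the root, and a child
// "b.txt" of "a/", even if no directory object "a/" exists in the bucket.
std::set<std::string> implied_children(const std::string& dir,
                                       const std::vector<std::string>& keys)
{
    std::set<std::string> out;
    for (const auto& key : keys) {
        if (key.rfind(dir, 0) != 0 || key.size() <= dir.size()) continue;
        std::string rest = key.substr(dir.size());
        out.insert(rest.substr(0, rest.find('/'))); // first path component
    }
    return out;
}
```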

@garenchan commented on GitHub (Aug 31, 2022):

Yes, you are right. This is a difficult problem.


@marcinkuk commented on GitHub (Sep 1, 2022):

I can help with tests.


@ggtakec commented on GitHub (Sep 2, 2022):

I found the following Issue:
https://github.com/minio/minio/issues/10914

After all, it seems that a deleted directory is listed as if it had not been deleted when the bucket has versioning enabled.
The only way around this problem seems to be to issue a HEAD (or GET) request for every object listed in directory_empty (similar to garenchan's code).
However, that would be very slow.

Also, because ListBucket lists a deleted directory, another listing call will make the directory appear to have been revived in the same way, even if directory_empty is modified.
The only way around that is to not call filler (the same approach as garenchan's modified code).
But I think that would cause another problem.

I need to think about this issue a little more.


@marcinkuk commented on GitHub (Sep 2, 2022):

I think it is better to have functionality with poor performance than no functionality at all.
Performance can be improved later with some solution.
In order not to affect current performance, it could be made accessible via a mount option, for example "lazy_delete".


@ggtakec commented on GitHub (Sep 3, 2022):

I have prepared fixed code for testing. (This is similar to garenchan's code.)
https://github.com/ggtakec/s3fs-fuse/tree/minio_baseof_no_dir_obj_listing

The explanation is long, but please read it.

I have posted PR #2023, which fixes a bug in cases where the directory object does not exist.
It is related to the compat_dir option.
The processing of filler etc. discussed in this issue has also changed since v1.91.
The test code I created is based on this #2023.

You should be aware of the differences between v1.91 and the master code.
They differ in the handling of alternative directory names and its default value:
the nosup_compat_dir option has been deprecated in favor of compat_dir, and
the current master code disables alternative directory names by default (the default changed from enabled to disabled).

At the time of this issue (the inappropriate s3fs behavior garenchan found), s3fs was probably running with alternative directory names enabled.
However, what garenchan did was create and delete directories via s3fs itself.
In other words, this scenario works even if alternative directory names is disabled.

The test code adds a new option called strict_dir_empty.
Start with this option when testing.

The strict_dir_empty option causes each object listed in the ListBucket response to be checked with a separate HEAD request, similar to readdir.
Any object whose HEAD request returns 404 is treated as non-existent.
I believe this will allow the directory deletion from this issue's reproduction steps to work even with MinIO's versioned buckets.

In addition, please do not start by adding compat_dir (specifying both together is not recommended).
I tested the same situation manually (without MinIO) and the delete worked fine.

Could you build and test this code against MinIO?


@creeew commented on GitHub (Sep 5, 2022):

@ggtakec Thank you for your good work. Deleted directories no longer appear in s3fs when MinIO has versioning enabled.
However, there are remaining issues that may need further fixes.

With the s3fs stat cache, when multiple servers mount the same bucket:

  1. A mount point caches the directory stat even after the directory has been deleted through another mount.
    To reproduce (with two mount points, mnt1 and mnt2):
    in mnt1, create a directory with a subdirectory: mkdir -p /mnt1/a/b
    in mnt2, delete the subdirectory: rm -fr /mnt2/a/b
    back in mnt1, try to create a/b again; since b was deleted, we expect mkdir to succeed, but the result is:
    mkdir: cannot create directory ‘a/b’: File exists
    (this issue was not caused by this patch)

  2. A directory uploaded through the MinIO web console cannot be found at the s3fs mount point.

@garenchan commented on GitHub (Sep 5, 2022):

Yes, my test results were the same as @creeew's.
We may not be able to use the stat cache for directories here. The directories do not have an ETag at this point and may have been deleted.
https://github.com/ggtakec/s3fs-fuse/blob/000140ae4ec52a56e60764e1b24d78f0a1af3008/src/s3fs.cpp#L2779-L2786


@ggtakec commented on GitHub (Sep 6, 2022):

@creeew @garenchan

  • Mount directory will cache the dir stat even the dir was deleted by other mount.

The stat cache of s3fs has a size limit and an expiration time.
I think this problem occurs because the stat is not re-fetched from the S3 server until the cached entry expires.
If changing it is acceptable, you can adjust the expiration with the stat_cache_expire option, etc.
(This should be a separate issue, unrelated to this one and to MinIO.)

  • Directory uploaded by minio web console but the dir cannot find in s3fs mount point.

This is probably solved with the nosup_compat_dir option (or the compat_dir option in the master branch).
Note that these option defaults are reversed between v1.91 and master.
I think you cannot see that directory when compat_dir is disabled.

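The expiry behavior can be pictured with a minimal TTL cache (illustrative only; in s3fs the real knob is the stat_cache_expire option mentioned above, and the class name here is hypothetical): another mount's delete is invisible locally until the cached entry expires and a fresh HEAD is issued.

```cpp
#include <chrono>
#include <map>
#include <optional>
#include <string>

// Minimal model of a stat cache with an expiration time.
class TtlStatCache {
    struct Entry {
        std::string stat;
        std::chrono::steady_clock::time_point added;
    };
    std::map<std::string, Entry> entries_;
    std::chrono::seconds ttl_;
public:
    explicit TtlStatCache(std::chrono::seconds ttl) : ttl_(ttl) {}

    void put(const std::string& path, const std::string& stat) {
        entries_[path] = {stat, std::chrono::steady_clock::now()};
    }

    // Returns the cached stat if present and not expired; otherwise nothing,
    // which forces the caller to issue a fresh HEAD request to the server.
    std::optional<std::string> get(const std::string& path) {
        auto it = entries_.find(path);
        if (it == entries_.end()) return std::nullopt;
        if (std::chrono::steady_clock::now() - it->second.added > ttl_) {
            entries_.erase(it); // expired: drop and re-fetch from S3
            return std::nullopt;
        }
        return it->second.stat;
    }
};
```

Lowering the TTL (as stat_cache_expire does) narrows the window during which a directory deleted through another mount still looks alive, at the cost of more HEAD requests.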

@tmfksoft commented on GitHub (Jul 17, 2023):

Is there any news on this?
I'm running into similar issues where I'm seeing directories that have been deleted along with their contents when versioning is enabled.


@marcinkuk commented on GitHub (Jul 18, 2023):

> Is there any news on this? I'm running into similar issues where I'm seeing directories that have been deleted along with their contents when versioning is enabled.

Me too.
Did you try v1.92?


@adamqqqplay commented on GitHub (Aug 17, 2023):

Maybe using the `-o listobjectsv2` option could solve this versioning-related problem.
