[GH-ISSUE #2395] rm: can't remove 'test': I/O error #1179

Open
opened 2026-03-04 01:51:57 +03:00 by kerem · 5 comments

Originally created by @FlowerBirds on GitHub (Jan 5, 2024).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2395

Additional Information

Version of s3fs being used (s3fs --version)

s3fs 1.19

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

LAPTOP-TC4A0SCV:/data/test4/s3fs-fuse-1.93# apk list | grep fuse-
fuse-exfat-1.3.0-r2 x86_64 {fuse-exfat} (GPL-2.0-or-later)
fuse-openrc-3.11.0-r0 x86_64 {fuse3} (GPL-2.0-only LGPL-2.1-only) [installed]
fuse-2.9.9-r1 x86_64 {fuse} (GPL-2.0-only LGPL-2.1-only) [installed]
ceph-fuse-16.2.10-r1 x86_64 {ceph} (LGPL-2.1-only AND LGPL-2.0-or-later AND GPL-2.0-only AND GPL-3.0-only AND CC-BY-SA-1.0 AND BSL-1.0 AND GPL-2.0-or-later WITH Autoconf-exception-2.0 AND BSD-3-Clause AND MIT AND custom)
fuse-overlayfs-1.8.2-r0 x86_64 {fuse-overlayfs} (GPL-3.0-or-later)
fuse-static-2.9.9-r1 x86_64 {fuse} (GPL-2.0-only LGPL-2.1-only)
fuse-overlayfs-doc-1.8.2-r0 x86_64 {fuse-overlayfs} (GPL-3.0-or-later)
kio-fuse-5.0.1-r2 x86_64 {kio-fuse} (GPL-3.0-only)
unionfs-fuse-doc-2.1-r0 x86_64 {unionfs-fuse} (BSD-3-Clause)
fuse-doc-2.9.9-r1 x86_64 {fuse} (GPL-2.0-only LGPL-2.1-only)
fuse-common-3.11.0-r0 x86_64 {fuse3} (GPL-2.0-only LGPL-2.1-only) [installed]
fuse-exfat-doc-1.3.0-r2 x86_64 {fuse-exfat} (GPL-2.0-or-later)
fuse-dev-2.9.9-r1 x86_64 {fuse} (GPL-2.0-only LGPL-2.1-only) [installed]
rbd-fuse-16.2.10-r1 x86_64 {ceph} (LGPL-2.1-only AND LGPL-2.0-or-later AND GPL-2.0-only AND GPL-3.0-only AND CC-BY-SA-1.0 AND BSL-1.0 AND GPL-2.0-or-later WITH Autoconf-exception-2.0 AND BSD-3-Clause AND MIT AND custom)
fuse-exfat-utils-1.3.0-r2 x86_64 {fuse-exfat} (GPL-2.0-or-later)
py3-confuse-1.7.0-r0 x86_64 {py3-confuse} (MIT)
unionfs-fuse-2.1-r0 x86_64 {unionfs-fuse} (BSD-3-Clause)
confuse-dev-3.3-r0 x86_64 {confuse} (ISC)
gvfs-fuse-1.50.1-r0 x86_64 {gvfs} (LGPL-2.0-or-later)
confuse-doc-3.3-r0 x86_64 {confuse} (ISC)
confuse-3.3-r0 x86_64 {confuse} (ISC)

Kernel information (uname -r)

5.15.133.1-microsoft-standard-WSL2

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.16.0
PRETTY_NAME="Alpine Linux v3.16"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://gitlab.alpinelinux.org/alpine/aports/-/issues"

How to run s3fs, if applicable

[x] command line
[] /etc/fstab

/usr/bin/s3fs -d data /data -f -o url=http://191.168.3.132:9100,passwd_file=/etc/passwd-s3fs,endpoint=us-east-1,allow_other,use_cache=/tmp,max_stat_cache_size=1000,stat_cache_expire=900,retries=5,connect_timeout=10,use_path_request_style

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

2024-01-05T03:15:36.545Z [INF]       curl.cpp:RemakeHandle(2165): Retry request. [type=0][url=http://191.168.3.132:9100/data/test4/s3fs-fuse-1.93/test/][path=/test4/s3fs-fuse-1.93/test/]
2024-01-05T03:15:36.545Z [INF]       curl.cpp:insertV4Headers(2892): computing signature [DELETE] [/test4/s3fs-fuse-1.93/test/] [] []
2024-01-05T03:15:36.545Z [INF]       curl_util.cpp:url_to_host(334): url is http://191.168.3.132:9100
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.: cause(file is corrupted)</Message><Key>test4/s3fs-fuse-1.93/test/</Key><BucketName>data</BucketName><Resource>/data/test4/s3fs-fuse-1.93/test/</Resource><Region>us-east-1</Region><RequestId>17A75515C1848546</RequestId><HostId>dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8</HostId></Error>2024-01-05T03:15:36.549Z [INF]       curl.cpp:RequestPerform(2590): HTTP response code 500 was returned, slowing down
2024-01-05T03:16:24.549Z [INF] curl.cpp:RequestPerform(2719): ### retrying...

Details about issue

Step1 unzip file

tar xzf s3fs-fuse-1.93.tar.gz

Step2 remove s3fs-fuse-1.93 folder

LAPTOP-TC4A0SCV:/data/test4# rm -rf s3fs-fuse-1.93/test/
rm: can't remove 's3fs-fuse-1.93/test': I/O error

Observing a Wireshark packet capture shows that during extraction, the test folder is first created with PUT /data/test4/s3fs-fuse-1.93/test/, and the files under /data/test4/s3fs-fuse-1.93/test/ are then created with further PUT requests. After those files are uploaded, PUT /data/test4/s3fs-fuse-1.93/test/ and PUT /data/test4/s3fs-fuse-1.93/ are each issued once more and succeed. When a folder is created manually with mkdir test, only a single PUT is issued, and that folder can be deleted. A folder created by the extraction, however, cannot be deleted afterwards and produces an I/O error.
![image](https://github.com/s3fs-fuse/s3fs-fuse/assets/9587518/be77a423-06d9-400d-90b7-b7f87071fb43)
![image](https://github.com/s3fs-fuse/s3fs-fuse/assets/9587518/032679d5-6503-4737-acff-42d4be1d9b47)
![image](https://github.com/s3fs-fuse/s3fs-fuse/assets/9587518/56fb6391-9e54-444f-8d77-768ff7dc781b)

Question: Why is a second PUT issued to create the folder during extraction?
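The observed double PUT can be illustrated with a small, self-contained sketch. This is an in-memory stand-in for the bucket, not s3fs source code: s3fs represents a directory as a zero-byte object whose key ends in `/` and stores mode/ownership in object metadata, so when `tar` restores the directory's permissions and timestamps after extracting its contents, s3fs re-uploads that directory object. The file name `sample_test.sh` and the metadata keys below are illustrative assumptions.

```python
# Minimal in-memory model of the request sequence seen in the capture.
# Illustration of the observed behavior only, not s3fs internals.

class FakeBucket:
    def __init__(self):
        self.objects = {}   # key -> metadata dict
        self.log = []       # (method, key) request log

    def put(self, key, meta=None):
        self.log.append(("PUT", key))
        self.objects[key] = dict(meta or {})

    def delete(self, key):
        self.log.append(("DELETE", key))
        del self.objects[key]

bucket = FakeBucket()

# tar xzf: create the directory object, then the files under it ...
bucket.put("test4/s3fs-fuse-1.93/test/", {"mode": "0755"})
bucket.put("test4/s3fs-fuse-1.93/test/sample_test.sh")

# ... then tar restores the directory's mode/times, which s3fs maps to
# a second PUT of the same directory object with updated metadata.
bucket.put("test4/s3fs-fuse-1.93/test/", {"mode": "0775"})

# rm -rf: s3fs deletes the files, then the directory object itself.
bucket.delete("test4/s3fs-fuse-1.93/test/sample_test.sh")
bucket.delete("test4/s3fs-fuse-1.93/test/")

puts = [key for method, key in bucket.log if method == "PUT"]
print(puts.count("test4/s3fs-fuse-1.93/test/"))  # the directory key is PUT twice
```

Against a healthy backend the final DELETE succeeds even after the second PUT; the I/O error in this report comes from the server rejecting the DELETE with `InternalError: file is corrupted`, which points at the backend (see the linked MinIO issue below).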


@FlowerBirds commented on GitHub (Jan 5, 2024):

LAPTOP-TC4A0SCV:/data/test4# ll
total 278
-rw-r-----    1 root     root        284353 Jan  5 09:39 s3fs-fuse-1.93.tar.gz
LAPTOP-TC4A0SCV:/data/test4# mkdir pp
LAPTOP-TC4A0SCV:/data/test4# ll
total 278
drwxr-xr-x    1 root     root             0 Jan  5 14:20 pp
-rw-r-----    1 root     root        284353 Jan  5 09:39 s3fs-fuse-1.93.tar.gz
LAPTOP-TC4A0SCV:/data/test4# chmod 777 pp
LAPTOP-TC4A0SCV:/data/test4# ll
total 278
drwxrwxrwx    1 root     root             0 Jan  5 14:20 pp
-rw-r-----    1 root     root        284353 Jan  5 09:39 s3fs-fuse-1.93.tar.gz
LAPTOP-TC4A0SCV:/data/test4# ll pp
total 0
LAPTOP-TC4A0SCV:/data/test4# rm -rf pp
rm: can't remove 'pp': I/O error
LAPTOP-TC4A0SCV:/data/test4#

I found that chmod sends another PUT that re-creates the folder object, after which the delete fails.


@FlowerBirds commented on GitHub (Jan 5, 2024):

https://github.com/minio/minio/issues/18739


@ggtakec commented on GitHub (Feb 12, 2024):

@FlowerBirds
In our testing, both creating and deleting directories work normally.
We need a little more information to solve your problem.

If you are able, try the following:

  • Try the code from the master branch of s3fs (if building it is difficult, try the following with the v1.91 you are currently using).
  • Mount with the dbglevel and curldbg options and collect the logs; they can help pinpoint the problem.

Thanks in advance for your assistance.
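For reference, the suggested options can be added to the reporter's own mount command. `dbglevel=dbg` raises the log verbosity and `curldbg` dumps libcurl traffic, as documented in the s3fs man page; the bucket, mountpoint, and URL below are the values from this report, with the unrelated options abbreviated.

```shell
# Same mount as in the report, with verbose s3fs and libcurl logging enabled.
/usr/bin/s3fs data /data -f \
  -o url=http://191.168.3.132:9100,use_path_request_style \
  -o passwd_file=/etc/passwd-s3fs \
  -o dbglevel=dbg,curldbg
```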


@monoflash commented on GitHub (Oct 24, 2024):

I have the same problem.
Directories that were copied in cannot be deleted, but directories I created myself can be.
I suspect some attribute handling is not working correctly.

2024-10-24T01:10:51.036Z [INF] s3fs.cpp:s3fs_rmdir(1051): [path=/MacOS/8.3.20.1838]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_parent_object_access(627): [path=/MacOS/8.3.20.1838]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(519): [path=/MacOS]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(524): [pid=19097,uid=1000,gid=100]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:get_object_attribute(350): [path=/MacOS]
2024-10-24T01:10:51.036Z [DBG] cache.cpp:GetStat(266): stat cache hit [path=/MacOS/][time=1030051.633836594][hit count=22]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(519): [path=/]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(524): [pid=19097,uid=1000,gid=100]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:get_object_attribute(350): [path=/]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(519): [path=/MacOS]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:check_object_access(524): [pid=19097,uid=1000,gid=100]
2024-10-24T01:10:51.036Z [DBG] s3fs.cpp:get_object_attribute(350): [path=/MacOS]
2024-10-24T01:10:51.036Z [DBG] cache.cpp:GetStat(266): stat cache hit [path=/MacOS/][time=1030051.633836594][hit count=23]
2024-10-24T01:10:51.036Z [INF]   s3fs.cpp:list_bucket(2707): [path=/MacOS/8.3.20.1838]
2024-10-24T01:10:51.036Z [INF]       curl.cpp:ListBucketRequest(3522): [tpath=/MacOS/8.3.20.1838]
2024-10-24T01:10:51.036Z [DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 30
2024-10-24T01:10:51.036Z [INF]       curl_util.cpp:prepare_url(255): URL is https://s3.myserver.tld/s3fs-test?delimiter=/&max-keys=2&prefix=MacOS/8.3.20.1838/
2024-10-24T01:10:51.036Z [INF]       curl_util.cpp:prepare_url(288): URL changed is https://s3.myserver.tld/s3fs-test/?delimiter=/&max-keys=2&prefix=MacOS/8.3.20.1838/
2024-10-24T01:10:51.036Z [DBG] curl.cpp:RequestPerform(2289): connecting to URL https://s3.myserver.tld/s3fs-test/?delimiter=/&max-keys=2&prefix=MacOS/8.3.20.1838/
2024-10-24T01:10:51.036Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [GET] [/] [delimiter=/&max-keys=2&prefix=MacOS/8.3.20.1838/] []
2024-10-24T01:10:51.036Z [INF]       curl_util.cpp:url_to_host(332): url is https://s3.myserver.tld
2024-10-24T01:10:51.039Z [INF]       curl.cpp:RequestPerform(2324): HTTP response code 200
2024-10-24T01:10:51.039Z [DBG] s3fs_xml.cpp:append_objects_from_xml_ex(410): name is file or subdir in dir. but continue.
2024-10-24T01:10:51.039Z [DBG] s3fs_xml.cpp:append_objects_from_xml_ex(350): contents_xp->nodesetval is empty.
2024-10-24T01:10:51.039Z [DBG] curl_handlerpool.cpp:ReturnHandler(103): Return handler to pool
2024-10-24T01:10:51.039Z [INF]       curl.cpp:DeleteRequest(2775): [tpath=/MacOS/8.3.20.1838/]
2024-10-24T01:10:51.039Z [DBG] curl_handlerpool.cpp:GetHandler(81): Get handler from pool: rest = 30
2024-10-24T01:10:51.039Z [INF]       curl_util.cpp:prepare_url(255): URL is https://s3.myserver.tld/s3fs-test/MacOS/8.3.20.1838/
2024-10-24T01:10:51.039Z [INF]       curl_util.cpp:prepare_url(288): URL changed is https://s3.myserver.tld/s3fs-test/MacOS/8.3.20.1838/
2024-10-24T01:10:51.039Z [DBG] curl.cpp:RequestPerform(2289): connecting to URL https://s3.myserver.tld/s3fs-test/MacOS/8.3.20.1838/
2024-10-24T01:10:51.039Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [DELETE] [/MacOS/8.3.20.1838/] [] []
2024-10-24T01:10:51.039Z [INF]       curl_util.cpp:url_to_host(332): url is https://s3.myserver.tld
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.: cause(file is corrupted)</Message><Key>MacOS/8.3.20.1838/</Key><BucketName>s3fs-test</BucketName><Resource>/s3fs-test/MacOS/8.3.20.1838/</Resource><Region>ru-home-1</Region><RequestId>18013E515BC74E65</RequestId><HostId>cb1f953109a579f2c9743a90322365321603fb8c68df05b9684bfbb605862992</HostId></Error>2024-10-24T01:10:51.043Z [INF]       curl.cpp:RequestPerform(2394): HTTP response code 500 was returned, slowing down
2024-10-24T01:10:51.043Z [DBG] curl.cpp:RequestPerform(2395): Body Text:
2024-10-24T01:10:54.043Z [INF] curl.cpp:RequestPerform(2523): ### retrying...
2024-10-24T01:10:54.043Z [INF]       curl.cpp:RemakeHandle(1969): Retry request. [type=0][url=https://s3.myserver.tld/s3fs-test/MacOS/8.3.20.1838/][path=/MacOS/8.3.20.1838/]
2024-10-24T01:10:54.043Z [INF]       curl.cpp:insertV4Headers(2696): computing signature [DELETE] [/MacOS/8.3.20.1838/] [] []
2024-10-24T01:10:54.043Z [INF]       curl_util.cpp:url_to_host(332): url is https://s3.myserver.tld
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InternalError</Code><Message>We encountered an internal error, please try again.: cause(file is corrupted)</Message><Key>MacOS/8.3.20.1838/</Key><BucketName>s3fs-test</BucketName><Resource>/s3fs-test/MacOS/8.3.20.1838/</Resource><Region>ru-home-1</Region><RequestId>18013E520F683F84</RequestId><HostId>cb1f953109a579f2c9743a90322365321603fb8c68df05b9684bfbb605862992</HostId></Error>2024-10-24T01:10:54.057Z [INF]       curl.cpp:RequestPerform(2394): HTTP response code 500 was returned, slowing down
2024-10-24T01:10:54.057Z [DBG] curl.cpp:RequestPerform(2395): Body Text:
2024-10-24T01:11:00.057Z [INF] curl.cpp:RequestPerform(2523): ### retrying...

@monoflash commented on GitHub (Oct 24, 2024):

I have now found the reason. If you create a directory and then change its attributes, you cannot delete it. If you create a directory and change nothing, it can be deleted.

s3fs is mounted as a regular user, not as root.
