mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #2246] Input/Output Error when using Dell ECS #1135
Originally created by @fabio79acn on GitHub (Aug 3, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2246
Additional Information
Version of s3fs being used (s3fs --version):
Amazon Simple Storage Service File System V1.93 (commit:unknown) with OpenSSL
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse):
Name : fuse
Version : 2.9.7
Release : 15.el8
Architecture: x86_64
Install Date: Fri 24 Jun 2022 10:28:33 PM CEST
Group : Unspecified
Size : 208300
License : GPL+
Signature : RSA/SHA256, Fri 25 Feb 2022 07:38:22 PM CET, Key ID 199e2f91fd431d51
Source RPM : fuse-2.9.7-15.el8.src.rpm
Build Date : Thu 24 Feb 2022 06:57:08 PM CET
Build Host : x86-vm-55.build.eng.bos.redhat.com
Relocations : (not relocatable)
Packager : Red Hat, Inc. http://bugzilla.redhat.com/bugzilla
Vendor : Red Hat, Inc.
URL : http://fuse.sf.net
Summary : File System in Userspace (FUSE) v2 utilities
Description :
With FUSE it is possible to implement a fully functional filesystem in a
userspace program. This package contains the FUSE v2 userspace tools to
mount a FUSE filesystem.
Kernel information (uname -r):
4.18.0-372.9.1.el8.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release):
NAME="Red Hat Enterprise Linux"
VERSION="8.6 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.6"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.6 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8::baseos"
HOME_URL="https://www.redhat.com/"
DOCUMENTATION_URL="https://access.redhat.com/documentation/red_hat_enterprise_linux/8/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.6
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.6"
How to run s3fs, if applicable
[] command line
s3fs -f
openshift_s3_backup
/mnt/s3fs/ecs-rm1.mysite.it/openshift_s3_backup
-o passwd_file=~/.passwd-s3fs
-o use_path_request_style
-o dbglevel=info
-o no_check_certificate
-o curldbg=body
-o proxy=http://proxy.rgs.mysite.somewhere.it:80
-o ssl_verify_hostname=0
-o url=https://ecs-rm1.sogei.it:9021/
[] /etc/fstab
fstab not used
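For reference, an equivalent /etc/fstab entry mirroring the command-line options above might look like the following (hypothetical; paths and hostnames are the reporter's, and the password file must be given as an absolute path since ~ is not expanded in fstab):

```
openshift_s3_backup /mnt/s3fs/ecs-rm1.mysite.it/openshift_s3_backup fuse.s3fs _netdev,passwd_file=/root/.passwd-s3fs,use_path_request_style,no_check_certificate,ssl_verify_hostname=0,url=https://ecs-rm1.sogei.it:9021/ 0 0
```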
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs):
Details about issue
Even though the s3fs command seems to work, and when I run df -h I can clearly see my S3 bucket mounted on the local directory /mnt/s3fs/ecs-rm1.mysite.it/openshift_s3_backup, every time I try to access that directory I get an Input/Output error.
I'm completely stuck right now :-(
Could you please help me?
@auradk commented on GitHub (Aug 22, 2023):
I had exactly the same issue, same version, same OS version, but with Hitachi Content Platform (HCP).
I managed to downgrade to 1.91-4.el8, and then it worked again. I have locked the version for now.
My downgrade procedure + version lock:
@fabio79acn commented on GitHub (Aug 24, 2023):
Hi @auradk, many thanks for your hints, but I could not find the s3fs-fuse RPM you mentioned in the EPEL repo, and I don't want to install RPMs published on other repos; from where exactly did you download that RPM?
Hi @rrizun, since it seems I'm not the only one having this issue, what is your advice here? Thanks.
@auradk commented on GitHub (Aug 25, 2023):
Ah, my bad, I did write it was the same OS; what I meant was Linux 8. I am actually on Oracle Linux (kernel 5.15.0-103.114.4.el8uek.x86_64), so Linux 8.8 to be exact.
So I get the s3fs-fuse package directly from Oracle:
Oracle Linux 8 (x86_64) EPEL
Hope this helps; sorry for the confusion.
But the main idea was just to keep downgrading until it worked :) (since it worked earlier).
@NicolaeMarius commented on GitHub (Aug 28, 2023):
Hi @auradk,
I'm a colleague of @fabio79acn; I just wanted to thank you for being a lifesaver!
Version 1.91-4 seems to be working properly on RHEL 8 without giving us the I/O error.
Thank you again!
@fabio79acn commented on GitHub (Oct 18, 2023):
Any news on when this will also work in the latest s3fs release?
@gaul commented on GitHub (Oct 18, 2023):
If someone knows how to use git bisect, they could help by identifying which commit introduced the regression. But unfortunately I don't have access to Dell ECS, so I cannot debug this on my own.
@davidklika commented on GitHub (Oct 16, 2024):
Hi @gaul
My issue matches the problems described in issues #2246, #2423 and #2304. Mounting is possible, but accessing the mountpoint itself raises an I/O error (the curl log does not contain any errors). Objects under the mountpoint can be accessed after mount, until the mountpoint itself is accessed.
I compiled many s3fs versions, and it seems that the problem was introduced in pull request #1964. The current master is still affected.
If you propose a fix or have instructions to debug it, I can test it on an HCP system where the problem manifests.
Thank you
@davidklika commented on GitHub (Oct 17, 2024):
Here is the curl+debug log produced when the mountpoint is touched for the first time after mount. You can see that HCP does not produce the x-amz-meta-* headers that are the subject of PR #1964.
2024-10-17T09:44:24.014Z [INF] s3fs.cpp:s3fs_getattr(866): [path=/]
2024-10-17T09:44:24.014Z [DBG] s3fs.cpp:check_parent_object_access(718): [path=/]
2024-10-17T09:44:24.014Z [DBG] s3fs.cpp:check_object_access(610): [path=/]
2024-10-17T09:44:24.014Z [DBG] s3fs.cpp:check_object_access(615): [pid=164585,uid=0,gid=0]
2024-10-17T09:44:24.014Z [DBG] s3fs.cpp:get_object_attribute(376): [path=/]
2024-10-17T09:44:24.014Z [INF] curl.cpp:HeadRequest(3177): [tpath=//]
2024-10-17T09:44:24.014Z [INF] curl.cpp:PreHeadRequest(3137): [tpath=//][bpath=][save=][sseckeypos=18446744073709551615]
2024-10-17T09:44:24.014Z [INF] curl_util.cpp:prepare_url(257): URL is https://temp.a359720.katastr.int/lms//
2024-10-17T09:44:24.014Z [INF] curl_util.cpp:prepare_url(290): URL changed is https://lms.temp.a359720.katastr.int//
2024-10-17T09:44:24.014Z [DBG] curl_handlerpool.cpp:GetHandler(79): Get handler from pool: rest = 31
2024-10-17T09:44:24.014Z [DBG] curl.cpp:ResetHandle(1895): 'no_check_certificate' option in effect.
2024-10-17T09:44:24.014Z [DBG] curl.cpp:ResetHandle(1896): The server certificate won't be checked against the available certificate authorities.
2024-10-17T09:44:24.014Z [DBG] curl.cpp:RequestPerform(2366): connecting to URL https://lms.temp.a359720.katastr.int//
2024-10-17T09:44:24.014Z [INF] curl.cpp:insertV4Headers(2773): computing signature [HEAD] [//] [] []
2024-10-17T09:44:24.014Z [INF] curl_util.cpp:url_to_host(334): url is https://temp.a359720.katastr.int
2024-10-17T09:44:24.014Z [CURL DBG] * Found bundle for host lms.temp.a359720.katastr.int: 0x7fa3dc011fe0 [serially]
2024-10-17T09:44:24.014Z [CURL DBG] * Can not multiplex, even if we wanted to!
2024-10-17T09:44:24.015Z [CURL DBG] * Re-using existing connection! (#0) with host lms.temp.a359720.katastr.int
2024-10-17T09:44:24.015Z [CURL DBG] * Connected to lms.temp.a359720.katastr.int (10.51.16.72) port 443 (#0)
2024-10-17T09:44:24.016Z [CURL DBG] * TLSv1.2 (OUT), TLS header, Unknown (23):
2024-10-17T09:44:24.016Z [CURL DBG] > HEAD // HTTP/1.1
2024-10-17T09:44:24.016Z [CURL DBG] > Host: lms.temp.a359720.katastr.int
2024-10-17T09:44:24.016Z [CURL DBG] > User-Agent: s3fs/1.91 (commit hash 6e89e69; OpenSSL)
2024-10-17T09:44:24.016Z [CURL DBG] > Accept: */*
2024-10-17T09:44:24.016Z [CURL DBG] > Authorization: AWS4-HMAC-SHA256 Credential=cGNlLXVzZXI=/20241017/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=697d49b8501c8013510a7e10efa4103751bee6866e8d7d21d7d5fa74be1b890d
2024-10-17T09:44:24.016Z [CURL DBG] > x-amz-content-sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
2024-10-17T09:44:24.016Z [CURL DBG] > x-amz-date: 20241017T094424Z
2024-10-17T09:44:24.016Z [CURL DBG] >
2024-10-17T09:44:24.028Z [CURL DBG] * TLSv1.2 (IN), TLS header, Unknown (23):
2024-10-17T09:44:24.028Z [CURL DBG] * Mark bundle as not supporting multiuse
2024-10-17T09:44:24.028Z [CURL DBG] < HTTP/1.1 200 OK
2024-10-17T09:44:24.028Z [CURL DBG] < Date: Thu, 17 Oct 2024 09:44:24 GMT
2024-10-17T09:44:24.028Z [CURL DBG] < Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-eval' 'unsafe-inline'; connect-src 'self'; img-src 'self'; style-src 'self' 'unsafe-inline'; object-src 'self'; frame-ancestors 'self';
2024-10-17T09:44:24.028Z [CURL DBG] < Cache-Control: no-cache,no-store,must-revalidate
2024-10-17T09:44:24.028Z [CURL DBG] < X-Download-Options: noopen
2024-10-17T09:44:24.028Z [CURL DBG] < Strict-Transport-Security: max-age=31536000; includeSubDomains
2024-10-17T09:44:24.028Z [CURL DBG] < X-Frame-Options: SAMEORIGIN
2024-10-17T09:44:24.028Z [CURL DBG] < Pragma: no-cache
2024-10-17T09:44:24.028Z [CURL DBG] < Vary: Origin, Access-Control-Request-Headers, Access-Control-Request-Method
2024-10-17T09:44:24.028Z [CURL DBG] < X-XSS-Protection: 1; mode=block
2024-10-17T09:44:24.028Z [CURL DBG] < Expires: Thu, 01 Jan 1970 00:00:00 GMT
2024-10-17T09:44:24.028Z [CURL DBG] < X-DNS-Prefetch-Control: off
2024-10-17T09:44:24.028Z [CURL DBG] < X-Content-Type-Options: nosniff
2024-10-17T09:44:24.028Z [CURL DBG] < Content-Length: 0
2024-10-17T09:44:24.028Z [CURL DBG] <
2024-10-17T09:44:24.028Z [CURL DBG] * Connection #0 to host lms.temp.a359720.katastr.int left intact
2024-10-17T09:44:24.028Z [INF] curl.cpp:RequestPerform(2401): HTTP response code 200
2024-10-17T09:44:24.029Z [DBG] curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
2024-10-17T09:44:24.029Z [INF] cache.cpp:AddStat(342): add stat cache entry[path=/]
2024-10-17T09:44:24.029Z [DBG] cache.cpp:GetStat(264): stat cache hit [path=/][time=1517995.000561089][hit count=0]
2024-10-17T09:44:24.029Z [DBG] fdcache.cpp:OpenExistFdEntity(645): [path=/][flags=0x0]
2024-10-17T09:44:24.029Z [DBG] fdcache.cpp:Open(537): [path=/][size=-1][ts_mctime=0.1073741822][flags=0x0][force_tmpfile=no][create=no][ignore_modify=no]
2024-10-17T09:44:24.029Z [DBG] s3fs.cpp:s3fs_getattr(889): [path=/] uid=0, gid=0, mode=100750
@devhaozi commented on GitHub (Mar 29, 2025):
I encountered the same problem on Tencent Cloud COS, using s3fs 1.95.
@devhaozi commented on GitHub (Mar 29, 2025):
Downgrading s3fs to 1.91 solved it, I confirmed that it was caused by #1964.
@ggtakec commented on GitHub (Mar 31, 2025):
I don't have a Dell ECS or HCP environment, so this is a somewhat hypothetical result.
First, I think the response to a HEAD request for the mount point in these S3-compatible APIs might be different from AWS's.
This is shown in the following part of the first log for this issue:
Even though the HEAD response for /openshift_s3_backup// is 200, no x-amz headers are received. (In the case of AWS, the /openshift_s3_backup// object does not exist, so a 404 error occurs.) I think this is probably because Dell ECS interprets /openshift_s3_backup// as /openshift_s3_backup.
Because s3fs does not expect this 200 response, it seems to get confused when retrieving the attributes of the mount point.
In addition, an EPERM error may have occurred in the multithreaded request processing as of v1.95. But this multithreaded processing has been significantly changed in the current master branch, so when using the master branch, this EPERM error should not occur.
I made a patch for this problem against the master branch.
Could anyone apply this patch to the current master branch of s3fs and test it on Dell ECS or HCP?
Thanks in advance for your assistance.
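To make the hypothesis above concrete, here is a small illustrative sketch (not s3fs source code; the function name and case labels are mine) of the decision being described: AWS answers a HEAD on "<mountpoint>//" with 404, while Dell ECS / HCP / Tencent COS answer 200 with no x-amz-meta-* headers, and the patched behavior treats that bare 200 like the no-object case instead of failing with an I/O error.

```shell
#!/bin/sh
# Hypothetical sketch of the HEAD-response handling described above
# (not s3fs source code; names and messages are illustrative).
classify_head() {
    status=$1          # HTTP status of HEAD on "<mountpoint>//"
    meta_count=$2      # number of x-amz-meta-* headers in the response
    if [ "$status" -eq 404 ]; then
        echo "404: no such object, fall back to default directory attributes (AWS)"
    elif [ "$status" -eq 200 ] && [ "$meta_count" -eq 0 ]; then
        echo "200 without metadata: treat like the 404 case (ECS/HCP/COS, patched)"
    else
        echo "200 with metadata: parse x-amz-meta-* attributes as usual"
    fi
}
classify_head 404 0   # AWS behavior for a nonexistent "//" object
classify_head 200 0   # Dell ECS / HCP / COS: the case that caused the EIO
classify_head 200 3   # ordinary object carrying metadata headers
```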
@devhaozi commented on GitHub (Mar 31, 2025):
I confirm that this patch can solve the problem on Tencent Cloud COS.
@ggtakec commented on GitHub (Mar 31, 2025):
@devhaozi Thanks for checking.
I'll try to submit this fix as an official PR.
Could anyone else check this?
@davidklika commented on GitHub (Apr 3, 2025):
I tried to build the patch on AlmaLinux 9 according to the instructions (https://github.com/s3fs-fuse/s3fs-fuse/wiki/Installation-Notes#fedora--centos--rhel), but I did not succeed; configure failed with this error:
...
checking for fuse_library_checking... no
configure: error: Package requirements (fuse >= 2.8.4) were not met:
Package 'fuse', required by 'virtual:world', not found
...
Can anyone give me a hint on this?
@devhaozi commented on GitHub (Apr 3, 2025):
@davidklika commented on GitHub (Apr 3, 2025):
@devhaozi Thank you for the hint, I missed the crb repo. Now I can confirm that the patched version works well with the Hitachi Content Platform as well. Great!
@ggtakec commented on GitHub (Apr 3, 2025):
@davidklika Thanks for checking this out.
PR #2653 has been merged and the patch is now in the master branch.