mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #2670] Issues with IONOS Cloud S3 "Input/Output error" #1274
Originally created by @agowa on GitHub (May 12, 2025).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2670
Additional Information

Version of s3fs being used (`s3fs --version`):

```
Amazon Simple Storage Service File System V1.93 (commit:unknown) with OpenSSL
Copyright (C) 2010 Randy Rizun rrizun@gmail.com
License GPL2: GNU GPL version 2 https://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
```

Version of fuse being used (`pkg-config --modversion fuse`, `rpm -qi fuse`, or `dpkg -s fuse`): 2.9.9

Provider (AWS, OVH, Hetzner, iDrive E2, ...): IONOS Cloud

Kernel information (`uname -r`): 6.14.5-arch1-1

GNU/Linux Distribution, if applicable (`cat /etc/os-release`):

```
NAME="Arch Linux"
PRETTY_NAME="Arch Linux"
ID=arch
BUILD_ID=rolling
ANSI_COLOR="38;2;23;147;209"
HOME_URL="https://archlinux.org/"
DOCUMENTATION_URL="https://wiki.archlinux.org/"
SUPPORT_URL="https://bbs.archlinux.org/"
BUG_REPORT_URL="https://gitlab.archlinux.org/groups/archlinux/-/issues"
PRIVACY_POLICY_URL="https://terms.archlinux.org/docs/privacy-policy/"
LOGO=archlinux-logo
```

How to run s3fs, if applicable:

I'm using a systemd mount unit. However, I'm also attaching the resulting command line just in case.

- [x] command line:

```
s3fs 88dbcaef-04bc-4e15-a177-c760a46ca70b /mnt/s3fs -o rw,dbglevel=info,curldbg,logfile=/var/log/s3fs.log,nosuid,nodev,noexec,noatime,allow_other,uid=1000,gid=984,default_permissions,passwd_file=/home/user/.passwd-s3fs,url=https://s3.eu-central-3.ionoscloud.com,tmpdir=/mnt/s3fs-tmp,use_cache=/mnt/s3fs-cache,max_stat_cache_size=2500,parallel_count=48,ensure_diskfree=10240,host=https://s3.eu-central-3.ionoscloud.com,endpoint=eu-central-3,enable_content_md5
```

- [ ] /etc/fstab
- [x] systemd
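The mount unit itself was not attached; purely as a sketch, a unit reconstructed from the command line above might look like the following. The unit path, `Description`, dependency lines, and the abbreviated option list are assumptions, not the reporter's actual configuration:

```ini
# /etc/systemd/system/mnt-s3fs.mount (hypothetical; the unit name must
# match the mount point, i.e. /mnt/s3fs -> mnt-s3fs.mount)
[Unit]
Description=s3fs mount for bucket 88dbcaef-04bc-4e15-a177-c760a46ca70b
After=network-online.target
Wants=network-online.target

[Mount]
What=88dbcaef-04bc-4e15-a177-c760a46ca70b
Where=/mnt/s3fs
Type=fuse.s3fs
# Abbreviated; the full option string from the command line above goes here.
Options=rw,allow_other,uid=1000,gid=984,passwd_file=/home/user/.passwd-s3fs,url=https://s3.eu-central-3.ionoscloud.com,use_cache=/mnt/s3fs-cache,parallel_count=48

[Install]
WantedBy=multi-user.target
```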
s3fs syslog messages (`grep s3fs /var/log/syslog`, `journalctl | grep s3fs`, or s3fs outputs):

Details about issue

The last log entry written is the `cache.cpp:AddNoObjectCache(465): add no object cache entry[path=/test12345]` line. When it works, it creates more log output, but in this error state this is the last line written. (I.e. there is NO `s3fs.cpp:s3fs_create(1169): [path=/test12345][mode=100644][flags=0x8841]` line or anything following written into the logs; it fails somewhere in between these two lines.)

@CarstenGrohmann commented on GitHub (Mar 1, 2026):
The error is likely caused by `use_cache` exhausting all available file descriptors when reading and writing a large number of files simultaneously.
The log entry points to this:
Error 24 is EMFILE, "Too many open files". With `use_cache` and `parallel_count=48`, many concurrent file operations can exceed the default limit of 1024.

Workaround: raise `LimitNOFILE` in the systemd mount unit.
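One way to apply that workaround is a systemd drop-in; this is a sketch, and the drop-in path (derived from the assumed `mnt-s3fs.mount` unit name) and the chosen limit value are illustrative, not values from the original thread:

```ini
# /etc/systemd/system/mnt-s3fs.mount.d/limits.conf (hypothetical drop-in path)
[Mount]
# Raise the per-process open file descriptor limit above the 1024 default.
LimitNOFILE=65536
```

After adding the drop-in, run `systemctl daemon-reload` and remount for the new limit to take effect.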
IMHO: This is normal Unix behaviour, not a bug in s3fs.
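To see whether a process is approaching its descriptor budget, one can compare its open descriptor count against its soft limit via `/proc`. The snippet below demonstrates this against the current shell; for a mounted s3fs you would substitute its PID (e.g. from `pgrep -x s3fs`):

```shell
#!/bin/sh
# Soft limit on open file descriptors for this process.
soft_limit=$(ulimit -Sn)
echo "soft limit: $soft_limit"

# Number of descriptors this process currently holds, read from /proc.
# Replace $$ with the s3fs PID to inspect a running mount instead.
fd_count=$(ls "/proc/$$/fd" | wc -l | tr -d ' ')
echo "open fds: $fd_count"
```

If `open fds` climbs toward `soft limit` under load, EMFILE errors like the one above are the expected outcome.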