mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 13:26:00 +03:00
[GH-ISSUE #1594] zip-ing a local folder to object storage hangs when the local cache ensure_diskfree limit kicks in. #837
Originally created by @erki-karblane on GitHub (Mar 3, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1594
Additional Information
The following information is very important in helping us to help you. Omitting any of the details below may delay your support request or cause it to receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented to GNU/Linux distributions, so you may need to use different commands if you use s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
v 1.89
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Name : fuse
Version : 2.9.2
Release : 11.el7
Kernel information (uname -r)
3.10.0-1160.15.2.el7.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
(not provided)
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs#application-backup /mnt/application-backup fuse _netdev,allow_other,max_dirty_data=-1,ensure_diskfree=1000,url=https://s3.private.eu-de.cloud-object-storage.appdomain.cloud 0 0
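For reference, the `/etc/fstab` entry above corresponds to roughly this manual mount command (a sketch: the bucket name, options, and URL are copied from the entry, and credentials are assumed to be configured already):

```shell
# Manual equivalent of the fstab entry (sketch; run with sufficient privileges)
s3fs application-backup /mnt/application-backup \
  -o allow_other \
  -o max_dirty_data=-1 \
  -o ensure_diskfree=1000 \
  -o url=https://s3.private.eu-de.cloud-object-storage.appdomain.cloud
```

With `ensure_diskfree=1000`, s3fs tries to keep at least 1000 MB free on the disk holding its local temporary files.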
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
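To capture those messages, a foreground run with debugging enabled looks roughly like this (a sketch; the bucket and mount point are taken from the fstab entry above):

```shell
# Run s3fs in the foreground (-f) with verbose s3fs and libcurl debug output
s3fs application-backup /mnt/application-backup \
  -o dbglevel=info -o curldbg -f
```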
Details about issue
As I am running low on disk space, I need to zip the backups directly to object storage (not to the local drive, which is almost full). How can I use the s3fs mount as the destination so that the temporary file being created does not fill my local drive and stop the zipping process?
@erki-karblane commented on GitHub (Mar 4, 2021):
Eventually the zipping process ends with:
zip I/O error: Input/output error
zip error: Output file write failure (write error on zip file)
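This failure mode is consistent with the `ensure_diskfree` behavior: s3fs stages writes in local temporary files, and once free space on that filesystem drops below the configured threshold, further writes fail with EIO, which `zip` reports as "Input/output error". A minimal sketch to compare current free space against the threshold from the fstab entry above (it assumes s3fs's temporary files live under `/tmp`, the usual default; adjust the path if `use_cache` or `tmpdir` is set):

```shell
# Compare free space in the temp directory with the ensure_diskfree value (in MB)
threshold_mb=1000
free_mb=$(df -Pm /tmp | awk 'NR==2 {print $4}')
echo "free: ${free_mb} MB, ensure_diskfree threshold: ${threshold_mb} MB"
if [ "$free_mb" -lt "$threshold_mb" ]; then
  echo "below threshold: writes through s3fs would start failing with EIO"
fi
```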