mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #1328] I/O Error when trying to move a large file #711
Originally created by @avaiss on GitHub (Jul 8, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1328
Whenever I am trying to move a very large file (or a directory containing one), the operation fails with an I/O error.
Note that this does not happen for all files larger than 1 GB (my configured multipart size), only for very big ones (19 GB in my tests).
Also, I am not using Amazon S3, but S3-compatible object storage provided by Scaleway.
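For context, the configured multipart size determines how many parts a large upload is split into. A quick sketch of the arithmetic, using the sizes from this report (the variable names are illustrative, not s3fs options):

```shell
# s3fs takes multipart_size in MB; 1024 MB = 1 GiB.
file_mb=$((19 * 1024))   # the 19 GB test file, in MB
part_mb=1024             # the reporter's 1 GB multipart size
# Ceiling division: number of parts the upload splits into.
parts=$(( (file_mb + part_mb - 1) / part_mb ))
echo "$parts"            # → 19
```

Each of those parts may be transferred concurrently, which is where the parallel_count option discussed later in this thread comes in.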
Additional Information
Version of s3fs being used (s3fs --version)
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
Kernel information (uname -r)
GNU/Linux Distribution, if applicable (cat /etc/os-release)
s3fs command line used, if applicable
N/A
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
s3fs.txt.gz
Details about issue
Reproduce steps.
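The reproduce steps did not survive the migration. A minimal sketch of the scenario described above (bucket name, endpoint URL, mount point, and credentials path are placeholders, not taken from the report) might look like:

```shell
# Hypothetical mount: "mybucket" on a Scaleway S3-compatible endpoint.
s3fs mybucket /mnt/s3 \
    -o passwd_file="${HOME}/.passwd-s3fs" \
    -o url=https://s3.fr-par.scw.cloud \
    -o multipart_size=1024        # 1024 MB (1 GiB) parts, as in the report

# Write a ~19 GiB file into the mount, then try to rename it in place.
dd if=/dev/zero of=/mnt/s3/big.bin bs=1M count=19456
mv /mnt/s3/big.bin /mnt/s3/big-renamed.bin    # reportedly fails with an I/O error
```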
@gaul commented on GitHub (Jul 8, 2020):
Could you try testing with the latest version 1.86 or master? If symptoms persist, please run s3fs with debug logging via -f -d -o curldbg so we can investigate further.
@avaiss commented on GitHub (Jul 8, 2020):
Same issue with master (1f796d4).
s3fs.master.txt.gz
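For reference, the debug suggestion above corresponds to an invocation like the following (bucket name, mount point, and log file name are placeholders; -f keeps s3fs in the foreground, -d raises the log level, and curldbg additionally dumps libcurl's HTTP traffic):

```shell
s3fs mybucket /mnt/s3 -f -d -o curldbg 2>&1 | tee s3fs-debug.log
```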
@gaul commented on GitHub (Jul 8, 2020):
Relevant error from the logs:
It seems like Scaleway has some incompatibility with the S3 protocol. You can work around this with -o nocopyapi until we can debug this further.
@avaiss commented on GitHub (Jul 8, 2020):
Thanks. It would seem that -o nocopyapi helps for renaming directories containing large files. Well... at least the files are moved somehow, but the destination directory does not look like the original one ;)
Doing mv run-1 run-x with run-1 leads to:
It does not work for the first problem (moving individual very large files). As the error message you found made me think of parallelism issues, I mounted the directory with -o parallel_count=1, and moving individual large files works with that option (but is really slow). Moving directories does not.
So I guess -o nocopyapi -o parallel_count=1 is the way to go (but nocopyapi means 40 GB of traffic for moving a 20 GB file, which is not ideal :/)
@ggtakec commented on GitHub (Jul 13, 2020):
@avaiss If you specify nocopyapi, the whole object (file) will first be downloaded to the local host and then uploaded again. In other words, the amount of traffic will increase, as you mentioned.
I think this is difficult to avoid and cannot be mitigated without enabling caching.
Having to specify parallel_count=1 is unfortunate, but the problem may no longer recur thanks to other corrections (e.g. #1313, #1319). Please test again with the latest master code, if you can.
Thanks in advance for your kindness.
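To make the traffic cost discussed above concrete: with nocopyapi, a rename downloads and then re-uploads the whole object, so client traffic is roughly double the object size (illustrative arithmetic, not s3fs code):

```shell
object_gb=20
traffic_gb=$((2 * object_gb))   # full download + full re-upload
echo "$traffic_gb"              # → 40 GB, vs. a near-zero-payload server-side COPY
```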
@avaiss commented on GitHub (Oct 3, 2020):
I did finally have a bit of time to retest, and it would seem that indeed, with the current master (4623472), moving individual large files on Scaleway does not require parallel_count=1.
I still have the failure when moving a directory containing a large file, though. Here is an extract from the debug output that I found interesting.
I was doing mv ice fire, ice being a directory containing one large ice.full.... file.
@gaul commented on GitHub (Feb 8, 2021):
@avaiss #1553 may address these symptoms. Can you retest with master?
@gaul commented on GitHub (Feb 11, 2021):
Potential duplicate of #1547.
@gaul commented on GitHub (Feb 20, 2021):
Please reopen if you can reproduce these symptoms with the latest master.