mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 21:35:58 +03:00
[GH-ISSUE #656] Impossible to read large files from a mount in a raspberry pi 2 #374
Originally created by @ghost on GitHub (Oct 9, 2017).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/656
Hello!
When I mv/cp a large file from the mount to the local disk, I get an I/O error:
The same happens with any other software (e.g. video players cannot read large videos):
I get these errors in the journal:
Tested with different files.
It works with relatively small files, so I don’t think it’s permission-related.
Also, I use SSHFS on this rpi to retrieve the same files and never had any problem. WebDAV also works correctly.
However, s3fs works well on my laptop with the exact same configuration, except for kernel (4.13.3, amd64).
Also, thank you for your work!
Steps to reproduce

```
mkdir /mnt/s3 && chown me:me /mnt/s3
s3fs my-bucket /mnt/s3 -o passwd_file=~/.config/s3fs-credentials,allow_other,connect_timeout=15,retries=3,noatime,curldbg -d
cp /mnt/s3/big.file ~/
```

(`big.file` is a large file already present in the bucket.)

Additional Information
Hardware: Rpi 2 (armv7)
Distro: Archlinux
Kernel: Linux 4.9.52-1
s3fs version: 1.80
fuse version: 2.9.7
@ggtakec commented on GitHub (Oct 15, 2017):
@gui-don
This error appears to be EINVAL (the argument is negative or larger than the maximum file size) returned from the ftruncate function.
I don't know the details of the Raspberry Pi, but there seems to be some restriction on it, consistent with the "It works with relatively small files" behavior you describe.
s3fs opens a temporary file with tmpfile and changes its size with ftruncate.
Could you check whether the size ftruncate accepts on the Raspberry Pi is restricted?
Thanks in advance for your assistance.
@gaul commented on GitHub (Oct 16, 2017):
I wonder if the Raspberry Pi has a 32-bit off_t? Maybe try recompiling, replacing ftruncate with ftruncate64?
@gorky commented on GitHub (Dec 30, 2017):
The Raspberry Pis before the Pi 3 all have 32-bit CPUs; the Raspberry Pi 3 is the first 64-bit Pi. So maybe ftruncate64 would be best?
@arl commented on GitHub (Feb 10, 2018):
@gui-don did you have the opportunity to try with ftruncate64? If so, I'd be interested to know if that changed the outcome.
@ghost commented on GitHub (Feb 22, 2018):
I’m sorry, I don’t have the time to test it for now. I’ll post anything I try here.
@gaul commented on GitHub (Mar 15, 2019):
I successfully ran s3fs tests on a 64-bit Amazon ARM instance but lack a 32-bit instance that I can test on.
@gaul commented on GitHub (Apr 9, 2019):
If someone can provide a Raspberry Pi that I can ssh into or a working VM image we can investigate this.