[GH-ISSUE #2243] File too large when generating files or copy files to aws bucket #1132
Originally created by @chunxuan-hs on GitHub (Jul 30, 2023).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2243
Additional Information
- Version of s3fs being used (s3fs --version)
- Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)
- Kernel information (uname -r): 5.19.0-1029-aws
- GNU/Linux Distribution, if applicable (cat /etc/os-release)
- How to run s3fs, if applicable
  - [x] command line
  - [ ] /etc/fstab
- s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
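The commands in parentheses above are what the template asks reporters to run; gathered together they look roughly like the following (a sketch only, assuming a Debian/Ubuntu-style system for the package and syslog commands):

```sh
s3fs --version                 # s3fs version
pkg-config --modversion fuse   # fuse version (or: rpm -qi fuse / dpkg -s fuse)
uname -r                       # kernel, e.g. 5.19.0-1029-aws
cat /etc/os-release            # distribution
grep s3fs /var/log/syslog      # s3fs syslog messages
journalctl | grep s3fs         # same, via journald
```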
Details about issue

Tried cat file1 file2 > file3 and got the error "File too large". Tried generating file3 first and copying it to the bucket via rsync, and got the same error.

Initially I used s3fs -o compat_dir bucket_name local_dir, and played with a few other options, but it didn't work. Any suggestions? Thanks!
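For reference, a minimal reproduction of the scenario described above might look like this (bucket name, mount point, and file paths are placeholders, not from the original report):

```sh
# Mount the bucket as in the report.
s3fs -o compat_dir mybucket /mnt/s3

# Concatenating large files directly into the mount reportedly fails
# with "File too large" (EFBIG).
cat /data/file1 /data/file2 > /mnt/s3/file3

# Generating the file locally and copying it with rsync hits the same error.
cat /data/file1 /data/file2 > /tmp/file3
rsync /tmp/file3 /mnt/s3/
```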
@gaul commented on GitHub (Sep 8, 2023):
Let's be clear about which options you are using. The defaults use multi-part upload, which generally enables > 5 GB objects. Specifying -o nomultipart limits s3fs to single-part uploads. AWS S3 does not support > 5 GB single-part objects, although some compatible object stores do. What are you doing specifically?
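To make the distinction concrete, here is a sketch of the two mount modes referred to above (bucket name and mount point are placeholders):

```sh
# Default mount: writes go through S3 multi-part upload, so objects
# larger than 5 GB are generally possible.
s3fs mybucket /mnt/s3

# With -o nomultipart every object is uploaded in a single PUT;
# AWS S3 caps single-part uploads at 5 GB, so larger files fail.
s3fs -o nomultipart mybucket /mnt/s3
```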
@chunxuan-hs commented on GitHub (Sep 13, 2023):
The initial command I used was s3fs -o compat_dir bucket_name local_dir. However, it failed to copy a file larger than 100 GB. Then I searched on the internet, and it seems people suggested using -o nomultipart. So I tried s3fs -o compat_dir -o nomultipart bucket_name local_dir, but the same error appeared.
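The ~100 GB ceiling reported here is consistent with S3's limit of 10,000 parts per multi-part upload combined with s3fs's default part size of 10 MB (10 MB x 10,000 parts is roughly 100 GB). If that is the cause, raising the part size with -o multipart_size should raise the ceiling, whereas -o nomultipart lowers it to 5 GB as noted above. A minimal sketch under that assumption (bucket name and mount point are placeholders, not a confirmed resolution from this thread):

```sh
# Assumption: the ~100 GB failure comes from 10,000 parts x 10 MB default part size.
# multipart_size is given in MB; a larger value raises the maximum object size,
# e.g. 128 MB x 10,000 parts is roughly 1.25 TB.
s3fs -o compat_dir -o multipart_size=128 mybucket /mnt/s3
```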