[GH-ISSUE #1229] Fail to copy 230GB file to s3 ceph using s3fs #659

Closed
opened 2026-03-04 01:47:38 +03:00 by kerem · 1 comment

Originally created by @alphainets on GitHub (Jan 20, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1229

Version of s3fs being used (s3fs --version)

1.85

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2

Kernel information (uname -r)

5.4.6-1.el7.elrepo.x86_64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

centos-7

s3fs command line used, if applicable

s3fs testing /mnt/testing -o passwd_file=/testing/passwd-s3fs -o url=http://192.168.0.100:7480 -o use_path_request_style -o dbglevel=dbg -f -o use_cache="/nova/tmp" -o curldbg

cp file /mnt/testing/file

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

fail to close: operation not supported

Details about issue

I have a problem uploading a 230GB file to a Ceph S3 server.
I saw it being cached locally,
but when the copy was done, the file size in S3 became 0.

May I know if there is any configuration mistake?
I have no problem when the file size is smaller, say 20GB.

kerem closed this issue 2026-03-04 01:47:38 +03:00

@alphainets commented on GitHub (Jan 20, 2020):

After reading the docs carefully, it should be related to the default multipart size.
I have changed the multipart size to fit my needs.
Now testing; if okay, I will close the issue~
