[GH-ISSUE #1822] "multipart_size" parameter seems to be invalid #934

Closed
opened 2026-03-04 01:50:03 +03:00 by kerem · 4 comments

Originally created by @CRblog on GitHub (Dec 24, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1822

Version of s3fs being used (s3fs --version)

V1.90 (commit:5de92e9)

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.2

s3fs command line used, if applicable

s3fs testbucket /data/testbucket -o url=http://xxxx -o passwd_file=/root/.passwd-s3fs -o dbglevel=info -o curldbg,use_path_request_style,allow_other -o retries=1 -o multipart_size=8 -o multireq_max=8 -o parallel_count=32

Details about issue

[root]# cp testdata testbucket/
cp: failed to close ‘testbucket/testdata’: Input/output error

When I copy a 200 MB file, the "multipart_size" parameter appears to be ignored. The S3 service I use requires a part size of 8 MB, but the request sent to the server contains 23 parts of 8 MB and one part of 16 MB. In theory every part should be 8 MB, except the last part, which may be smaller.

I have tested other file sizes and see the same behavior. I am not sure where the problem lies, but there is always one part that does not match multipart_size (I suspect it is the last part).

As far as I know, the "multipart_size" parameter worked correctly in this scenario in version 1.86.
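For reference, a quick sanity check of the arithmetic above (the numbers are taken from this report; nothing here calls s3fs itself):

```python
MiB = 1024 * 1024
file_size = 200 * MiB   # size of the copied file from the report
part_size = 8 * MiB     # multipart_size=8

# Every part should be part_size, except possibly a smaller final part.
expected_parts = -(-file_size // part_size)  # ceiling division

# What the server actually received, per the report: 23 x 8 MiB + 1 x 16 MiB.
observed_parts = [8 * MiB] * 23 + [16 * MiB]

assert sum(observed_parts) == file_size     # total bytes match either way
print(expected_parts, len(observed_parts))  # 25 expected vs. 24 observed
```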

kerem closed this issue 2026-03-04 01:50:03 +03:00

@skepticalwaves commented on GitHub (Jan 2, 2022):

Maybe related to #1816


@CRblog commented on GitHub (Jan 4, 2022):

> Maybe related to #1816

Thanks, but it doesn't seem to be the same problem.


@skepticalwaves commented on GitHub (Jan 4, 2022):

You may want to go through the full debug logging, as I did, to provide more information.


@ggtakec commented on GitHub (Jan 7, 2023):

@CRblog I'm sorry for my late reply.

You can specify the `nomixupload` option to solve this problem.
I believe this makes each part of the multipart upload the fixed size you expect.

This issue will be closed, but if you still have problems, please reopen it or post another issue.
Thanks in advance for your assistance.

#### NOTE

s3fs by default uses **MIX** multipart upload (the COPY API is included in the upload).
This tries to keep part sizes as consistent as possible while respecting the specified part size, but in some situations the part size varies.
Your issue is caused by this behavior.
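Following this suggestion, the original mount command could be adapted by adding `nomixupload` (a sketch only; the endpoint and credential paths are the placeholders from the report):

```shell
# Mount with nomixupload so every non-final part is exactly multipart_size (8 MB here).
s3fs testbucket /data/testbucket \
  -o url=http://xxxx \
  -o passwd_file=/root/.passwd-s3fs \
  -o use_path_request_style,allow_other \
  -o nomixupload \
  -o multipart_size=8 -o multireq_max=8 -o parallel_count=32
```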
