[GH-ISSUE #2764] Chown and Chmod fail, except if nomultipart is set #1293

Open
opened 2026-03-04 01:52:52 +03:00 by kerem · 10 comments
Owner

Originally created by @gtz63 on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2764

Additional Information

Version of s3fs being used (s3fs --version)

V1.92

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse or dpkg -s fuse)

Debian or recompiled version

Provider (AWS, OVH, Hetzner, iDrive E2, ...)

Openstack Swift

Kernel information (uname -r)

5.10.0-21-cloud-amd64

GNU/Linux Distribution, if applicable (cat /etc/os-release)

Debian 11 / 12

How to run s3fs, if applicable

sudo ./s3fs mybucket mybucket -f -o use_path_request_style,nosuid,rw,url=https://swift.private.cloud,allow_other,dbglevel=debug

s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)

s3fs.cpp:put_headers(906): [path=/afile.txt]
  curl.cpp:MultipartHeadRequest(4342): [tpath=/afile.txt]
  curl.cpp:PreMultipartPostRequest(3807): [tpath=/afile.txt]

curl_handlerpool.cpp:GetHandler(79): Get handler from pool: rest = 31
curl_util.cpp:prepare_url(257): URL is https://swift.private.cloud/mybucket/afile.txt?uploads
curl_util.cpp:prepare_url(290): URL changed is https://swift.private.cloud/mybucket/afile.txt?uploads
curl.cpp:RequestPerform(2484): connecting to URL https://swift.private.cloud/mybucket/afile.txt?uploads
curl.cpp:insertV4Headers(2891): computing signature [POST] [/afile.txt] [uploads] []
curl_util.cpp:url_to_host(334): url is https://swift.private.cloud
curl.cpp:RequestPerform(2519): HTTP response code 200
curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
curl.cpp:CopyMultipartPostSetup(4207): [from=/afile.txt][to=/afile.txt][part=1]
curl_util.cpp:prepare_url(257): URL is https://swift.private.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl_util.cpp:prepare_url(290): URL changed is https://swift.private.be-ys.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl.cpp:CopyMultipartPostSetup(4266): copying... [from=/afile.txt][to=/afile.txt][part=1]
curl_multi.cpp:Request(324): [count=1]
curl_handlerpool.cpp:GetHandler(79): Get handler from pool: rest = 31
curl.cpp:RequestPerform(2484): connecting to URL https://swift.private.cloud/mybucket/afile?partNumber=1&uploadId=2YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl.cpp:insertV4Headers(2891): computing signature [PUT] [/afile.txt] [partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5] []
curl_util.cpp:url_to_host(334): url is https://swift.private.cloud
curl.cpp:RequestPerform(2571): HTTP response code 404 was returned, returning ENOENT
curl.cpp:RequestPerform(2572): Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000447f67b-00693049d1-ae969cdc-default</RequestId><HostId>ae969cdc-default-default</HostId></Error>
curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
curl_multi.cpp:MultiPerform(195): thread terminated with non-zero return code: -2
curl_multi.cpp:MultiRead(234): failed a request(404: https://swift.private.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5)

Details about issue

When using chmod or chown on a file whose size is above the multipart threshold, the operation fails on OpenStack Swift with an input/output error.
After some digging, I found that the put_headers function uses a multipart request, which Swift does not allow.

If i modify the put_headers function signature:

int put_headers(const char* path, headers_t& meta, bool is_copy, bool use_st_size = true, bool allow_multipart = true);

and change multipart condition:

if(!nocopyapi && !nomultipart && allow_multipart && size >= multipart_threshold){

the problem is fixed (passing allow_multipart as false in the s3fs_chmod and s3fs_chown functions).

Does changing the file owner and permissions really need a multipart query (the whole file is sent)? Or should put_headers not consider the file size as a threshold for a chmod/chown query? Rename may also be affected.

Thanks.


@juliogonzalez commented on GitHub (Dec 3, 2025):

Can you reproduce with 1.96?

You are using 1.92 which is more than two years old.

IIUC your s3fs does not come from your distribution (Debian), as 1.92 is neither on Debian 11 nor on Debian 12. Since it looks like you're using a self-compiled s3fs-fuse, it could be worth checking 1.96 :)

I wonder whether we should enhance the template to ask clearly where s3fs comes from (as is more or less already done for fuse)


@gtz63 commented on GitHub (Dec 4, 2025):

Thanks for your reply.

I do reproduce with v1.96 on Debian 13 (docker):

docker run -it --privileged --name s3fs_debian13 debian:13
apt-get install build-essential git libfuse-dev libcurl4-openssl-dev libxml2-dev media-types automake libtool pkg-config libssl-dev git fuse3 libfuse3-dev wget vim
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/refs/tags/v1.96.tar.gz
tar xvzf v1.96.tar.gz
cd s3fs-fuse-1.96/
./autogen.sh
./configure --prefix=/usr
make

The test:

echo xxxxxxxxxxxxxxx:yyyyyyyyyyyyyy > ~/.passwd-s3fs
chmod 400 ~/.passwd-s3fs
mkdir t
./src/s3fs testjhe t -f -o _netdev,use_path_request_style,nosuid,rw,url=https://swift.private.cloud,allow_other,dbglevel=debug
cd t
touch a
chmod 400 a --> OK
dd if=/dev/zero of=b bs=1M count=500
chmod 400 b --> chmod: changing permissions of 'b': No such file or directory
ls b --> OK
rm b --> OK

The error in logs:

2025-12-04T08:50:49.041Z [DBG] curl.cpp:insertV4Headers(2373): computing signature [PUT] [/b] [partNumber=1&uploadId=2~wWUxsLUPqJVusRZnwN8VJdhGm108KDK] []
2025-12-04T08:50:49.041Z [INF]       curl_util.cpp:url_to_host(268): url is https://swift.private.cloud
2025-12-04T08:50:49.065Z [INF]       curl.cpp:RequestPerform(2037): HTTP response code 404 was returned, returning ENOENT
2025-12-04T08:50:49.065Z [DBG] curl.cpp:RequestPerform(2038): Body Text: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>testjhe</BucketName><RequestId>tx00000000000000480a976-0069314b69-ae8dc0aa-default</RequestId><HostId>ae8dc0aa-default-default</HostId></Error>
2025-12-04T08:50:49.065Z [WAN] s3fs_threadreqs.cpp:multipart_put_head_req_threadworker(467): Put Head Request(/b->/b) got 404 response code.

Note that when using the "nocopyapi" flag, the chmod is slow but OK.

Patching s3fs.cpp (no multipart in the put_headers function for chmod) is also OK and fast.

According to https://docs.openstack.org/swift/latest/s3_compat.html, copy API is supported by Swift.

So this is more of a performance / compatibility issue with Swift. I'm no specialist of the S3 API and its different implementations: should multipart be used only when uploading, or also when using the copy API?

After some more tests: it depends on the OpenStack Swift/Ceph version. On the latest versions a chmod gives no error with or without multipart, but without multipart a chown is far faster.
Maybe having a different size threshold when there is really an upload versus when it is a server-side copy would optimize the requests? I mean, allow a different value when is_copy is true.


@ggtakec commented on GitHub (Dec 4, 2025):

Does Swift not support multipart uploads for PUT? (Or is it CopyObject?)

In any case, the current multipart upload settings for s3fs are the same for both object uploads and header (metadata) uploads.
So it seems there's no other way than to specify nomultipart or nocopyapi for your Swift.

If we were to adopt your proposal, we'd need to add a new option, so that's something we'll need to consider.

@gaul
Ideally, Swift should support mimicking the S3 API, but I think adding an option to s3fs would be fine.
(If it's just put_header, it could be prepared as an option with limited functionality.)


@gaul commented on GitHub (Dec 4, 2025):

OpenStack Swift does support S3 multipart: https://docs.openstack.org/swift/latest/s3_compat.html so I would first look to see if you have a misconfiguration or are running an older version.


@gtz63 commented on GitHub (Dec 5, 2025):

I did this benchmark: rsync 50 files of 100MB each to a new bucket.
s3fs v1.96 rebuilt on Debian 13, the S3 implementation being OpenStack Swift/Ceph v18.2.7.

Mount command:

./src/s3fs test t -f -o use_path_request_style,rw,url=https://swift.private.cloud
$time rsync -a ./ ../t
real    4m9.509s
user    0m1.078s
sys     0m5.410s

Same test with a modified line in s3fs.cpp:

if(!nocopyapi && !nomultipart && size >= (is_copy ? singlepart_copy_limit : multipart_threshold)){
$time rsync -a ./ ../t
real    1m37.080s
user    0m1.380s
sys     0m7.028s

As rsync syncs owner and permissions, the synchronization is 2.5x faster.
Can you try to reproduce on other S3 server implementations?
For me, as long as the file is less than the 5GB limit (https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), multipart is less efficient. I used the existing singlepart_copy_limit for testing; it may warrant a new parameter.

Rsync is one case; the effect is also visible when rotating 10x1GB log files, which completes almost instantly without multipart, because of deduplication (https://docs.ceph.com/en/latest/dev/deduplication/): no real copy is done server-side.


@ggtakec commented on GitHub (Dec 7, 2025):

First, I simplified this issue by testing s3fs-fuse with multipart uploading in put_headers enabled and disabled.
In my environment, I tried the rsync command with the same parameters, and the results showed that multipart was slightly faster.

The test was performed under the following conditions:

  • Prepare a directory containing three 128MB files for the test.
  • Perform rsync beforehand (ensuring the contents of the source and destination directories are identical).
  • Write a few bytes to only one of the files by appending.
  • [Test] Execute the rsync command.

These tests were then performed using v1.96, once without specifying the multipart_threshold option (using multipart) and once with multipart_threshold > 128MB (not using multipart).

The results showed that not specifying multipart_threshold was faster (or equivalent).

The internal processing is almost the same, so I couldn't find any reason why using multipart would be slower.

Since you're testing this with Swift, if the results are different, it could be due to a Swift-dependent issue.
Also, what happens if you add the --inplace option to prevent temporary files from being created when running rsync?
(I don't think the results will change.)

So, I'm sorry, but I haven't yet figured out why multipart is slower when enabled than when disabled.


@gtz63 commented on GitHub (Dec 8, 2025):

I agree that for file content, multipart is faster; that's why I do not want to disable it. Without multipart my rsync test takes 2m21s: data upload is slower, but changing the owner and file attributes is faster than in the reference test.

Do a chmod and chown without changing file content, then a "rsync -a". In my case, for files above the multipart threshold, it takes many seconds per file (the same time as a full upload).

And yes performance tuning is closely linked to S3 implementation.


@ggtakec commented on GitHub (Dec 8, 2025):

@gtz63
Are you able to use v1.96 (or v1.97 released today) or the master branch code?
This version has enhanced caching, so the results may be different from the v1.92 you're using.
(I think the behavior of rsync (probably without the --inplace option as well) involves excessive access to file stat information, so these caches should help.)


@gtz63 commented on GitHub (Dec 9, 2025):

These are my benchmark results with v1.97:

Mount without particular options:

time rsync -a ./ ../t
real    4m18.389s
user    0m1.458s
sys     0m7.221s

Patched put_changes:

time rsync -a ./ ../t
real    2m12.488s
user    0m1.381s
sys     0m6.160s

Retested v1.96 with patch:

time rsync -a ./ ../t
real    1m42.366s
user    0m1.283s
sys     0m7.213s

Briefly, it's about the same; tuning a different threshold for put_changes is better for my case.


@ggtakec commented on GitHub (Dec 9, 2025):

@gtz63 Thanks.
Judging from the results, it looks like the number of put_head calls may be slowing down performance.
