mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-24 21:06:02 +03:00
[GH-ISSUE #2764] Chown and Chmod fails, except if nomultipart is set #1293
Originally created by @gtz63 on GitHub (Dec 3, 2025).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/2764
Additional Information
Version of s3fs being used (s3fs --version): V1.92
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, or dpkg -s fuse): Debian or recompiled version
Provider (AWS, OVH, Hetzner, iDrive E2, ...): OpenStack Swift
Kernel information (uname -r): 5.10.0-21-cloud-amd64
GNU/Linux Distribution, if applicable (cat /etc/os-release): Debian 11 / 12
How to run s3fs, if applicable:
sudo ./s3fs mybucket mybucket -f -o use_path_request_style,nosuid,rw,url=https://swift.private.cloud,allow_other,dbglevel=debug
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs):
curl_handlerpool.cpp:GetHandler(79): Get handler from pool: rest = 31
curl_util.cpp:prepare_url(257): URL is https://swift.private.cloud/mybucket/afile.txt?uploads
curl_util.cpp:prepare_url(290): URL changed is https://swift.private.cloud/mybucket/afile.txt?uploads
curl.cpp:RequestPerform(2484): connecting to URL https://swift.private.cloud/mybucket/afile.txt?uploads
curl.cpp:insertV4Headers(2891): computing signature [POST] [/afile.txt] [uploads] []
curl_util.cpp:url_to_host(334): url is https://swift.private.cloud
curl.cpp:RequestPerform(2519): HTTP response code 200
curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
curl.cpp:CopyMultipartPostSetup(4207): [from=/afile.txt][to=/afile.txt][part=1]
curl_util.cpp:prepare_url(257): URL is https://swift.private.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl_util.cpp:prepare_url(290): URL changed is https://swift.private.be-ys.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl.cpp:CopyMultipartPostSetup(4266): copying... [from=/afile.txt][to=/afile.txt][part=1]
curl_multi.cpp:Request(324): [count=1]
curl_handlerpool.cpp:GetHandler(79): Get handler from pool: rest = 31
curl.cpp:RequestPerform(2484): connecting to URL https://swift.private.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5YMOwdcgXW44b690iPVQ9gM1otvnJXy5
curl.cpp:insertV4Headers(2891): computing signature [PUT] [/afile.txt] [partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5YMOwdcgXW44b690iPVQ9gM1otvnJXy5] []
curl_util.cpp:url_to_host(334): url is https://swift.private.cloud
curl.cpp:RequestPerform(2571): HTTP response code 404 was returned, returning ENOENT
curl.cpp:RequestPerform(2572): Body Text:
<Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000447f67b-00693049d1-ae969cdc-default</RequestId><HostId>ae969cdc-default-default</HostId></Error>
curl_handlerpool.cpp:ReturnHandler(101): Return handler to pool
curl_multi.cpp:MultiPerform(195): thread terminated with non-zero return code: -2
curl_multi.cpp:MultiRead(234): failed a request(404: https://swift.private.cloud/mybucket/afile.txt?partNumber=1&uploadId=2~YMOwdcgXW44b690iPVQ9gM1otvnJXy5)
Details about issue
When using chmod or chown on a file whose size is above the multipart threshold, the operation fails on OpenStack Swift with an input/output error.
After some digging, I found that the put_headers function uses a multipart request, which Swift does not allow.
If I modify the put_headers function signature:
int put_headers(const char* path, headers_t& meta, bool is_copy, bool use_st_size = true, bool allow_multipart = true);
and change the multipart condition:
if(!nocopyapi && !nomultipart && allow_multipart && size >= multipart_threshold){
the problem is fixed (passing allow_multipart as false in the s3fs_chmod and s3fs_chown functions).
Does changing a file's owner and permissions really need a multipart query (the whole file is sent)? Or should put_headers not consider the file size against the threshold for a chmod/chown query? Rename may also be affected.
Thanks.
@juliogonzalez commented on GitHub (Dec 3, 2025):
Can you reproduce with 1.96?
You are using 1.92 which is more than two years old.
IIUC your s3fs does not come from your distribution (Debian), as 1.92 is in neither Debian 11 nor Debian 12. Since you appear to be using a self-compiled s3fs-fuse, it could be worth checking 1.96 :)
I wonder if we should enhance the template to ask clearly where s3fs comes from (as is more or less done already for fuse).
@gtz63 commented on GitHub (Dec 4, 2025):
Thanks for your reply.
I do reproduce with v1.96 on Debian 13 (docker):
The test:
The error in logs:
Note that when using the "nocopyapi" flag, the chmod is slow but OK.
Patching s3fs.cpp (no multipart in the put_headers function for chmod) is also OK and fast.
According to https://docs.openstack.org/swift/latest/s3_compat.html, copy API is supported by Swift.
So this is more of a performance/compatibility issue with Swift. I'm not a specialist in the S3 API and its different implementations: should multipart be used only when uploading, or also when using the copy API?
After some more tests: it depends on the OpenStack Swift/Ceph version. On the latest versions, a chmod gives no error with or without multipart, but without multipart a chown is far faster.
Maybe having a different size threshold for a real upload versus a server-side copy could optimize the requests? I mean, allow a different value to be used when is_copy is true.
@ggtakec commented on GitHub (Dec 4, 2025):
Does Swift not support multipart uploads for PUT? (Or is it CopyObject?)
In any case, the current multipart upload settings in s3fs are the same for both object uploads and header (metadata) uploads.
So it seems there's no other way than to specify nomultipart or nocopyapi for your Swift.
If we were to adopt your proposal, we'd need to add a new option, so that's something we'll need to consider.
@gaul
Ideally, Swift should support mimicking the S3 API, but I think adding an option to s3fs would be fine.
(If it's just put_header, it could be prepared as an option with limited functionality.)
@gaul commented on GitHub (Dec 4, 2025):
OpenStack Swift does support S3 multipart: https://docs.openstack.org/swift/latest/s3_compat.html so I would first look to see if you have a misconfiguration or are running an older version.
@gtz63 commented on GitHub (Dec 5, 2025):
I did this benchmark: rsync 50 files of 100MB each on a new bucket.
s3fs v1.96 rebuilt on Debian 13, the S3 implementation being OpenStack Swift/Ceph v18.2.7
Mount command:
Same test with a modified line in s3fs.cpp:
As rsync also syncs owner and permissions, the synchronization is 2.5x faster.
Can you try to reproduce on other S3 server implementations?
For me, as long as the file is under the 5GB limit (https://docs.aws.amazon.com/AmazonS3/latest/API/API_CopyObject.html), multipart is less efficient. I used the existing single_part_copy for testing; it may warrant a new parameter.
Rsync is one case; the effect is also visible on log rotation of 10x1GB files, which completes almost instantly without multipart thanks to deduplication (https://docs.ceph.com/en/latest/dev/deduplication/), since no real copy is done server side.
@ggtakec commented on GitHub (Dec 7, 2025):
First, I simplified this issue by testing s3fs-fuse with multipart uploading in put_header enabled and disabled.
In my environment, I ran the rsync command with the same parameters, and the results showed that multipart was slightly faster.
The test was performed under the following conditions:
These tests were performed using v1.96.
The tests were run without specifying the multipart_threshold option (using multipart), and with multipart_threshold > 128MB (not using multipart). The results showed that not specifying multipart_threshold was faster (or equivalent). The internal processing is almost the same, so I couldn't find any reason why using multipart would be slower.
Since you're testing this with Swift, if the results are different, it could be due to a Swift-dependent issue.
Also, what happens if you add the --inplace option to prevent temporary files from being created when running rsync? (I don't think the results will change.)
So, I'm sorry, but I haven't yet figured out why multipart is slower when enabled than when disabled.
@gtz63 commented on GitHub (Dec 8, 2025):
I agree that, for file content, multipart is faster; that's why I do not want to disable it. Without multipart, my rsync test takes 2m21s: the data upload is slower, but changing owner and file attributes is faster than in the reference test.
Do a chmod and chown without changing file content, then run "rsync -a". In my case, for files above the multipart threshold, it takes many seconds per file (the same time as a full upload).
And yes performance tuning is closely linked to S3 implementation.
@ggtakec commented on GitHub (Dec 8, 2025):
@gtz63
Are you able to use v1.96 (or v1.97 released today) or the master branch code?
This version has enhanced caching, so the results may be different from the v1.92 you're using.
(I think the behavior of rsync (probably even without the --inplace option) involves excessive access to file stat information, so these caches should help.)
@gtz63 commented on GitHub (Dec 9, 2025):
These are my bench results with v1.97:
Mount without particular options:
Patched put_headers:
Retested v1.96 with the patch:
Briefly, the results are the same; tuning a different threshold for put_headers is better in my case.
@ggtakec commented on GitHub (Dec 9, 2025):
@gtz63 Thanks.
Judging from the results, it looks like the number of put_headers calls may be slowing down performance.