[GH-ISSUE #186] Consider defaulting to big writes #103

Closed
opened 2026-03-04 01:42:09 +03:00 by kerem · 2 comments
Originally created by @gaul on GitHub (May 4, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/186

Based on feedback from @kahing, s3fs can greatly benefit from `-obig_writes -omax_write=$((1024 * 1024))`. I benchmarked a 5 GB copy within AWS us-east with these options:
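For context, `max_write=$((1024 * 1024))` sets the FUSE write-request size to 1 MiB; without `big_writes`, FUSE 2.x historically split writes into single 4 KiB pages. A rough sketch of the request-count difference for this copy (assuming every request is full-sized):

```shell
# Approximate FUSE write-request counts for the 5 GB copy below
bytes=5242880000
echo "4 KiB requests: $((bytes / 4096))"       # without big_writes
echo "1 MiB requests: $((bytes / 1048576))"    # with max_write=1 MiB
```

Fewer, larger requests mean fewer kernel/userspace round trips per byte written.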

```
$ s3fs bucket mnt -o passwd_file="${HOME}/.passwd-s3fs" -osigv2 -oparallel_count=32
$ dd if=/dev/zero of=mnt/10GB bs=10M count=500
500+0 records in
500+0 records out
5242880000 bytes (5.2 GB) copied, 144.847 s, 36.2 MB/s
```

```
$ s3fs bucket mnt -o passwd_file="${HOME}/.passwd-s3fs" -osigv2 -oparallel_count=32 -obig_writes -omax_write=$((1024 * 1024))
$ time dd if=/dev/zero of=mnt/10GB bs=10M count=500
500+0 records in
500+0 records out
5242880000 bytes (5.2 GB) copied, 90.9526 s, 57.6 MB/s
```
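As a sanity check, dd's MB/s figures follow directly from bytes / elapsed seconds (a quick awk sketch, using the numbers above):

```shell
# dd reports throughput as bytes / elapsed seconds / 1e6
awk 'BEGIN {
  printf "%.1f MB/s\n", 5242880000 / 144.847 / 1e6;  # without big_writes
  printf "%.1f MB/s\n", 5242880000 / 90.9526 / 1e6;  # with big_writes
}'
```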
kerem closed this issue 2026-03-04 01:42:10 +03:00

@gaul commented on GitHub (May 6, 2015):

We can observe this effect more prominently using S3Proxy with the in-memory backend:

```
$ cat transient.conf
s3proxy.endpoint=http://localhost:8081
s3proxy.authorization=aws-v2
s3proxy.identity=local-identity
s3proxy.credential=local-credential
jclouds.provider=transient
jclouds.identity=remote-identity
jclouds.credential=remote-credential

$ s3proxy --properties transient.conf
```

```
$ s3fs gaultest /mnt \
    -o createbucket \
    -o passwd_file=passwd-s3fs \
    -o sigv2 \
    -o url=http://127.0.0.1:8081 \
    -o use_path_request_style

$ dd if=/dev/zero of=/mnt/out bs=16M count=16
16+0 records in
16+0 records out
268435456 bytes (268 MB) copied, 5.81843 s, 46.1 MB/s
```

```
$ s3fs gaultest /mnt \
    -o createbucket \
    -o passwd_file=passwd-s3fs \
    -o sigv2 \
    -o url=http://127.0.0.1:8081 \
    -o use_path_request_style \
    -o big_writes \
    -o max_write=$((1024 * 1024))

$ dd if=/dev/zero of=gaulbackup/out bs=16M count=16
16+0 records in
16+0 records out
268435456 bytes (268 MB) copied, 2.96685 s, 90.5 MB/s
```
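The reported throughput figures are consistent with dd's byte and time counts, and with the in-memory backend `big_writes` roughly doubles throughput (an awk check on the numbers above):

```shell
# Recompute MB/s from dd's counts and derive the resulting speedup
awk 'BEGIN {
  without = 268435456 / 5.81843 / 1e6;   # plain mount
  with    = 268435456 / 2.96685 / 1e6;   # big_writes + max_write=1 MiB
  printf "without big_writes: %.1f MB/s\n", without;
  printf "with big_writes:    %.1f MB/s\n", with;
  printf "speedup:            %.2fx\n", with / without;
}'
```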
<!-- gh-comment-id:99552743 --> @gaul commented on GitHub (May 6, 2015): We can observe this effect more prominently using S3Proxy with the in-memory backend: ``` $ cat transient.conf s3proxy.endpoint=http://localhost:8081 s3proxy.authorization=aws-v2 s3proxy.identity=local-identity s3proxy.credential=local-credential jclouds.provider=transient jclouds.identity=remote-identity jclouds.credential=remote-credential $ s3proxy --properties transient.conf ``` ``` $ s3fs gaultest /mnt \ -o createbucket \ -o passwd_file=passwd-s3fs \ -o sigv2 \ -o url=http://127.0.0.1:8081 \ -o use_path_request_style $ dd if=/dev/zero of=/mnt/out bs=16M count=16 16+0 records in 16+0 records out 268435456 bytes (268 MB) copied, 5.81843 s, 46.1 MB/s ``` ``` $ s3fs gaultest /mnt \ -o createbucket \ -o passwd_file=passwd-s3fs \ -o sigv2 \ -o url=http://127.0.0.1:8081 \ -o use_path_request_style \ -o big_writes \ -o max_write=$((1024 * 1024)) $ dd if=/dev/zero of=gaulbackup/out bs=16M count=16 16+0 records in 16+0 records out 268435456 bytes (268 MB) copied, 2.96685 s, 90.5 MB/s ```

@gaul commented on GitHub (Nov 15, 2018):

Fixed by #820.
