[GH-ISSUE #1223] throttle max bandwidth or fuse I/O writes #654

Closed
opened 2026-03-04 01:47:35 +03:00 by kerem · 6 comments
Owner

Originally created by @emanuelelevo on GitHub (Jan 3, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1223

I'm looking for a way to throttle the bandwidth usage or, alternatively, the FUSE I/O writes. Is there any combination of options that may help achieve this?

kerem closed this issue and added the `need info` label (2026-03-04 01:47:35 +03:00)

@gaul commented on GitHub (Jan 5, 2020):

Is there a generic way to do this for other file systems? It does not appear that `ionice` or `blkio` limits will work for network file systems. Perhaps you can limit a process's socket I/O? This is not exactly what you want, but `parallel_count` can limit the number of parallel writes.
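To make the `parallel_count` suggestion concrete, here is a minimal mount sketch; the bucket name `mybucket`, the mountpoint `/mnt/s3`, and the credentials path are placeholders, not values from this thread:

```shell
# Mount with fewer parallel transfer threads to reduce burst bandwidth.
# "mybucket" and "/mnt/s3" are hypothetical; adjust to your setup.
s3fs mybucket /mnt/s3 \
    -o passwd_file="${HOME}/.passwd-s3fs" \
    -o parallel_count=2
```

This caps concurrency rather than bandwidth: a single large multipart upload can still saturate the link, which is why the thread moves on to traffic shaping.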


@emanuelelevo commented on GitHub (Jan 5, 2020):

I would use cgroups or systemd resource control for /dev/ , but this doesn't apply to FUSE mounts. `parallel_count` doesn't seem to help much. I could mark outgoing connections to the S3 endpoint and rate-limit them; that should work.
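The "mark outgoing connections and rate-limit" idea can be sketched with an iptables `MARK` target plus a tc `fw` filter. This is a hedged sketch, not from the thread: the CIDR is an example placeholder, `ens5` is taken from the later comment, and the 10mbit rate is arbitrary.

```shell
# Mark packets destined for the S3 CIDR (placeholder; look up your
# region's published ranges) in the mangle table.
S3_CIDR=52.218.0.0/17
iptables -t mangle -A OUTPUT -d "$S3_CIDR" -j MARK --set-mark 10

# Shape marked flows with an HTB class selected by an fw filter.
tc qdisc add dev ens5 root handle 1: htb
tc class add dev ens5 parent 1: classid 1:10 htb rate 10mbit
tc filter add dev ens5 parent 1: protocol ip prio 1 handle 10 fw flowid 1:10
```

Note this still matches by destination, so any process talking to that CIDR is shaped; it is not scoped to the s3fs process.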


@gaul commented on GitHub (Jan 5, 2020):

Please report back what works or doesn't work. If we can document a generic solution, that is better than s3fs implementing something custom. Thanks!


@emanuelelevo commented on GitHub (Jan 6, 2020):

Hi Andrew, I managed to throttle all upload traffic to the S3 region IP addresses with traffic control (`tc`):

```
# Root HTB qdisc; "default 30" points at a class that is never defined
# below, so traffic not matched by a filter stays unshaped.
tc qdisc add dev ens5 root handle 1: htb default 30
tc class add dev ens5 parent 1: classid 1:1 htb rate $UPLOAD_LIMIT
tc class add dev ens5 parent 1: classid 1:2 htb rate $UPLOAD_LIMIT
tc class add dev ens5 parent 1: classid 1:3 htb rate $UPLOAD_LIMIT
tc filter add dev ens5 protocol ip parent 1:0 prio 1 u32 match ip dst $S3_REGION_IP_CIDR_1 flowid 1:1
tc filter add dev ens5 protocol ip parent 1:0 prio 1 u32 match ip dst $S3_REGION_IP_CIDR_2 flowid 1:2
tc filter add dev ens5 protocol ip parent 1:0 prio 1 u32 match ip dst $S3_REGION_IP_CIDR_3 flowid 1:3
```

However, this is a workaround: all upload traffic to the S3 region gets shaped, not just s3fs. Marking the s3fs PID with iptables and applying tc to the marked traffic does not seem to work because s3fs forks.
I believe it would be helpful to have a built-in max_bandwidth option for s3fs (the AWS CLI implemented such an option some time ago).


@gaul commented on GitHub (Feb 2, 2020):

Could you test [trickle](https://github.com/mariusae/trickle)?
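trickle throttles a single process's socket I/O via an `LD_PRELOAD` shim, which would scope the limit to s3fs itself. A hedged invocation sketch, with placeholder bucket/mountpoint and an untested assumption that running s3fs in the foreground (`-f`) keeps the daemonizing fork from escaping the preloaded shim:

```shell
# -s: standalone mode (no trickled daemon); -u: upload cap in KB/s.
# "mybucket" and "/mnt/s3" are placeholders.
trickle -s -u 512 s3fs mybucket /mnt/s3 -f \
    -o passwd_file="${HOME}/.passwd-s3fs"
```

Caveat: LD_PRELOAD-based tools do not work on statically linked or setuid binaries, so this needs verifying against the local s3fs build.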


@gaul commented on GitHub (Jul 26, 2020):

Closing due to inactivity. Please reopen if symptoms persist.
