[GH-ISSUE #1611] [Question] Multiple clients mounting the same bucket #847

Closed
opened 2026-03-04 01:49:19 +03:00 by kerem · 2 comments
Owner

Originally created by @GoTheExtraMile on GitHub (Mar 26, 2021).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1611

Hi!
In the limitations you say:

> - non-AWS providers may have eventual consistency so reads can temporarily yield stale data (AWS offers read-after-write consistency since Dec 2020)
> - no coordination between multiple clients mounting the same bucket

Can I assume that when using Amazon storage, it is strongly consistent for multiple clients to mount the same bucket?
kerem closed this issue 2026-03-04 01:49:19 +03:00
Author
Owner

@gaul commented on GitHub (Mar 28, 2021):

s3fs has other sources of inconsistency due to caching. For example, if client1 calls stat on a file and then client2 appends to it, client1 will see the old metadata until `stat_cache_expire` elapses, by default 900 seconds. Even if you reduce this to 0, there is no coordination between the s3fs clients. Further, s3fs has close-to-open semantics, meaning that client1's writes will not be published to client2 until a close or fsync. In other network file systems like SMB2, a client can create leases on files, which ensures that other clients cannot access them until the lease expires or is revoked.
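The staleness window described above can be modeled with a toy TTL cache. This is a minimal sketch, not s3fs code: the class and the in-memory "backend" dict are illustrative, and only the `stat_cache_expire` option name comes from s3fs.

```python
import time

class StatCache:
    """Toy model of a per-client stat cache with a TTL, in the spirit of
    s3fs's stat_cache_expire (simplified: real s3fs caches full stat
    structs keyed by path)."""

    def __init__(self, expire_seconds):
        self.expire = expire_seconds
        self.cache = {}  # path -> (size, cached_at)

    def stat(self, path, backend):
        entry = self.cache.get(path)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.expire:
            return entry[0]          # served from cache: may be stale
        size = backend[path]         # refresh from the object store
        self.cache[path] = (size, now)
        return size

# "backend" stands in for the bucket; client1 has the default 900 s TTL.
backend = {"file": 10}
client1 = StatCache(expire_seconds=900)

first = client1.stat("file", backend)    # 10, now cached
backend["file"] = 20                     # "client2" appends to the object
second = client1.stat("file", backend)   # still 10: stale until expiry

print(first, second)

# With expire_seconds=0 every stat goes to the backend, but as noted
# above this still gives no coordination between the clients.
client1_no_cache = StatCache(expire_seconds=0)
client1_no_cache.stat("file", backend)
backend["file"] = 30
print(client1_no_cache.stat("file", backend))
```

Running this shows client1 reporting the old size 10 even after the write, while the TTL-0 client sees each change immediately (at the cost of one request per stat).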

Author
Owner

@GoTheExtraMile commented on GitHub (Mar 31, 2021):

thanks
