[GH-ISSUE #846] I have some problems with how to verify my backup #493

Closed
opened 2026-03-04 01:46:06 +03:00 by kerem · 2 comments
Owner

Originally created by @linux0x5c on GitHub (Oct 29, 2018).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/846

Issue 1: I use s3fs to mount a bucket on my system. When I use cp, tar, or other tools to back up my system, how can I verify that the backup was completely transmitted to AWS S3? Is there any way to check, such as an HTTP status code or some other return value? The ETag method does not seem generally applicable, since for multipart uploads the ETag is not the MD5 of the object.

Issue 2: I don't understand the max_stat_cache_size option. What cache does it refer to? Can you give me an example?

I am looking forward to your answer. Thank you very much.

kerem closed this issue 2026-03-04 01:46:06 +03:00
Author
Owner

@gaul commented on GitHub (Feb 2, 2019):

Issue 1: Could you just check the exit code of cp via `echo $?`? It will be non-zero if an error occurred.
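A minimal sketch of that check, using throwaway local directories in place of a real s3fs mount point (all paths below are hypothetical stand-ins):

```shell
# Hypothetical source; in practice the destination would be your s3fs
# mount point, e.g. /mnt/s3bucket.
SRC=$(mktemp -d)
DST=$(mktemp -d)   # stand-in for the s3fs mount
echo "data" > "$SRC/file.txt"

cp -r "$SRC"/. "$DST"/
status=$?          # capture immediately; the next command overwrites $?
if [ "$status" -eq 0 ]; then
    echo "backup ok"
else
    echo "backup failed with exit code $status"
fi
```

Note that `$?` must be saved right after `cp` returns, because every subsequent command (including the `[` test itself) replaces it.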

Issue 2: s3fs caches object metadata from S3 since fetching it can take a hundred milliseconds or more. Access is instantaneous when the entry is present in the cache. To limit memory use, s3fs limits the maximum number of entries. This cache is ephemeral and repopulates when s3fs restarts.
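For context, the limit is set at mount time via the max_stat_cache_size option; the bucket name and mount point below are placeholders, not values from this thread:

```shell
# Allow up to 100,000 cached stat (metadata) entries.
# "mybucket" and "/mnt/s3" are hypothetical names.
s3fs mybucket /mnt/s3 -o max_stat_cache_size=100000
```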

Author
Owner

@linux0x5c commented on GitHub (Feb 11, 2019):

I got it, thanks for your answer.
