[GH-ISSUE #1466] Implement auto restoration mechanism from glacier (or any cold storages) #773

Open
opened 2026-03-04 01:48:39 +03:00 by kerem · 1 comment

Originally created by @Bizzonium on GitHub (Oct 31, 2020).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/1466

Additional Information

Since Scaleway officially supports s3fs and C14 Cold Storage supports the object restore API, it would be very helpful to implement a "smart"/automatic restore mechanism (https://www.scaleway.com/en/docs/object-storage-glacier/).

The state of the restoration is indicated by the `x-amz-restore` HTTP header:

```
$> aws s3api restore-object --bucket bucket --key object --restore-request Days=30
$> aws s3api head-object --bucket bucket --key object
{
    ...
    "Restore": "ongoing-request=\"true\"",
    "LastModified": "Wed, 09 Oct 2019 17:29:44 GMT",
    "StorageClass": "GLACIER"
}
[After some time...]
$> aws s3api head-object --bucket bucket --key object
{
    ...
    "Restore": "ongoing-request=\"false\", expiry-date=\"Fri, 8 Nov 2019 00:00:00 UTC\"",
    "LastModified": "Wed, 09 Oct 2019 17:30:47 GMT",
}
```

I imagine the mechanism working like this:

  1. Add a CLI option to enable the auto-restore feature and set the default number of days to keep objects restored.
  2. After a request for a remote file, s3fs should fetch the object metadata and check the `x-amz-storage-class` header.
  3. If `x-amz-storage-class: GLACIER`, attempt to restore the file.
  4. If the object is not restored within a reasonable amount of time, return a timeout error to the file system (I don't know how that actually works); otherwise, get the object.
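The steps above can be sketched against the S3 API using boto3. This is a hedged sketch, not s3fs code: the helper name `ensure_restored`, the polling interval, and the timeout default are all assumptions; `head_object`, `restore_object`, and the `Expedited`/`Standard`/`Bulk` restore tiers are real parts of the S3 API.

```python
import time

def ensure_restored(s3, bucket, key, days=30, tier="Standard",
                    poll_seconds=60, timeout_seconds=3600):
    """Restore an archived object and wait until it is readable.

    s3 is a boto3 S3 client (or any object with compatible
    head_object/restore_object methods). Returns True once the
    temporary copy is available, False on timeout (step 4).
    """
    head = s3.head_object(Bucket=bucket, Key=key)
    if head.get("StorageClass") != "GLACIER":
        return True  # step 2/3: not archived, nothing to do
    if "ongoing-request" not in head.get("Restore", ""):
        # Step 3: no restore job in flight yet, so start one.
        s3.restore_object(
            Bucket=bucket, Key=key,
            RestoreRequest={"Days": days,
                            "GlacierJobParameters": {"Tier": tier}})
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        restore = s3.head_object(Bucket=bucket, Key=key).get("Restore", "")
        if 'ongoing-request="false"' in restore:
            return True  # temporary copy ready; the read can be retried
        time.sleep(poll_seconds)
    return False  # step 4: caller reports a timeout to the file system
```

In s3fs itself this would live in the C++ request path rather than Python, but the control flow (HEAD, conditional RestoreObject, poll, timeout) would be the same.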

Or maybe there is an existing way to do this for non-Amazon S3 storages (https://aws.amazon.com/ru/blogs/storage/automate-restore-of-archived-objects-through-aws-storage-gateway/)?

Version of s3fs being used (s3fs --version)

Compiled according to Scaleway's guide.
Amazon Simple Storage Service File System V1.87 (commit:38e1eaa) with OpenSSL

Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)

2.9.9

Kernel information (uname -r)

5.4.0-1026-kvm

GNU/Linux Distribution, if applicable (cat /etc/os-release)

NAME="Ubuntu"
VERSION="20.04.1 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.1 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal

/etc/fstab entry, if applicable

bucket /var/www/s3-storage/ fuse.s3fs _netdev,allow_other,use_path_request_style,nonempty,ensure_diskfree=35000,default_acl=public-read,parallel_count=15,multipart_size=32,nocopyapi,use_cache="",endpoint=nl-ams,url=https://s3.nl-ams.scw.cloud 0 0

@gaul commented on GitHub (Jun 30, 2021):

@Bizzonium I cleaned up the error handling which should allow implementing auto-restore. This should be straightforward just sending an RPC and retrying the read. Would you like to try implementing this? It would need some configuration to allow enabling/disabling, expedited/standard/bulk, etc.

<!-- gh-comment-id:871039377 -->