mirror of
https://github.com/s3fs-fuse/s3fs-fuse.git
synced 2026-04-25 05:16:00 +03:00
[GH-ISSUE #272] Support Backblaze B2 Cloud Storage #140
Originally created by @tarikd on GitHub (Sep 22, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/272
Hi everyone,
Backblaze just announced B2, an S3 competitor that is four times cheaper. I would love for s3fs-fuse to be compatible with this new cloud storage solution.
Cheers !
@gaul commented on GitHub (Sep 22, 2015):
B2 has a different API than S3:
https://www.backblaze.com/b2/docs/
It might be possible to add this to s3fs, but perhaps adding B2 support to S3Proxy would be more appropriate.
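As a sketch of that approach: S3Proxy is configured through a properties file, and a B2-backed instance might look like the following. All identity and credential values here are placeholders, and the exact property names should be checked against S3Proxy's own documentation:

```
# s3proxy.conf -- sketch only; credentials are placeholders
s3proxy.endpoint=http://127.0.0.1:8080
s3proxy.authorization=aws-v2-or-v4
s3proxy.identity=local-access-key
s3proxy.credential=local-secret-key
jclouds.provider=b2
jclouds.identity=B2_KEY_ID
jclouds.credential=B2_APPLICATION_KEY
```

s3fs would then be pointed at http://127.0.0.1:8080 with the local identity/credential pair, while S3Proxy translates requests to the B2 API.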
@pjz commented on GitHub (Nov 20, 2015):
I opened https://github.com/andrewgaul/s3proxy/issues/98 asking for support there.
@dreamflasher commented on GitHub (Jul 16, 2018):
As https://github.com/gaul/s3proxy/issues/98 is closed and s3proxy supports b2 -- does that mean this ticket here can be closed because s3fs is safely to be used with s3proxy?
@gaul commented on GitHub (Sep 11, 2018):
Can someone test this? s3fs stores directory metadata in objects ending in / but B2 does not allow this: https://www.backblaze.com/b2/docs/files.html#fileNames
@gaul commented on GitHub (Apr 9, 2019):
Closing due to inactivity.
@dreamflasher commented on GitHub (Apr 9, 2019):
So does this mean this doesn't work?
@gaul commented on GitHub (Apr 10, 2019):
s3fs will never support Backblaze natively since its B2 protocol differs from S3. It is possible that using s3fs with S3Proxy works but there are some B2 restrictions with objects ending with /. If this does not work, I recommend trying b2_fuse instead.
@JordanMagnuson commented on GitHub (Jul 21, 2019):
I also want to switch from using S3 to Backblaze B2 for my mounted file system. Can anyone confirm that they have found a dependable solution in either s3fs + S3Proxy or b2_fuse?
@OJFord commented on GitHub (May 21, 2020):
B2 now offers an S3-compatible API - i.e. aws --endpoint-url=s3.region.backblazeb2.com s3[api] ... should work. So, for anyone else looking to see if s3fs supports not-real-S3 S3-like storage, yep:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3
That should now work with the S3 endpoint of your B2 bucket.
@gaul commented on GitHub (May 21, 2020):
Backblaze supports parts of the S3 API but their implementation of MPU copy part has some issues. I reported this to their team but have not heard back yet. I tested via:
Note that some of these tests failed due to litter from previous test failures.
@gaul commented on GitHub (May 23, 2020):
Most of these tests failed due to an issue in the test configuration. However, test_multipart_mix still fails due to a Backblaze S3 incompatibility. Adding the flag -o nomixupload works around this. I also added a section for Backblaze here: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3#backblaze
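Putting that together, a B2 mount might look like the sketch below. The bucket name, key pair, and region are placeholders; the endpoint format follows Backblaze's S3-compatible URL scheme, and the region must match the one shown for your bucket:

```shell
# Store the B2 application key pair where s3fs expects credentials.
# KEY_ID and APPLICATION_KEY are placeholders.
echo "KEY_ID:APPLICATION_KEY" > "${HOME}/.passwd-s3fs"
chmod 600 "${HOME}/.passwd-s3fs"

# Mount via B2's S3-compatible endpoint; -o nomixupload works around
# the multipart-copy incompatibility described above.
s3fs my-bucket /mnt/b2 \
    -o passwd_file="${HOME}/.passwd-s3fs" \
    -o url=https://s3.us-west-002.backblazeb2.com \
    -o nomixupload
```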
@JordanMagnuson commented on GitHub (May 23, 2020):
Thanks for your work on this, gaul! What does the nomixupload flag actually do? At first I thought it was "no multi-part upload", but I see that is a different flag.
@gaul commented on GitHub (May 24, 2020):
With nomixupload, a small change to a file requires s3fs to upload (and possibly download) the entire object. With the mixupload optimization, s3fs can use multipart copy to avoid uploading most of the bytes. This is likely an oversight in the Backblaze implementation and I sent further debugging information to their team.
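To make the cost difference concrete, here is a rough illustration, assuming s3fs's default multipart size of 10 MB (an assumption, configurable via -o multipart_size) and a small 4 KiB edit to a 1 GiB object:

```shell
# Bytes the client must send for a 4 KiB in-place edit to a 1 GiB object.
part_bytes=$((10 * 1024 * 1024))       # assumed multipart part size
object_bytes=$((1024 * 1024 * 1024))   # total object size

with_mixupload=$part_bytes             # only the one changed part is uploaded;
                                       # unchanged parts use server-side copy
with_nomixupload=$object_bytes         # the whole object is re-uploaded

echo "mixupload:   $with_mixupload bytes"
echo "nomixupload: $with_nomixupload bytes"
```

With copy working, the client sends roughly one part (10 MB) instead of the full 1 GiB, which is why losing mixupload hurts for small edits to large files.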
@malaysf commented on GitHub (May 29, 2020):
Hello, I'm a developer at Backblaze. Thank you for testing s3fs with our S3 compatible API and for letting us know about this issue. I looked into the test failures and they are caused by a known limitation described under the "Successive Calls" section in the S3 compatibility guide:
I forced a delay in the calls locally and the tests succeeded. This is on our roadmap to address but I don't have an estimate on when this work will be completed.
@gaul commented on GitHub (May 30, 2020):
Thanks for your investigation! Fixing #1013 would work around some but not all of these symptoms but this is a subtle change. Can you respond here when Backblaze fixes the semantics of overwrites?
@malaysf commented on GitHub (May 30, 2020):
Yes, I will post back here when this is fixed.
@HaleTom commented on GitHub (Feb 13, 2022):
@malaysf has there been any updates on this? I'd love to use this with my B2 account :)