[GH-ISSUE #272] Support Backblaze B2 Cloud Storage #140

Closed
opened 2026-03-04 01:42:33 +03:00 by kerem · 17 comments

Originally created by @tarikd on GitHub (Sep 22, 2015).
Original GitHub issue: https://github.com/s3fs-fuse/s3fs-fuse/issues/272

Hi everyone,

Backblaze just announced B2, an S3 competitor that is about four times cheaper. I would love for s3fs-fuse to be compatible with this new cloud storage solution.

Cheers!

kerem closed this issue 2026-03-04 01:42:33 +03:00

@gaul commented on GitHub (Sep 22, 2015):

B2 has a different API than S3:

https://www.backblaze.com/b2/docs/

It might be possible to add this to s3fs, but perhaps adding B2 support to S3Proxy would be more appropriate.


@pjz commented on GitHub (Nov 20, 2015):

I opened https://github.com/andrewgaul/s3proxy/issues/98 asking for support there.


@dreamflasher commented on GitHub (Jul 16, 2018):

As https://github.com/gaul/s3proxy/issues/98 is closed and s3proxy supports B2 -- does that mean this ticket can be closed because s3fs can safely be used with s3proxy?


@gaul commented on GitHub (Sep 11, 2018):

Can someone test this? s3fs stores directory metadata in objects ending in `/` but B2 does not allow this:

https://www.backblaze.com/b2/docs/files.html#fileNames


@gaul commented on GitHub (Apr 9, 2019):

Closing due to inactivity.


@dreamflasher commented on GitHub (Apr 9, 2019):

So does this mean this doesn't work?


@gaul commented on GitHub (Apr 10, 2019):

s3fs will never support Backblaze natively since its B2 protocol differs from S3. It is possible that using s3fs with S3Proxy works, but there are some B2 restrictions on objects ending with `/`. If this does not work I recommend trying b2_fuse (https://github.com/sondree/b2_fuse) instead.


@JordanMagnuson commented on GitHub (Jul 21, 2019):

I also want to switch from S3 to Backblaze B2 for my mounted file system. Can anyone confirm that they have found a dependable solution in either s3fs + s3proxy or b2_fuse?


@OJFord commented on GitHub (May 21, 2020):

B2 now offers an S3 compatible API (https://www.backblaze.com/b2/docs/s3_compatible_api.html) - i.e. `aws --endpoint-url=s3.region.backblazeb2.com s3[api] ...` should work.

So, for anyone else looking to see if s3fs supports not-real-S3 S3-like storage, yep:
https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3

That should now work with the S3 endpoint of your B2 bucket.
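For anyone trying this, the mount would look something like the following. This is a sketch, not a command from this thread: the bucket name, region endpoint, and credentials path below are placeholders you must replace with your own values.

```shell
# Credentials file: one line of the form keyID:applicationKey, readable only by you
echo 'KEYID:APPLICATIONKEY' > "${HOME}/.passwd-s3fs-b2"
chmod 600 "${HOME}/.passwd-s3fs-b2"

# Mount the bucket via its B2 S3-compatible endpoint (region taken from the
# bucket's endpoint as shown in the Backblaze console)
s3fs my-b2-bucket /mnt/b2 \
    -o url=https://s3.us-west-001.backblazeb2.com \
    -o passwd_file="${HOME}/.passwd-s3fs-b2"
```

The `url` and `passwd_file` options are the same ones the Non-Amazon-S3 wiki page describes for other S3-compatible providers.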


@gaul commented on GitHub (May 21, 2020):

Backblaze supports parts of the S3 API but their implementation of MPU copy part has some issues. I reported this to their team but have not heard back yet. I tested via:

```
$ S3_URL=https://s3.us-west-001.backblazeb2.com S3FS_CREDENTIALS_FILE=/path/to/.passwd make check -C test
$ cat test/test-suite.log
...
FAIL: test_mv_file
FAIL: test_list
FAIL: test_remove_nonempty_directory
FAIL: test_external_modification
FAIL: test_read_external_object
FAIL: test_multipart_copy
FAIL: test_multipart_mix
FAIL: test_extended_attributes
FAIL: test_content_type
FAIL: test_cache_file_stat
SUMMARY for ./integration-test-main.sh: 26 tests passed.  10 tests failed.
```

Note that some of these tests failed due to litter from previous test failures.


@gaul commented on GitHub (May 23, 2020):

Most of these tests failed due to an issue in the test configuration. However, `test_multipart_mix` still fails due to a Backblaze S3 incompatibility. Adding the flag `-o nomixupload` works around this. I also added a section for Backblaze here:

https://github.com/s3fs-fuse/s3fs-fuse/wiki/Non-Amazon-S3#backblaze
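In practice the workaround is just one extra mount option. A sketch, with placeholder bucket name, endpoint, and credentials path:

```shell
s3fs my-b2-bucket /mnt/b2 \
    -o url=https://s3.us-west-001.backblazeb2.com \
    -o passwd_file="${HOME}/.passwd-s3fs-b2" \
    -o nomixupload
```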


@JordanMagnuson commented on GitHub (May 23, 2020):

Thanks for your work on this, gaul! What does the nomixupload flag actually do? At first I thought it was no multi-part upload, but I see that that is a different flag.


@gaul commented on GitHub (May 24, 2020):

With nomixupload, a small change to a file requires s3fs to upload (and possibly download) the entire object. With the mixupload optimization, s3fs can use multipart copy to avoid uploading most of the bytes. This is likely an oversight in the Backblaze implementation and I sent further debugging information to their team.
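To illustrate the optimization (this is not s3fs's actual code; the function name, part size, and byte ranges are made up for the sketch): with mixupload, only the parts overlapping the modified byte range must be uploaded, while every other part can be reused server-side via multipart copy.

```shell
# plan_mix_upload OBJECT_SIZE DIRTY_START DIRTY_END PART_SIZE
# Prints which 1-based part numbers can be server-side copied unchanged,
# and which overlap the modified byte range [DIRTY_START, DIRTY_END)
# and therefore must be re-uploaded.
plan_mix_upload() {
  local object_size=$1 dirty_start=$2 dirty_end=$3 part_size=$4
  local copy="" upload="" i=0 start end
  while [ $((i * part_size)) -lt "$object_size" ]; do
    start=$((i * part_size))
    end=$((start + part_size))
    [ "$end" -gt "$object_size" ] && end=$object_size
    if [ "$start" -lt "$dirty_end" ] && [ "$dirty_start" -lt "$end" ]; then
      upload="$upload $((i + 1))"   # part intersects the edit: re-upload
    else
      copy="$copy $((i + 1))"       # part untouched: multipart copy
    fi
    i=$((i + 1))
  done
  echo "copy:$copy upload:$upload"
}
```

For example, a 10-byte edit in the middle of a 100-byte object split into 10-byte parts only forces two parts to be re-uploaded; with nomixupload, all ten would be.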


@malaysf commented on GitHub (May 29, 2020):

Hello, I'm a developer at Backblaze. Thank you for testing s3fs with our S3 compatible API and for letting us know about this issue. I looked into the test failures and they are caused by a known limitation described under the "Successive Calls" section in the S3 compatibility guide (https://www.backblaze.com/b2/docs/s3_compatible_api.html):

> When uploading multiple versions of the same file within the same second, the possibility exists that the processing of these versions may not be in order. Backblaze recommends delaying uploads of multiple versions of the same file by at least one second to avoid this situation.
>
> Similarly, when hiding a file within the same second as uploading that file, it is possible that the file may not actually be hidden. To avoid such a situation, please delay such calls on the same file by at least one second.

I forced a delay in the calls locally and the tests succeeded. This is on our roadmap to address but I don't have an estimate on when this work will be completed.
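In client terms, the workaround amounts to spacing writes to the same key by at least a second. A sketch with the AWS CLI (the endpoint, bucket, and file names are placeholders, not values from this thread):

```shell
aws --endpoint-url=https://s3.us-west-001.backblazeb2.com \
    s3 cp v1.txt s3://my-b2-bucket/notes.txt
sleep 1   # B2: successive versions of the same key should be >= 1s apart
aws --endpoint-url=https://s3.us-west-001.backblazeb2.com \
    s3 cp v2.txt s3://my-b2-bucket/notes.txt
```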


@gaul commented on GitHub (May 30, 2020):

Thanks for your investigation! Fixing #1013 would work around some but not all of these symptoms but this is a subtle change. Can you respond here when Backblaze fixes the semantics of overwrites?


@malaysf commented on GitHub (May 30, 2020):

Yes, I will post back here when this is fixed.


@HaleTom commented on GitHub (Feb 13, 2022):

@malaysf have there been any updates on this? I'd love to use this with my B2 account :)
