Backblaze B2
B2 is Backblaze's cloud storage system.
Paths are specified as `remote:bucket` (or `remote:` for the `lsd` command). You may put subdirectories in too, e.g. `remote:bucket/path/to/dir`.
Here is an example of making a b2 configuration. First run `rclone config`.
This will guide you through an interactive setup process. To authenticate you will either need your Account ID (a short hex number) and Master Application Key (a long hex number) OR an Application Key, which is the recommended method. See below for further details on generating and using an Application Key.
This remote is called `remote` and can now be used like this:
See all buckets
Create a new bucket
List the contents of a bucket
Sync `/home/local/directory` to the remote bucket, deleting any excess files in the bucket.
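A minimal sketch of these operations, assuming the remote is named `remote` and using a hypothetical bucket name:

```sh
rclone lsd remote:                                # see all buckets
rclone mkdir remote:bucket                        # create a new bucket
rclone ls remote:bucket                           # list the contents of a bucket
rclone sync /home/local/directory remote:bucket   # sync, deleting any excess files in the bucket
```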
Application Keys
B2 supports multiple Application Keys for different access permissions to B2 Buckets.
You can use these with rclone too; you will need to use rclone version 1.43 or later.
Follow Backblaze's docs to create an Application Key with the required permission and add the `applicationKeyId` as the `account` and the Application Key itself as the `key`.
Note that you must put the applicationKeyId as the `account` – you can't use the master Account ID. If you try then B2 will return 401 errors.
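For example, a sketch of creating the remote non-interactively with an Application Key (the remote name and the angle-bracket values are placeholders):

```sh
# Use the applicationKeyId as "account" and the Application Key itself as "key".
rclone config create remote b2 account <applicationKeyId> key <applicationKey>
```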
--fast-list
This remote supports `--fast-list` which allows you to use fewer transactions in exchange for more memory. See the rclone docs for more details.
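For example, a listing-heavy command can be run with the flag like this (the bucket name is hypothetical):

```sh
rclone size --fast-list remote:bucket
```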
Modified time
The modified time is stored as metadata on the object as `X-Bz-Info-src_last_modified_millis` as milliseconds since 1970-01-01 in the Backblaze standard. Other tools should be able to use this as a modified time.
Modified times are used in syncing and are fully supported. Note that if a modification time needs to be updated on an object then it will create a new version of the object.
Restricted filename characters
In addition to the default restricted characters set the following characters are also replaced:

| Character | Value | Replacement |
|---|---|---|
| \ | 0x5C | ＼ |
Invalid UTF-8 bytes will also be replaced, as they can't be used in JSON strings.
Note that in 2020-05 Backblaze started allowing \ characters in file names. Rclone hasn't changed its encoding as this could cause syncs to re-transfer files. If you want rclone not to replace \ then see the `--b2-encoding` flag below and remove the `BackSlash` from the string. This can be set in the config.
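A sketch of doing that, either per command or persisted in the config (the remote name is a placeholder; the value is the default encoding string minus `BackSlash`):

```sh
# Per command
rclone copy --b2-encoding "Slash,Del,Ctl,InvalidUtf8,Dot" /home/local/directory remote:bucket

# Or store it in the config for the remote
rclone config update remote encoding "Slash,Del,Ctl,InvalidUtf8,Dot"
```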
SHA1 checksums
The SHA1 checksums of the files are checked on upload and download and will be used in the syncing process.
Large files (bigger than the limit in `--b2-upload-cutoff`) which are uploaded in chunks will store their SHA1 on the object as `X-Bz-Info-large_file_sha1` as recommended by Backblaze.
For a large file to be uploaded with an SHA1 checksum, the source needs to support SHA1 checksums. The local disk supports SHA1 checksums so large file transfers from local disk will have an SHA1. See the overview for exactly which remotes support SHA1.
Sources which don't support SHA1, in particular `crypt`, will upload large files without SHA1 checksums. This may be fixed in the future (see #1767).
File sizes below `--b2-upload-cutoff` will always have an SHA1 regardless of the source.
Transfers
Backblaze recommends that you do lots of transfers simultaneously for maximum speed. In tests from my SSD equipped laptop the optimum setting is about `--transfers 32` though higher numbers may be used for a slight speed improvement. The optimum number for you may vary depending on your hardware, how big the files are, how much you want to load your computer, etc. The default of `--transfers 4` is definitely too low for Backblaze B2 though.
Note that uploading big files (bigger than 200 MB by default) will use a 96 MB RAM buffer by default. There can be at most `--transfers` of these in use at any moment, so this sets the upper limit on the memory used.
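For example, a sketch of a bulk upload with more parallelism (the local path and bucket name are hypothetical):

```sh
rclone copy --transfers 32 /home/local/directory remote:bucket/path
```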
Versions
When rclone uploads a new version of a file it creates a new version of it. Likewise when you delete a file, the old version will be marked hidden and still be available. Conversely, you may opt in to a 'hard delete' of files with the `--b2-hard-delete` flag which would permanently remove the file instead of hiding it.
Old versions of files, where available, are visible using the `--b2-versions` flag.
NB Note that `--b2-versions` does not work with crypt at the moment #1627. Using `--backup-dir` with rclone is the recommended way of working around this.
If you wish to remove all the old versions then you can use the `rclone cleanup remote:bucket` command which will delete all the old versions of files, leaving the current ones intact. You can also supply a path and only old versions under that path will be deleted, e.g. `rclone cleanup remote:bucket/path/to/stuff`.
Note that `cleanup` will remove partially uploaded files from the bucket if they are more than a day old.
When you `purge` a bucket, the current and the old versions will be deleted then the bucket will be deleted.
However `delete` will cause the current versions of the files to become hidden old versions.
Here is a session showing the listing and retrieval of an old version followed by a `cleanup` of the old versions.
Show current version and all the versions with the `--b2-versions` flag.
Retrieve an old version
Clean up all the old versions and show that they've gone.
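A sketch of such a session, using hypothetical bucket and file names; old versions appear with a timestamp suffix before the extension:

```sh
rclone ls remote:cleanup-test
#        9 one.txt
rclone ls --b2-versions remote:cleanup-test
#        9 one.txt
#        8 one-v2021-07-04-141032-000.txt
#       16 one-v2021-06-15-133004-000.txt

# Retrieve an old version
rclone copy --b2-versions remote:cleanup-test/one-v2021-07-04-141032-000.txt /tmp

# Clean up all the old versions and show that they've gone
rclone cleanup remote:cleanup-test
rclone ls --b2-versions remote:cleanup-test
#        9 one.txt
```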
Data usage
It is useful to know how many requests are sent to the server in different scenarios.
All copy commands send the following 4 requests:
The `b2_list_file_names` request will be sent once for every 1k files in the remote path, providing the checksum and modification time of the listed files. As of version 1.33 issue #818 causes extra requests to be sent when using B2 with Crypt. When a copy operation does not require any files to be uploaded, no more requests will be sent.
Uploading files that do not require chunking will send 2 requests per file upload:
Uploading files requiring chunking will send 2 requests (one each to start and finish the upload) and another 2 requests for each chunk:
Versions
Versions can be viewed with the `--b2-versions` flag. When it is set rclone will show and act on older versions of files. For example:
Listing without `--b2-versions` and then with it:
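A hypothetical example (file names and sizes are illustrative):

```sh
rclone ls remote:cleanup-test
#        9 one.txt

rclone ls --b2-versions remote:cleanup-test
#        9 one.txt
#        8 one-v2021-07-04-141032-000.txt
#       16 one-v2021-06-15-133004-000.txt
```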
Showing that the current version is unchanged but older versions can be seen. These have the UTC date that they were uploaded to the server to the nearest millisecond appended to them.
Note that when using `--b2-versions` no file write operations are permitted, so you can't upload files or delete them.
B2 and rclone link
Rclone supports generating file share links for private B2 buckets. They can either be for a file, for example:
or if run on a directory you will get:
you can then use the authorization token (the part of the url from the `?Authorization=` on) on any file path under that directory. For example:
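A sketch of the commands and the kind of URLs they return (the remote, bucket, hostname, and token values are all hypothetical):

```sh
rclone link remote:bucket/path/to/file.txt
# https://f002.backblazeb2.com/file/bucket/path/to/file.txt?Authorization=xxxxxxxx

rclone link remote:bucket/path
# https://f002.backblazeb2.com/file/bucket/path?Authorization=xxxxxxxx

# The token can then be used on any file path under that directory:
# https://f002.backblazeb2.com/file/bucket/path/to/file1.txt?Authorization=xxxxxxxx
```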
Standard Options
Here are the standard options specific to b2 (Backblaze B2).
--b2-account
Account ID or Application Key ID
- Config: account
- Env Var: RCLONE_B2_ACCOUNT
- Type: string
- Default: ""
--b2-key
Application Key
- Config: key
- Env Var: RCLONE_B2_KEY
- Type: string
- Default: ""
--b2-hard-delete
Permanently delete files on remote removal, otherwise hide files.
- Config: hard_delete
- Env Var: RCLONE_B2_HARD_DELETE
- Type: bool
- Default: false
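Each option can be supplied either as a flag or via its environment variable; a sketch (the bucket and path are hypothetical):

```sh
# Flag form
rclone delete --b2-hard-delete remote:bucket/old-files

# Environment variable form
RCLONE_B2_HARD_DELETE=true rclone delete remote:bucket/old-files
```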
Advanced Options
Here are the advanced options specific to b2 (Backblaze B2).
--b2-endpoint
Endpoint for the service. Leave blank normally.
- Config: endpoint
- Env Var: RCLONE_B2_ENDPOINT
- Type: string
- Default: ""
--b2-test-mode
A flag string for X-Bz-Test-Mode header for debugging.
This is for debugging purposes only. Setting it to one of the strings below will cause b2 to return specific errors:
- 'fail_some_uploads'
- 'expire_some_account_authorization_tokens'
- 'force_cap_exceeded'
These will be set in the 'X-Bz-Test-Mode' header which is documented in the b2 integrations checklist.
- Config: test_mode
- Env Var: RCLONE_B2_TEST_MODE
- Type: string
- Default: ""
--b2-versions
Include old versions in directory listings. Note that when using this no file write operations are permitted, so you can't upload files or delete them.
- Config: versions
- Env Var: RCLONE_B2_VERSIONS
- Type: bool
- Default: false
--b2-upload-cutoff
Cutoff for switching to chunked upload.
Files above this size will be uploaded in chunks of '--b2-chunk-size'.
This value should be set no larger than 4.657 GiB (5 GB).
- Config: upload_cutoff
- Env Var: RCLONE_B2_UPLOAD_CUTOFF
- Type: SizeSuffix
- Default: 200M
--b2-copy-cutoff
Cutoff for switching to multipart copy.
Any files larger than this that need to be server-side copied will be copied in chunks of this size.
The minimum is 0 and the maximum is 4.6 GB.
- Config: copy_cutoff
- Env Var: RCLONE_B2_COPY_CUTOFF
- Type: SizeSuffix
- Default: 4G
--b2-chunk-size
Upload chunk size. Must fit in memory.
When uploading large files, chunk the file into this size. Note that these chunks are buffered in memory and there might be a maximum of '--transfers' chunks in progress at once. 5,000,000 bytes is the minimum size.
- Config: chunk_size
- Env Var: RCLONE_B2_CHUNK_SIZE
- Type: SizeSuffix
- Default: 96M
--b2-disable-checksum
Disable checksums for large (> upload cutoff) files.
Normally rclone will calculate the SHA1 checksum of the input before uploading it so it can add it to metadata on the object. This is great for data integrity checking but can cause long delays for large files to start uploading.
- Config: disable_checksum
- Env Var: RCLONE_B2_DISABLE_CHECKSUM
- Type: bool
- Default: false
--b2-download-url
Custom endpoint for downloads.
This is usually set to a Cloudflare CDN URL as Backblaze offers free egress for data downloaded through the Cloudflare network. Rclone works with private buckets by sending an 'Authorization' header. If the custom endpoint rewrites the requests for authentication, e.g., in Cloudflare Workers, this header needs to be handled properly. Leave blank if you want to use the endpoint provided by Backblaze.
- Config: download_url
- Env Var: RCLONE_B2_DOWNLOAD_URL
- Type: string
- Default: ""
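A sketch of pointing downloads at a custom domain fronted by Cloudflare (the domain, bucket, and paths are hypothetical):

```sh
rclone copy --b2-download-url https://b2.example.com remote:bucket/path /home/local/directory
```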
--b2-download-auth-duration
Time before the authorization token will expire in s or suffix ms|s|m|h|d.
The duration before the download authorization token will expire. The minimum value is 1 second. The maximum value is one week.
- Config: download_auth_duration
- Env Var: RCLONE_B2_DOWNLOAD_AUTH_DURATION
- Type: Duration
- Default: 1w
--b2-memory-pool-flush-time
How often internal memory buffer pools will be flushed. Uploads which require additional buffers (e.g. multipart) will use the memory pool for allocations. This option controls how often unused buffers will be removed from the pool.
- Config: memory_pool_flush_time
- Env Var: RCLONE_B2_MEMORY_POOL_FLUSH_TIME
- Type: Duration
- Default: 1m0s
--b2-memory-pool-use-mmap
Whether to use mmap buffers in internal memory pool.
- Config: memory_pool_use_mmap
- Env Var: RCLONE_B2_MEMORY_POOL_USE_MMAP
- Type: bool
- Default: false
--b2-encoding
This sets the encoding for the backend.
See: the encoding section in the overview for more info.
- Config: encoding
- Env Var: RCLONE_B2_ENCODING
- Type: MultiEncoder
- Default: Slash,BackSlash,Del,Ctl,InvalidUtf8,Dot
Limitations
`rclone about` is not supported by the B2 backend. Backends without this capability cannot determine free space for an rclone mount or use policy `mfs` (most free space) as a member of an rclone union remote.
See the list of backends that do not support rclone about and see rclone about.
Backblaze B2 Cloud Storage (B2) public and private buckets can be used as origins with Fastly.
TIP: Backblaze offers an integration discount that eliminates egress costs to Fastly when using Backblaze B2 Cloud Storage as an origin. In addition, Backblaze also offers a migration program designed to offset many of the data transfer costs associated with switching from another cloud provider to Backblaze. To ensure your migration has minimal downtime, contact support@fastly.com.
Before you begin
Before you begin the setup and configuration steps required to use B2 as an origin, keep in mind the following:
- You must have a valid Backblaze account. Before you can create a new bucket and upload files to it for Fastly to use, you must first create a Backblaze account at the Backblaze website.
- Backblaze provides two ways to set up and configure B2. B2 can be set up and configured using either the Backblaze web interface or the B2 command line tool. Either creation method works for public buckets. To use private buckets, however, you must use the B2 command line tool. For additional details, including instructions on how to install the command line tool, read Backblaze's B2 documentation.
- Backblaze provides two APIs for integrating with Backblaze B2 Cloud Storage. You can use the B2 Cloud Storage API or the S3 Compatible API to make your B2 data buckets available through Fastly. The S3 Compatible API allows existing S3 integrations and SDKs to integrate with B2. Buckets and their specific application keys created prior to May 4th, 2020, however, cannot be used with the S3 Compatible API. For more information, read Backblaze's article on Getting Started with the S3 Compatible API.
Using Backblaze B2 as an origin
To use B2 as an origin, follow the steps below.
Creating a new bucket
Data in B2 is stored in buckets. Follow these steps to create a new bucket via the B2 web interface.
TIP: The Backblaze Guide provides details on how to create a bucket using the command line tool.
- Log in to your Backblaze account. Your Backblaze account settings page appears.
- Click the Buckets link. The B2 Cloud Storage Buckets page appears.
- Click the Create a Bucket link. The Create a Bucket window appears.
- In the Bucket Unique Name field, enter a unique bucket name. Each bucket name must be at least 6 alphanumeric characters and can only use hyphens (`-`) as separators, not spaces.
- Click the Create a Bucket button. The new bucket appears in the list of buckets on the B2 Cloud Storage Buckets page.
- Upload a file to the new bucket you just created.
NOTE: Buckets created prior to May 4th, 2020 cannot be used with the S3 Compatible API. If you do not have any S3 Compatible buckets, Backblaze recommends creating a new bucket.
Uploading files to a new bucket
Once you've created a new bucket in which to store your data, follow these steps to upload files to it via the B2 web interface.
TIP: The Backblaze Guide provides details on how to upload files using the command line tool.
- Click the Buckets link in the B2 web interface. The B2 Cloud Storage Buckets page appears.
- Find the bucket details for the bucket you just created.
- Click the Upload/Download button. The Browse Files page appears.
- Click the Upload button. The upload window appears.
- Either drag and drop any file into the window or click to use the file selection tools to find a file to be uploaded. The name and type of file at this stage doesn't matter. Any file will work. Once uploaded, the name of the file appears in the list of files for the bucket.
- Find your bucket's assigned hostname so you can set up a Fastly service that interacts with B2.
Finding your bucket's assigned hostname
To set up a Fastly service that interacts with your B2, you will need to know the hostname Backblaze assigned to the bucket you created and uploaded files to.
Find your hostname in one of the following ways:
- Via the B2 web interface when you're using the standard B2 Cloud Storage API. Click the name of the file you just uploaded and examine the Friendly URL and Native URL fields in the Details window that appears. The hostname is the text after the `https://` designator in each line that matches exactly.
- Via the command line and the B2 Cloud Storage API. Run the `b2 get-account-info` command on the command line and use the hostname from the `downloadUrl` attribute (see the example after this list).
- Via the B2 web interface when you're using the S3 Compatible API. Click the Buckets link and find the bucket details for the bucket you just created. The hostname is the text in the Endpoint field.
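A sketch of the command-line route, assuming the b2 tool is installed and authorized; the values shown in the output are placeholders:

```sh
b2 authorize-account <keyID> <applicationKey>
b2 get-account-info
# {
#     "accountId": "000000000000",
#     "apiUrl": "https://api000.backblazeb2.com",
#     "downloadUrl": "https://f000.backblazeb2.com",
#     ...
# }
# Here the bucket's assigned hostname would be f000.backblazeb2.com.
```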
Creating a Backblaze application key for private buckets
Your Backblaze master application key controls access to all buckets and files on your Backblaze account. If you plan to use a Backblaze B2 private bucket with Fastly, you should create an application key specific to the bucket.
NOTE: The Backblaze documentation provides more information about application keys. When creating application keys for your private bucket, we recommend using the least amount of privileges possible. You can optionally set the key to expire after a certain number of seconds (up to a maximum of 1000 days or 86,400,000 seconds). If you choose an expiration, however, you'll need to periodically create a new application key and then update your Fastly configuration accordingly each time.
Via the web interface
To create an application key via the B2 web interface:
- Click the App Keys link. The Application Keys page appears.
- Click the Add a New Application Key button. The Add Application Key window appears.
- Fill out the fields of the Add Application Key controls as follows:
  - In the Name of Key field, enter the name of your private bucket key. Key names are alphanumeric and can only use hyphens (`-`) as separators, not spaces.
  - From the Allow access to Bucket(s) menu, select the name of your private bucket.
  - From the Type of Access controls, select Read Only.
  - Leave the remaining optional controls and fields blank.
- Click the Create New Key button. A success message and your new application key appear.
- Immediately note the keyID and the applicationKey from the success message. You'll use this information when you implement header-based authentication with private objects.
Via the command line
To create an application key from the command line, run the `create-key` command as follows:
where `<bucketName>` and `<keyName>` represent the name of the bucket and key you created. For example:
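A sketch of the command; the capability list is illustrative (grant only the least privileges you need) and the bucket name, key name, and output values are placeholders:

```sh
# General form
b2 create-key --bucket <bucketName> <keyName> listBuckets,listFiles,readFiles

# For example, for a bucket named "my-private-bucket" and a key named "fastly-read-key":
b2 create-key --bucket my-private-bucket fastly-read-key listBuckets,listFiles,readFiles
# 000abc123def4560000000003 K000ZyXwVuTsRqPoNmLkJiHgFe
```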
The keyID and the applicationKey are the two values returned.
NOTE: Application keys created prior to May 4th, 2020 cannot be used with the S3 Compatible API.
Creating a new service
To create a new Fastly service, you must first create a new domain and then create a new Host and edit it to accept traffic for B2. Instructions to do this appear in our guide to creating a new service. While completing these instructions, keep the following in mind:
- When you create the new Host, enter the B2 bucket's hostname in the Hosts field on the Origins page.
- When you edit the Host details on the Edit this host page, confirm the Transport Layer Security (TLS) area information for your Host. Specifically, make sure you:
- secure the connection between Fastly and your origin.
- enter your bucket's hostname in the Certificate hostname field.
- select the checkbox to match the SNI hostname to the Certificate hostname (it appears under the SNI hostname field).
- Also when you edit the Host, optionally enable shielding by choosing the appropriate shielding location from the Shielding menu. When using B2 Cloud Storage, this means you must choose a shielding location closest to the most appropriate Backblaze data center. For the data centers closest to:
- Sacramento, California (in the US West region), choose San Jose (SJC) from the Shielding menu.
- Phoenix, Arizona (in the US West region), choose Palo Alto (PAO) from the Shielding menu.
- Amsterdam, Netherlands (in the EU central region), choose Amsterdam (AMS) from the Shielding menu.
- Decide whether or not you should specify an override Host in the Advanced options area:
- If you're using the S3 Compatible API, skip this step and don't specify an override Host.
- If you're not using the S3 Compatible API, in the Override host field in the Advanced options, enter an appropriate address for your Host (e.g., `s3.us-west-000.backblazeb2.com` or `f000.backblazeb2.com`).
Using the S3 Compatible API
Using the S3 Compatible API with public objects
To use the S3 Compatible API with public objects, you will need to make sure the `Host` header contains the name of your B2 bucket. There are two ways to do this, both of which require you to get your region name, which will be the 2nd part of your S3 Endpoint. So if your S3 Endpoint is `s3.us-west-000.backblazeb2.com`, this means your region will be `us-west-000`.
- In the Origin you created, set the Override host field in the Advanced options to `<bucket>.s3.<region>.backblazeb2.com` (e.g., `testing.s3.us-west-000.backblazeb2.com`).
- Create a VCL Snippet. When you create the snippet, select within subroutine to specify its placement and choose miss as the subroutine type. Then, populate the VCL field with code along the lines of the sketch below. Be sure to change specific values as noted to ones relevant to your own B2 bucket - in this case `var.b2Bucket` would be `"testing"` and `var.b2Region` would be `"us-west-000"`.
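A minimal sketch of such a snippet, assuming the hypothetical bucket `testing` in region `us-west-000`:

```vcl
# Sketch only: substitute your own bucket name and region.
declare local var.b2Bucket STRING;
declare local var.b2Region STRING;

set var.b2Bucket = "testing";
set var.b2Region = "us-west-000";

# Point the backend request's Host header at the bucket's S3-compatible hostname.
set bereq.http.host = var.b2Bucket + ".s3." + var.b2Region + ".backblazeb2.com";
```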
Using the S3 Compatible API with private objects
To use a Backblaze B2 private bucket with Fastly, you must implement version 4 of Amazon’s header-based authentication. You can do this using custom VCL.
Start by obtaining the following information from Backblaze (see Creating a Backblaze application key for private buckets):
| Item | Description |
|---|---|
| Bucket name | The name of your Backblaze B2 bucket. When you download items from your bucket, this is the string listed in the URL path or hostname of each object. |
| Region | The Backblaze region code of the location where your bucket resides (e.g., `us-west-000`). |
| Access key | The Backblaze keyID for the App Key that has at least read permission on the bucket. |
| Secret key | The Backblaze applicationKey paired with the access key above. |
Once you have this information, you can configure your Fastly service to authenticate against your B2 bucket using header authentication by calculating the appropriate header value in VCL.
Start by creating a regular VCL snippet. Give it a meaningful name, such as `AWS protected origin`. When you create the snippet, select within subroutine to specify its placement and choose miss as the subroutine type. Then, populate the VCL field with the signature-calculation code (be sure to change specific values as noted to ones relevant to your own B2 bucket).
Using the B2 API
Public Objects
You'll need to make sure the URL contains your bucket name. There are two ways to do this.
- Using a Header object:
- Click the Create header button again to create another new header. The Create a header page appears.
- Fill out the Create a header fields as follows:
  - In the Name field, type `Rewrite B2 URL`.
  - From the Type menu, select Request, and from the Action menu, select Set.
  - In the Destination field, type `url`.
  - From the Ignore if set menu, select No.
  - In the Priority field, type `20`.
- In the Source field, type `'/file/<Bucket Name>' req.url`.
- Click the Create button. The new header appears on the Content page.
- Click the Activate button to deploy your configuration changes.
Alternatively create a VCL Snippet. When you create the snippet, select within subroutine to specify its placement and choose miss as the subroutine type. Then, populate the VCL field with code along the lines of the sketch below. Be sure to change the variable to the name of your own B2 bucket.
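A minimal sketch of such a snippet (the bucket name is a placeholder):

```vcl
# Sketch only: substitute your own bucket name.
declare local var.b2BucketName STRING;

set var.b2BucketName = "<Bucket Name>";

# Prefix the backend request path with /file/<bucket>, as B2 download URLs expect.
set bereq.url = "/file/" + var.b2BucketName + bereq.url;
```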
Private Objects
To use a Backblaze B2 private bucket with Fastly, you must obtain an Authorization Token. This must be obtained via the command line.
- You'll now need to authorize the command line tool with the application key you obtained.
- You will now need to get an authorization token for the private bucket, e.g. as in the sketch after this list. This will create a token that is valid for 86,400 seconds (i.e., 1 day), the default. You can optionally change the expiration time to anywhere between 1 second and 604,800 seconds (i.e., 1 week).
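A sketch of those two steps with the b2 command line tool (all values are placeholders):

```sh
# Authorize the CLI with the application key created earlier
b2 authorize-account <keyID> <applicationKey>

# Generate a download authorization token for the bucket, valid for 1 day (86400 seconds)
b2 get-download-auth --duration 86400 <bucketName>
```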
Take note of the generated token.
NOTE: You will need to regenerate an authorization token and update your Fastly configuration before the end of the expiration time. A good way to do this would be through Fastly's versionless Edge Dictionaries.
Passing a generated token to Backblaze
There are two ways you can pass the generated token to Backblaze. The first is using an `Authorization` header. This is the recommended method.
- Click the Create header button again to create another new header. The Create a header page appears.
- Fill out the Create a header fields as follows:
  - In the Name field, enter `Authorization`.
  - From the Type menu, select Request, and from the Action menu, select Set.
  - In the Destination field, enter `http.Authorization`.
  - From the Ignore if set menu, select No.
  - In the Priority field, enter `20`.
- In the Source field, enter the Authorization Token generated in the command line tool, surrounded by quotes. For example, if the token generated was `DEC0DEC0C0A`, then the Source field would be `'DEC0DEC0C0A'`.
- Click the Create button. A new Authorization header appears on the Content page.
- Click the Activate button to deploy your configuration changes.
Alternatively, the second way is to pass an `Authorization` query parameter.
- Click the Create header button again to create another new header. The Create a header page appears.
- Fill out the Create a header fields as follows:
  - In the Name field, enter `Authorization`.
  - From the Type menu, select Request, and from the Action menu, select Set.
  - In the Destination field, enter `url`.
  - From the Ignore if set menu, select No.
  - In the Priority field, enter `20`.
- In the Source field, enter the header authorization information so that the token is appended to the request URL as a query parameter, using the format shown in the sketch after these steps.
- Click the Create button. A new Authorization header appears on the Content page.
- Click the Activate button to deploy your configuration changes.
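A sketch of the Source field value for the query-parameter approach, assuming the request URL carries no existing query string and reusing the earlier example token `DEC0DEC0C0A`:

```
req.url "?Authorization=DEC0DEC0C0A"
```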