Veeam Cloud Object Storage Deep Dive – Part Three, Benchmarks
Welcome to the final part of our three-part series on object storage and Veeam. In part one we looked at the similarities and differences between the big three “hyperscale” cloud providers, then in part two we looked at how Veeam can be configured to optimise your object storage utilisation and performance. Now in part three we’ll look at some benchmarks to provide real results to help shape your object storage designs, helping you accelerate your adoption of object storage in an effective manner.
Basics
To ensure this test was as fair as possible, I created a single VM, installed Windows Server 2019 and shut it down. I then created a backup job with the common settings detailed below and cloned the backup job three times. Within each of these clones I changed only the block size and the repository (all separate folders on an NTFS 4k partition), ensuring all other settings were identical.
The environment settings configured were:

- Local repositories were all configured to align backup file data blocks.
- Each repository was configured as a Scale-out Backup Repository (SOBR) with separate object storage; for my testing I created 4x storage accounts, each within a different resource group in Microsoft Azure. The SOBRs were configured for copy mode in each case.
- All cloud repositories were hot tier, LRS storage within the same region (UK South).
- Guest Processing was disabled, with the VM shut down to ensure the blocks were identical per backup.
- Compression Level was set to Optimal.
- Inline Data Deduplication, Exclude Swap File Blocks and Exclude Deleted File Blocks were all left at defaults (enabled).
- Jobs were configured as forward incremental.
- Job schedule was disabled, ensuring the VM data was identical for each job run.
Full and incremental runs
With the jobs configured as above, I ran the initial full backups and recorded the API calls generated from Microsoft Azure and the disk space consumed on the NTFS partition by the VBK files only. I then booted the VM, downloaded the latest Veeam ISO to its storage and shut the VM down once more. I then ran the incremental runs from each backup job. The job runs were completed once, and I collated the results.
Results
First, let’s look at the API requests:
Graph showing the disk space consumed for full and incremental job runs relative to the storage optimisation selected. See table below for a breakdown.
Block Type | Full Backup Size (MB) | Incremental Backup Size (MB) | Full Backup Size – Relative to Local | Incremental Backup Size – Relative to Local |
---|---|---|---|---|
Local Large | 5842 | 9445 | 99.73% | 103.43% |
Local | 5858 | 9132 | 100.00% | 100.00% |
LAN | 5862 | 9055 | 100.06% | 99.16% |
WAN | 5896 | 9081 | 100.65% | 99.44% |
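The “Relative to Local” columns can be reproduced from the raw sizes with a quick sketch (figures in MB copied from the table above; the dictionary keys are just my shorthand for the block types):

```python
# Derive the "Relative to Local" percentages: each result is expressed
# as a percentage of the Local (default block size) result.

full_mb = {"Local Large": 5842, "Local": 5858, "LAN": 5862, "WAN": 5896}
incr_mb = {"Local Large": 9445, "Local": 9132, "LAN": 9055, "WAN": 9081}

for block in full_mb:
    rel_full = 100 * full_mb[block] / full_mb["Local"]
    rel_incr = 100 * incr_mb[block] / incr_mb["Local"]
    print(f"{block:12s} full {rel_full:6.2f}%  incremental {rel_incr:6.2f}%")
```

Any tiny differences from the table (a hundredth of a percent or so) come down to rounding of the underlying MB figures.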
When running our full backup, the smaller block sizes actually gave us a slight storage consumption penalty on top of the massive increase in API calls. If you’re still wondering why we have LAN/WAN as target options given these results, it’s because we’ll see the benefit within the incremental backups. At this stage we see that Local Large goes from being our overall favourite to suffering its first loss, with an increase in storage of over 3%, whereas LAN and WAN start to be far more reserved with the storage they require for the incremental backups. Remember that for this example, the changes for the incremental backup were generated by copying files to the VM. The increase in storage used based on block size will vary according to the workload type.
Cost
Whilst it’s been intriguing to review all these numbers, what really counts is what this will cost you. So let’s have a look at the final result. As prices vary every month, here’s a snapshot of what the costs were at the time:
Storage Per GB: £0.015 per month
API Calls: £0.044 per 10k operations
Block Type | Total Storage Required (MB) | Storage Cost Per Month | Total API Calls Required | API Costs | Total Cost |
---|---|---|---|---|---|
Local Large | 15287 | £0.23 | 5270 | £0.02 | £0.25 |
Local | 14990 | £0.21 | 19590 | £0.09 | £0.30 |
LAN | 14917 | £0.21 | 38660 | £0.17 | £0.38 |
WAN | 14977 | £0.21 | 76140 | £0.33 | £0.54 |
Whilst the test sizes here are small, the ratios scale. Collating this data together gives us this table:
Block Type | API Calls Per MB |
---|---|
Local Large | 0.34 |
Local | 1.31 |
LAN | 2.59 |
WAN | 5.08 |
Veeam’s typical expectation varies from this and can be used in lieu of specific workload testing; if in doubt, refer to this table:
Block Type | API Calls Per MB (Average) |
---|---|
Local Large | 0.25 |
Local | 1 |
LAN | 2 |
WAN | 4 |
You can now calculate an approximate price for creating a backup by using these tables and the formula below:
```
API Calls = Backup Size (MB) * API Calls Per MB
            (0.25 for Local Large | 1 for Local | 2 for LAN | 4 for WAN)
```
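Putting the average figures and the snapshot prices together, a rough estimator might look like the sketch below. The prices, the 1024 MB-per-GB conversion and the function name are my own assumptions for illustration; substitute your region’s current rates.

```python
# Rough first-month cost estimator for offloading a backup to object
# storage, using the average API-calls-per-MB figures from the table
# above. Prices are example snapshot values, not current Azure rates.

API_CALLS_PER_MB = {
    "Local Large": 0.25,  # 4 MB blocks
    "Local": 1.0,         # 1 MB blocks
    "LAN": 2.0,           # 512 KB blocks
    "WAN": 4.0,           # 256 KB blocks
}

def estimate_cost(backup_size_mb: float, block_type: str,
                  price_per_gb_month: float = 0.015,  # GBP, example price
                  price_per_10k_ops: float = 0.044):  # GBP, example price
    """Return (api_calls, first_month_cost_gbp) for a backup of this size."""
    api_calls = backup_size_mb * API_CALLS_PER_MB[block_type]
    storage_cost = (backup_size_mb / 1024) * price_per_gb_month
    api_cost = (api_calls / 10_000) * price_per_10k_ops
    return api_calls, storage_cost + api_cost

calls, cost = estimate_cost(500_000, "Local")  # ~500 GB full backup
print(f"{calls:,.0f} API calls, ~£{cost:.2f} in the first month")
# → 500,000 API calls, ~£9.52 in the first month
```

Note that the API calls are a one-off cost at upload time, whereas the storage charge recurs every month the backup is retained.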
Conclusion
We’ve reached the end of part three, where we have seen these metrics in action. To summarise: in part one we compared the “Big Three” major hyperscale cloud providers and reviewed what made them similar and different; in part two we examined the settings available within Veeam to optimise our object storage utilisation and the different scenarios we should plan for when using object storage. Finally, in part three we created a benchmark against which we can set realistic expectations of the costs of the clouds.
As with all backup data sets and cloud use cases, your mileage may vary. This is a viewpoint from one deployment as shown, but it gives you a perspective on how cloud storage can be used for Veeam backups.
Thank you for reading, and please let us know if you have any questions or comments regarding this series.