The economics of cloud-based DR solutions
Disaster Recovery Cloud Economics
This blog will help answer some questions about the economics of cloud-based disaster recovery solutions. The first step is to determine the value of your systems and the impact a significant outage would have on your business. According to a recent study by the Ponemon Institute [1], the average cost of a data center outage has increased by more than 38% over the last 7 years.
| Year | Outage Cost per Incident |
| ---- | ------------------------ |
In the last blog, we discussed the many causes of outages, with facilities events being the most likely cause and cyber-crime-related outages seeing the largest year-over-year increase. The increased monetary impact on the business and the growing number of potential threats are the primary reasons disaster recovery projects are near the top of IT initiatives. However, disaster recovery solutions are sometimes seen as “insurance,” which usually limits a disaster recovery solution’s effectiveness, or kills the project before it even starts. The increased popularity of cloud-based disaster recovery as a service (DRaaS) solutions is prompting companies to take a fresh look at their DR strategies.
Disaster Recovery Location Decision
We often see clients trying to determine whether they should build their own disaster recovery solution or leverage a cloud-based DR solution. If build-your-own is a consideration, the decision becomes whether to use a co-location facility or an existing secondary site. If you have a datacenter-ready secondary site, then that is a viable option, but you must consider the corporation’s commitment to that site as well as the opportunity cost of that space for other business functions. Most companies do not have a datacenter-ready secondary site, and co-location space is inexpensive, so most use co-location. The financial analysis in this document compares co-location to a custom cloud.
Co-location is multiple entities operating from within the same physical location. Co-location facilities typically provide the environmentals (floor space, power, and cooling) and physical security for the server, storage, and networking equipment. The clients own and operate their equipment but leverage the economies of scale that a shared datacenter can provide. We see clients use co-location if they do not have a datacenter-ready secondary site or their secondary site is too close to survive a regional event.
A custom cloud leverages aspects of co-location, public cloud, and private cloud. The fundamental of a custom cloud is the ability to combine shared resource pools with dedicated assets into a total solution. The shared resource pools are infrastructure assets that host multiple clients’ data and/or applications. Sharing assets across multiple clients is sometimes referred to as multi-tenant resource pooling. As long as these assets are consistent with your current infrastructure, you can leverage them as part of your disaster recovery solution. Dedicated assets are ones that the cloud provider makes available to the client but does not offer as a shared service. Both the shared and the dedicated assets are rolled into a single monthly payment.
Custom Cloud Keys to Success
The key to a successful custom cloud deployment is the technical and operational integration of the shared and dedicated resources. Part of the engagement with your custom cloud provider is to map your solution set to their shared service offering. After the mapping exercise, you should have a clear picture of which cloud resources you can and cannot use and what the gap is for a total solution.
The next step is to resolve the gap by either conforming to the solution provider’s asset pool or determining the dedicated solution components. If a dedicated solution is needed, the next step is a detailed physical and logical interoperability analysis. If the interoperability analysis comes back clean, discussions can begin on the architecture and configuration requirements. Once there is agreement on a configuration, the next step is a dedicated-asset physical planning process to locate the asset in the cloud provider’s datacenter.
The other critical aspect of the custom cloud solution is the operational integration of dedicated and shared resources. The cloud provider may have the skills to manage dedicated assets, but sometimes they do not, and in that case dedicated-asset management is the client’s responsibility. Split administration of the target assets can and does work as long as there are trust, proper expectations, and open communication. If you clearly define roles and responsibilities, work collaboratively, communicate often, and trust your provider, the deployment will be successful.
- Database & Applications Servers – Virtual and Physical
- Primary Storage – Block
- Secondary Storage – File
- Backup Storage – High Capacity Disk
- LAN – Ethernet
- SAN – Fibre Channel
- WAN – MPLS
- Load / Voice – Load balancing & VoIP
- Software – Cold site (no cost)*
*Hot or Warm site could have potential costs
The server configuration is a combination of virtual and physical servers; primary storage is block, secondary storage is file, and backup storage is high-capacity disk. LAN/SAN/WAN use the usual connectivity protocols of Ethernet/Fibre Channel/MPLS, and load and voice refer to load balancing and VoIP considerations. The software cost depends on whether you are running a hot, warm, or cold site. In this particular example we are running a cold site, so there are no software cost considerations since the software will only be running at one location at a time. If you are architecting a hot or warm site, you need to engage SIS or your software vendors to determine any potential cost.
The environmentals for floor space, power, and cooling were determined after we finalized the disaster recovery target configuration. Additional resources were allocated to handle the operational workload of running, patching, fixing, and upgrading the target equipment, and the bandwidth was budgeted at two dedicated circuits.
Here is a representative analysis we performed for a client comparing co-location versus a custom cloud solution. The main cost criteria to evaluate are the items outlined in the first column, which include infrastructure and connectivity costs, software considerations, environmentals, and resources. Our recommendation is to look at cost over a 6-year timeframe, or 1 year beyond your hardware refresh cycle. The reason for the 6-year view is that it allows you to compare the relative cost of a hardware refresh against the 6th-year cloud cost.
The cost included the server hardware, virtualization software, block storage, file storage, high-capacity storage, backup software, connectivity, load balancing, and voice connectivity, all with 60 months of maintenance. Since this was a cold site, we did not have an application or database charge associated with this design. Co-location space is inexpensive, as reflected in the cost, and the resources were allocated as two fully burdened full-time equivalents. The bandwidth (circuit cost) was less expensive than cloud since the distance between the two locations was less than the distance to the cloud facility.
The first-year costs are high for the co-location solution since you are responsible for procuring hardware, software, and connectivity immediately. We shortened the table by removing years 2-5 from view to make it easier to read. Years 2 through 5 reflect the ongoing maintenance support costs for the target equipment, and the totals summarize years 1-5. In this example we have a 5-year refresh cycle, so in year 6 we would have to buy most of the infrastructure again, and the 5-year process would repeat itself.
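As a rough sketch, this co-location cost profile can be modeled as an up-front capital outlay in year 1, recurring annual costs, and a repurchase at the refresh cycle. All dollar figures below are purely illustrative placeholders, not the client figures discussed in this post:

```python
# Illustrative model of a co-location cost profile over a hardware refresh
# cycle. Every dollar figure here is a hypothetical placeholder.

def colo_costs(capex, annual_maint, colo_space, ftes, years=6, refresh_cycle=5):
    """Return per-year co-location costs: the capital outlay lands in year 1
    and again at each refresh; maintenance, floor space, and staff recur
    every year."""
    costs = []
    for year in range(1, years + 1):
        cost = annual_maint + colo_space + ftes
        if (year - 1) % refresh_cycle == 0:
            cost += capex  # hardware/software repurchase at refresh
        costs.append(cost)
    return costs

yearly = colo_costs(capex=900_000, annual_maint=120_000,
                    colo_space=60_000, ftes=300_000)
print(yearly)       # years 1 and 6 carry the capital outlay
print(sum(yearly))  # 6-year total
```

Note how the year-6 cost jumps back to the year-1 level: that spike is the refresh-cycle repurchase the paragraph above describes.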
Custom Cloud Cost
The cost included a combination of pooled resources and dedicated assets. All of the server costs came from a pool of resources, and the storage costs were a combination of pooled and dedicated assets. LAN/SAN were a mix of shared and dedicated assets, with WAN, load balancing, and voice all dedicated assets.
Environmentals and FTE costs are bundled into the cost of the custom cloud service; we do show a small start-up fee in the line-item cost. The bandwidth from this client to the cloud was more expensive than in the co-location option since the distance was greater.
The cloud costs are the rolled-up monthly costs for all of the asset classes per year over the 5-year period. The cloud cost for year 6 is a continuation of the service with no initial investment or startup cost, which is why it is more in line with the year-2 cloud cost.
For this client, the 5-year total savings was $1.5 million and the sixth-year savings was $2.4 million. Even though floor space and circuit costs were cheaper, the co-location financials were considerably more expensive than the custom cloud option. One of the biggest financial hurdles facing a co-location solution is the first-year hardware and software capital outlay as well as the ongoing maintenance cost.
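The comparison above boils down to a flat, subscription-style cloud fee versus a co-location profile with a heavy year-1 outlay and a year-6 refresh. A minimal sketch of that cumulative-savings calculation, again with hypothetical figures rather than the client's actual numbers:

```python
# Illustrative cumulative-savings comparison: a flat monthly custom-cloud
# fee (plus a small one-time start-up fee) versus a co-location cost
# stream with capital spikes. All figures are hypothetical.

def cumulative_savings(colo_by_year, cloud_monthly, startup_fee=50_000):
    """Cumulative (co-location minus cloud) cost difference at the end of
    each year; a positive number means the cloud option is cheaper to date."""
    cloud_annual = cloud_monthly * 12
    savings, colo_total, cloud_total = [], 0, startup_fee
    for colo_cost in colo_by_year:
        colo_total += colo_cost
        cloud_total += cloud_annual
        savings.append(colo_total - cloud_total)
    return savings

# Co-lo profile: capital outlay in year 1, maintenance years 2-5, refresh in year 6
colo = [1_380_000, 480_000, 480_000, 480_000, 480_000, 1_380_000]
print(cumulative_savings(colo, cloud_monthly=35_000))
```

Even with made-up numbers, the shape matches the analysis: the savings gap widens modestly through the maintenance years, then jumps sharply in year 6 when the co-location refresh hits and the cloud fee simply continues.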
Another significant issue for co-location is the hardware/software refresh-cycle cost. After you pay for and depreciate your target assets, you have to buy everything again at the end of that cycle. The chart below summarizes some of the key decision criteria that went into the co-lo/cloud decision. Cloud was less expensive over the 5-year period as well as at the year-6 refresh cycle; it was also more financially granular as you added capacity.
Cloud included a pool of disaster recovery resources, which made it less expensive than the co-location FTE requirement. Co-location did fare better on control and operational flexibility since the dedicated assets are your own.
You have options for your disaster recovery target environment, so please give careful consideration to all solution options. Take into account all costs for both solutions, including the cost to refresh equipment after the depreciation cycle.
- [1] Ponemon Institute, Cost of Data Center Outages, January 2016; Data Center Performance Benchmark Series. Whitepaper available through Zerto.