Supporting the cloud with data availability best practices

The cloud may be reliable, but businesses still need to ensure availability if anything does go wrong, says Ian Wells at Veeam Software


This article was first published in the Winter 2014 issue of OnWindows

Gartner has predicted that cloud computing will comprise the bulk of new IT spend by 2016. This represents a sea change not only in how enterprises spend their money, but in what they expect from their IT infrastructure. Initial expectations of the cloud centred on flexibility, consistency and permanent availability: exactly the conditions that the modern, always-on business demands from 24/7 IT services.

In the coming years, we are sure to see more and more applications hosted in the cloud in an effort to improve efficiency and flexibility. However, this rapid growth also raises a crucial question for IT departments adopting IT-as-a-service: what happens if the cloud fails?

Dark clouds ahead
The cloud is not indestructible, and a number of problems can take a cloud service offline, whether for minutes or days. These can arise from human malice, acts of God, or simple human error when managing a complex web of data centre infrastructure. Security breaches and DDoS attacks, natural disasters, or even a mistake during a routine software update could all cause problems for cloud service providers. For the providers, this means lost revenue and embarrassment, particularly for those judged on their service level agreements. For the end user, it can mean a critical gap in IT service availability and missed opportunities.

From maintaining multiple back-ups of applications and data to ensuring that both back-up and recovery are as fast and consistent as possible, cloud providers have many ways to minimise downtime. Yet although providers try their hardest to prevent outages, there have already been a number of instances where cloud services have failed, stranding the organisations that rely upon them. Since complete infallibility can never be guaranteed, organisations using cloud services must take action internally to ensure they always have access to their critical applications.

If it ain’t broke
Moving IT to the cloud doesn’t change the basic principles of data availability that have served organisations so well to date. To ensure they have a failsafe, enterprises should follow the 3-2-1 rule: keep three copies of their data, on two different storage media, with at least one copy off site. That way, no single outage can ever affect every instance of an organisation’s data: whether on another medium or in another location, a backup can always be swiftly recovered, providing a huge amount of insurance for businesses. Ironically, as critical IT services move into the cloud, ‘off site’ increasingly means that these back-ups are stored on the enterprise’s own premises.
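As a rough illustration, the following Python sketch checks a backup inventory against the 3-2-1 rule. The inventory format, field names and the `satisfies_3_2_1` helper are hypothetical, invented for this example rather than drawn from any particular backup product:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    """One stored copy of a dataset; all field names are illustrative."""
    dataset: str
    medium: str       # e.g. "disk", "tape", "object-storage"
    location: str     # e.g. "on-premises", "cloud-region-1"
    off_site: bool    # stored away from the primary copy's site?

def satisfies_3_2_1(copies: list[BackupCopy]) -> bool:
    """True if the copies meet the 3-2-1 rule: at least three copies,
    on at least two different media, with at least one off site."""
    return (
        len(copies) >= 3
        and len({c.medium for c in copies}) >= 2
        and any(c.off_site for c in copies)
    )

# A cloud-hosted dataset whose 'off site' copy sits on the company's own premises.
inventory = [
    BackupCopy("crm-db", "object-storage", "cloud-region-1", off_site=False),  # primary
    BackupCopy("crm-db", "object-storage", "cloud-region-2", off_site=True),
    BackupCopy("crm-db", "disk", "on-premises", off_site=True),
]
print(satisfies_3_2_1(inventory))  # True
```

Note that in this example the on-premises disk copy is the ‘off site’ one, since the primary lives in the cloud.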

Since part of the cloud’s appeal is removing the need for in-house IT infrastructure and management, replicating the entire cloud-based IT estate in-house would be counterproductive. Instead, enterprises should take a tiered approach to protection, prioritising the most critical services and ensuring that these can be brought back online as close to immediately as possible. Regular testing, to confirm that any recovery will work as planned, is just as crucial to keeping the enterprise running regardless of what happens in the cloud. The good news is that advances in virtualisation and automation mean enterprises can back up, store, recover and test more servers every year, so an ever greater proportion of an organisation’s IT services can be treated as mission-critical and restored after even a near-total disaster.
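To make the tiering idea concrete, here is a minimal Python sketch. The tier names, services, recovery time objectives (RTOs) and test intervals are assumptions made up for illustration, not figures from the article or from any vendor:

```python
from dataclasses import dataclass

@dataclass
class ServiceTier:
    name: str
    services: list[str]      # services protected at this tier
    rto_minutes: int         # target time to bring services back online
    test_interval_days: int  # how often recovery should be rehearsed

# Illustrative tiers: tighter recovery targets are tested more often.
TIERS = [
    ServiceTier("mission-critical", ["payments", "auth"], rto_minutes=15, test_interval_days=7),
    ServiceTier("important", ["reporting", "crm"], rto_minutes=240, test_interval_days=30),
    ServiceTier("deferrable", ["archive-search"], rto_minutes=1440, test_interval_days=90),
]

def recovery_order() -> list[str]:
    """Restore services tier by tier, tightest recovery target first."""
    ordered = sorted(TIERS, key=lambda t: t.rto_minutes)
    return [svc for tier in ordered for svc in tier.services]

print(recovery_order())
# ['payments', 'auth', 'reporting', 'crm', 'archive-search']
```

In practice, the same priority ordering would drive both the recovery runbook and the schedule of recovery tests, so the most critical services are rehearsed most often.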

A consistent approach to data availability in the cloud
In terms of data availability, moving to the cloud does not mean reinventing the wheel. It means maintaining best practices so that a single problem cannot bring down IT services and cripple the business. Making sure all data is recoverable, keeping three copies on at least two different media with one off site, and being certain that any copy can be restored as quickly as possible will avoid disruption and meet the demands of the always-on, 24/7 business. By maintaining best practices in data availability, the failure of a cloud infrastructure won’t mean the end of the world.

Ian Wells is VP of North-West Europe at Veeam Software
