The cloud provides the compute power, scalability and security that so many enterprises are looking for as they expand their business. But data back-up and recovery may not be as simple as they seem.
The first mistake that so many organisations make is assuming that their data is already backed up by the provider. In reality, the provider is only responsible for the physical infrastructure – that is, the data centre, the servers and the network. The data itself falls under the responsibility of the customer: it’s up to them to protect and manage their data, and ultimately back it up.
The second mistake is deploying multiple point solutions as new technologies proliferate. Many organisations still have legacy environments on premises, from Microsoft and others. Add cloud services from multiple providers and locations into the mix and you create many different data silos. Data becomes highly fragmented across locations, which is difficult to manage and costly – you may be paying to store anywhere from four to 16 copies of the same data. Running many different point solutions is clearly inefficient, and it is what leads to mass data fragmentation.
The third mistake is relying too much on snapshots. Snapshots are point-in-time views of a system or its data. While they capture data from a specific moment in time, they are typically stored on a single piece of hardware – if that hardware fails, you lose your data. There are similar issues in the cloud. You can take a snapshot when you set up a virtual machine (VM) and store it in the cloud. This is very simple to do, but as you scale across multiple cloud accounts, you must manage each account’s snapshots separately. Over time, this too can become very difficult to manage.
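To make the management burden concrete, consider the retention logic that every account ends up needing. The sketch below is purely illustrative – the account names, snapshot IDs and 90-day window are assumptions, not any provider’s real API – but it shows why per-account snapshot inventories multiply the work:

```python
from datetime import datetime, timedelta

# Hypothetical inventory: each cloud account keeps its own snapshot list,
# so retention rules must be applied account by account.
snapshots = {
    "prod-account": [
        {"id": "snap-001", "created": datetime(2019, 1, 5)},
        {"id": "snap-002", "created": datetime(2019, 11, 20)},
    ],
    "dev-account": [
        {"id": "snap-101", "created": datetime(2019, 3, 1)},
    ],
}

def expired_snapshots(inventory, now, retention_days=90):
    """Return (account, snapshot id) pairs older than the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [
        (account, snap["id"])
        for account, snaps in inventory.items()
        for snap in snaps
        if snap["created"] < cutoff
    ]

stale = expired_snapshots(snapshots, now=datetime(2019, 12, 1))
```

With two accounts this is manageable; with dozens, each with its own credentials and inventory, the same loop has to run everywhere – which is the fragmentation problem in miniature.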
The fourth mistake companies make is focusing on data back-up and forgetting about recovery. In that situation, you only find out whether you can recover your data when things go wrong – at which point it’s often too late. Companies focus on back-up because the back-up window is finite – there are only 24 hours in a day – while the amount of data to be backed up in that time has grown exponentially and will continue to grow. But back-up is of little use unless you can find data quickly and recover it error-free and at the right granularity.
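What “error-free and at the right granularity” means can be sketched in a few lines. This toy example (file names and contents are invented, and a tar archive stands in for a real back-up store) restores a single file rather than the whole set, then verifies it against a checksum recorded at back-up time:

```python
import hashlib
import io
import tarfile

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Build a small in-memory "backup" archive (a stand-in for a real back-up store).
backup = io.BytesIO()
files = {"orders.csv": b"id,total\n1,9.99\n", "users.csv": b"id,name\n1,Ada\n"}
with tarfile.open(fileobj=backup, mode="w") as tar:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))

# Checksums recorded at back-up time make "error-free" verifiable at restore time.
checksums = {name: sha256(data) for name, data in files.items()}

def restore_file(archive: io.BytesIO, name: str) -> bytes:
    """Granular restore: pull one file out of the back-up, not the whole set."""
    archive.seek(0)
    with tarfile.open(fileobj=archive, mode="r") as tar:
        return tar.extractfile(name).read()

restored = restore_file(backup, "orders.csv")
assert sha256(restored) == checksums["orders.csv"]  # error-free recovery check
```

The point is not the mechanism but the discipline: a restore that is never checked against the original is only assumed to work.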
The fifth mistake companies make is not doing enough planning around potential disasters and how to recover from them. According to research from the Spiceworks technology community, 5% of enterprises don’t have a disaster recovery plan at all, and 29% of those that do have a plan have never tested it. They are just operating as normal and hoping that nothing bad happens. The reality is that planning and testing for disaster recovery can be a complex and manual process, so companies can do better with automation and orchestration tools.
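A minimal sketch of what such automation looks like, under the assumption of three invented drill steps (provision a standby environment, restore the latest back-up, verify the result): each step raises on failure and the orchestrator records the outcome, so a drill that has never actually run cannot silently count as a pass.

```python
# Illustrative disaster-recovery drill; the step bodies are stand-ins,
# not a real orchestration product's API.
def provision_standby():
    return "standby-env"         # stand-in for spinning up recovery infrastructure

def restore_data(env):
    return {"rows": 1000}        # stand-in for restoring the latest back-up

def verify(env, data):
    assert data["rows"] == 1000  # e.g. compare row counts against the source

def run_drill():
    """Run each drill step in order, recording success or the first failure."""
    results = {}
    try:
        env = provision_standby()
        results["provision"] = "ok"
        data = restore_data(env)
        results["restore"] = "ok"
        verify(env, data)
        results["verify"] = "ok"
    except Exception as exc:
        results["error"] = str(exc)
    return results

report = run_drill()
```

Scheduling this to run regularly, instead of once before an audit, is what turns a disaster recovery plan from a document into something tested.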
So, this is where Cohesity comes in. We back up the cloud data that enterprises wrongly assume is already backed up, by consolidating all the silos from hybrid cloud and multi-cloud environments onto a single platform and user interface. We unify snapshots in the cloud to make them easier to manage. We also allow enterprises to design the back-up and recovery procedures that are right for them. We can recover from anywhere because we are a global platform with capabilities for fast, granular and mass restoration, so you can recover whatever you need, whenever you need it and at the specified granularity. We also have data recovery testing covered: customers can use automated environments to orchestrate the whole data recovery process and ensure it works every time. With Cohesity taking care of data management operations, enterprises and developers have more time to spend on innovating and using that data to provide valuable insights into business operations.
Douglas Ko is the director of product marketing at Cohesity
This article was originally published in the Winter 2019 issue of The Record.