This article was originally published in the Autumn 2019 issue of The Record.
The cloud has evolved. Today’s architecture is dispersed, covers multiple geographies, has ‘soft edges’, is difficult to contain, runs on multiple cloud platforms, and is supplied by different vendors.
While many customers are enjoying the benefits of this new environment, they are managing it with an on-premises mentality. There are a number of reasons for this. Early management options in the cloud arrived in the form of familiar on-premises solutions. Security information and event management (SIEM) tools are an integral part of the data centre management posture, and so it seemed like a great idea to copy that posture to the cloud. However, to get the best out of the architectures that cloud providers offer, those management approaches need to evolve. Why, I hear you ask?
The fundamental answer is that the architecture of a cloud configuration is very different to the architectures we are used to seeing in data centres around the world. The management of the cloud therefore needs to follow suit.
On-premises solutions are contained, whereas the cloud is not. On-premises architectures don’t need to consider internal bandwidth charges or worry about placing agents in every corner of the data centre. Both of these are, however, important considerations in the cloud. On-premises management tools have the luxury of sitting outside the fence, calling in, and demanding every last scrap of data (in the form of large log files from every component) in an attempt to collate it and tell users what to do should an incident occur.
Cloud architecture is different. The control, management and data planes have been deliberately separated to enable performance not possible in an on-premises alternative. This separation of planes is something customers need to appreciate when it comes to security. Remember the shared security model? Cloud vendors look after parts of the data and control planes, but not the data flowing through them. You remain responsible for the management plane, and having visibility into all three planes is key.
In terms of management, you don’t want to generate large log files and then wait for them to be passed to a tool that requires professional analysis. That approach squanders the flexibility the cloud offers in this field.
Instead, the cloud demands a cloud-generation management solution. Size doesn’t matter in the cloud: in fact, smaller is better. A newer approach is to use tools that have ‘insiders’ talking to endpoints via application programming interface (API) calls, gathering all the data available and sifting through it close to the source. Once complete, they pass small chunks of pertinent data on to tools that don’t require professional intervention to decipher, while offering the ability to automate remediation.
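As an illustrative sketch only, not tied to any vendor’s product or API, the ‘sift close to the source, forward only what matters’ idea might look like this. The event shapes and severity labels below are assumptions made for the example:

```python
# Hypothetical sketch: filter raw events close to the source and forward
# only the pertinent subset, rather than shipping every log line off-site.
# Event fields and severity labels are illustrative assumptions.

RAW_EVENTS = [
    {"source": "vm-01", "severity": "info",     "msg": "heartbeat ok"},
    {"source": "vm-01", "severity": "critical", "msg": "failed logins spiked"},
    {"source": "db-02", "severity": "info",     "msg": "backup completed"},
    {"source": "db-02", "severity": "warning",  "msg": "unencrypted connection"},
]

def sift(events, keep=frozenset({"critical", "warning"})):
    """Keep only the events worth forwarding to the central tool."""
    return [e for e in events if e["severity"] in keep]

pertinent = sift(RAW_EVENTS)
print(f"forwarding {len(pertinent)} of {len(RAW_EVENTS)} events")
```

The point of the pattern is the ratio: most of the data never leaves the source, so internal bandwidth charges stay low and the downstream tool receives only actionable items.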
Some tools now offer automatic remediation of the issues they find. This is a major step forward in the ability to monitor and manage your cloud infrastructure. Some can also integrate with existing SIEM tools to help with reporting and alerting. If you are a small or medium-sized business, you may not have the resources to keep this type of skill set on the payroll, so an extra step in the journey is required.
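To make the automatic-remediation idea concrete, here is a minimal sketch assuming a simple mapping from detected issues to fix functions. The issue names, fix functions and fallback behaviour are all hypothetical, not a description of any real tool:

```python
# Hypothetical sketch of automatic remediation: map each detected issue to
# a remediation action and apply it; escalate anything unrecognised.
# Issue names and fix functions are illustrative assumptions.

def close_open_ports(resource):
    resource["open_ports"] = []
    return "closed open ports"

def enable_encryption(resource):
    resource["encrypted"] = True
    return "enabled encryption"

REMEDIATIONS = {
    "open_port": close_open_ports,
    "unencrypted_storage": enable_encryption,
}

def remediate(issue, resource):
    """Apply the mapped fix, or fall back to alerting (e.g. via a SIEM)."""
    fix = REMEDIATIONS.get(issue)
    if fix is None:
        return f"no automatic fix for {issue!r}; escalate for review"
    return fix(resource)

vm = {"open_ports": [22, 3389], "encrypted": False}
print(remediate("open_port", vm))
print(remediate("unknown_issue", vm))
```

The fallback branch is the important design choice: anything the tool cannot fix safely on its own is surfaced to a human or an existing SIEM rather than silently ignored.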
The journey from an on-premises environment to a cloud-based architecture may be daunting for some, but, if the latest tools are researched beforehand, it can be a smooth process. Security, as always, is key. This means security built in from the start and not treated as a ‘bolt on’ later. If you utilise the latest cloud management tools to make sure you stay in check, you can be sure to keep a solid eye on your journey and keep everything in shape once you’ve arrived.
Things happen more quickly in the cloud and, in the world of security, that has both positives and negatives.
The positives are clear. Threats are now on a zero-day schedule, and you need to be prepared for the next threat and ready to deploy the next layer of defence. Failure to plan for this makes it highly likely that your organisation will suffer significant outages, financial pain or, worse, loss of data. A quick reaction is key, and being on the inside will give you the advantage.
The negatives are not so clear. How do you make sure your new defences are deployed correctly? How do you get them to work with your existing defences? And how do you train your workforce to report back on the latest threats, should they hit your organisation?
Because of the speed of change these days, you rarely see a brand-new product replacing the previous version. What you tend to find is constant improvement of the existing foundation, layering extra features and protection onto the base solution. This layered approach can be seen in email security products. Entry-level protection is common, familiar, and similar across vendors supplying such services. However, as threats developed from spam and phishing to ransomware and spear phishing, additional layers of protection were added to the original stack. There is no need to create a new master solution that addresses all threats: the existing layers serve a purpose, and additional ones are added to address new and more complex threats.
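The layered approach described above can be sketched as a pipeline where each new threat type adds a check on top of the existing stack rather than replacing it. The rules below are deliberately simplistic assumptions, just to show the shape:

```python
# Hypothetical sketch of layered email security: each layer is a small
# check appended to the existing stack as new threat types emerge.
# The keyword and attachment rules are illustrative assumptions only.

def spam_layer(msg):
    return "lottery" in msg["body"].lower()

def phishing_layer(msg):
    return "verify your password" in msg["body"].lower()

def ransomware_layer(msg):
    return any(att.endswith(".exe") for att in msg["attachments"])

# New layers are appended here as threats evolve; old ones keep working.
LAYERS = [spam_layer, phishing_layer, ransomware_layer]

def scan(msg):
    """Run the message through every layer; block on the first hit."""
    for layer in LAYERS:
        if layer(msg):
            return f"blocked by {layer.__name__}"
    return "delivered"

mail = {"body": "Please verify your password today", "attachments": []}
print(scan(mail))
```

Adding protection against the next threat type is a one-line append to `LAYERS`, which mirrors the article’s point: the base solution is extended, not rewritten.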
Cloud management tools are now evolving to offer extra strings to their bows. Having visibility into the whole deployment and possessing the ability to report from all endpoints without disrupting the latest cloud architectures is key to good management.
Chris Hill is regional vice president of public cloud at Barracuda Networks