Layered security in the cloud
When designing your cloud architecture, you may notice several differences between the cloud-computing environment and the “old world” of physical infrastructure. Two of the main differences are elasticity and dynamism, both part of the cloud’s DNA.
The fact that security-related components can be easily tested, evaluated and deployed allows many companies – both established firms and newly founded start-ups – to launch their solutions partly or entirely in the public cloud. Moreover, I argue that by combining the tools supplied by a cloud provider with external third-party solutions, you achieve higher levels of security – not to mention peace of mind.
Best-of-breed
By thoroughly evaluating your architecture, you can highlight the main areas of concern derived from either a business need or a regulatory requirement, and then find the optimal security solution to fit that specific need. The optimal solution is a very personal matter, as each company has its own unique set of requirements, financial capabilities and technical expertise.
One example is Amazon’s EC2 Security Groups. The personalized, per-component firewall they provide is undeniably a good solution. However, there are better ones that work just as well on AWS’s platform while offering improved functionality – for example, Dome9. Not only does Dome9 do what AWS does, it takes things a step further: it actively scans your security groups’ ports and emails you alerts (configured easily via SNS) if any changes are made.
Additionally, Dome9 maintains a comprehensive historical log of everything that occurred in your environment, which helps with regulatory compliance. The main highlight, however, is Clarity, a visual tool showing all relationships between security groups.
Let’s say you have a small-to-medium environment that consists of 50 instances and about 35 different security groups. Are you 100 percent positive the relationships between all the security groups are correct? With this tool you don’t have to think about it or review one security group after another: it’s all there, out of the box. It’s like street view for your AWS security.
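To make the idea concrete, here is a minimal sketch of the kind of security-group audit described above. This is not Dome9’s actual implementation – it is a hypothetical illustration that operates on plain dicts shaped like the output of EC2’s DescribeSecurityGroups API; in a real deployment you would fetch the groups via boto3 and push findings through SNS.

```python
# Hypothetical sketch of security-group auditing: scan each ingress rule
# and flag port ranges exposed to the whole internet (0.0.0.0/0).
# In production, fetch groups with boto3's describe_security_groups().

def find_open_ports(security_groups):
    """Return (group_id, from_port, to_port) tuples reachable from 0.0.0.0/0."""
    findings = []
    for group in security_groups:
        for rule in group.get("IpPermissions", []):
            for ip_range in rule.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append(
                        (group["GroupId"], rule.get("FromPort"), rule.get("ToPort"))
                    )
    return findings

groups = [
    {"GroupId": "sg-web", "IpPermissions": [
        {"FromPort": 443, "ToPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"FromPort": 22, "ToPort": 22, "IpRanges": [{"CidrIp": "10.0.0.0/8"}]},
    ]},
]

print(find_open_ports(groups))  # only the HTTPS rule is world-open
```

Running a check like this on a schedule and alerting on the diff is, in essence, what turns a static firewall into an actively monitored one.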
Another example is disk encryption. AWS only rolled out its own solution in May 2014, while third-party solutions have been around for longer. But if you really think about it, what’s so special about volume encryption? Everyone is doing it, and everyone is using roughly the same AES-256 cryptography. What’s your added value?
Take Porticor, for example. There’s encryption, and then there’s smart encryption that gives you a real advantage as a business. In Porticor’s case, they essentially brought the “Swiss banker” approach to the cloud, offering patented split-key encryption and homomorphic key management.
“Split key encryption acts like a Swiss banker and splits encryption keys to two parts,” explained Ariel Dan, Vice President at Porticor. “The first part is common to all data objects in the application, remains the sole possession of the company, and is unknown to Porticor or the cloud provider. The second part is different for each data object and is stored by the Porticor Key Management Service.”
Every time the application accesses the data store, a Porticor Virtual Appliance deployed in the customer’s cloud account uses both parts of the key to dynamically encrypt and decrypt data. The appliance combines high security with virtually no impact on application performance or latency.
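The split-key idea can be sketched in a few lines. Note this is an illustrative simplification, not Porticor’s actual patented algorithm: the effective per-object key is derived from two halves – one held only by the customer, one held by the key-management service – so that neither party alone can reconstruct the data key.

```python
# Illustrative sketch of split-key derivation (NOT Porticor's actual scheme):
# the per-object data key only exists when both halves are combined.
import hashlib
import secrets

def derive_object_key(master_part: bytes, object_part: bytes) -> bytes:
    """Combine the two key halves into a 256-bit per-object key."""
    return hashlib.sha256(master_part + object_part).digest()

master_part = secrets.token_bytes(32)   # stays in the customer's sole possession
object_part = secrets.token_bytes(32)   # stored by the key-management service

key = derive_object_key(master_part, object_part)
assert len(key) == 32                   # usable as an AES-256 key
# Deterministic: the same two halves always yield the same key,
# while a different object part yields an unrelated key per data object.
assert derive_object_key(master_part, object_part) == key
```

The security property is the point: stealing the key-management half alone, or the customer half alone, yields nothing usable.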
Porticor mitigates the threat of key theft both in storage and in use
Layered security
In the physical data-center age, layered security was hardly an issue: companies invested the bulk of their infrastructure budget in the strongest firewall they could both afford and operate, a switch on which they defined the different VLANs, and a good web application firewall. Beyond these main tools, some investment was occasionally made in disk encryption capabilities, and perhaps in one or two additional components.
In the Amazon cloud, companies can take full advantage of Virtual Private Cloud, subnets, multi-factor authentication (such as one-time passwords), security groups, physical segregation using availability zones and regions, site-to-site VPN, and more.
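A common way to layer these controls is a tiered security-group design, sketched below as plain data with hypothetical group names: each tier admits traffic only from the tier in front of it, so a compromise of the web tier does not expose the database directly to the internet.

```python
# Hypothetical three-tier layout expressed as data: only the web tier
# accepts traffic from anywhere; each inner tier trusts only its neighbor.
LAYERS = {
    "sg-web": {"allow_from": ["0.0.0.0/0"], "port": 443},   # public HTTPS
    "sg-app": {"allow_from": ["sg-web"], "port": 8080},     # app servers
    "sg-db":  {"allow_from": ["sg-app"], "port": 5432},     # database
}

def internet_reachable(layers):
    """Return the groups whose rules admit traffic from anywhere."""
    return [g for g, rule in layers.items() if "0.0.0.0/0" in rule["allow_from"]]

print(internet_reachable(LAYERS))  # ['sg-web'] — only the web tier faces the internet
```

Referencing security groups by name (rather than CIDR blocks) in rules is what makes this layering enforceable even as instances come and go.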
Leverage
The bottom line is that companies can achieve better security at lower cost by migrating their infrastructure to a public cloud. This is achieved by leveraging the right solution at the right locations in your environment – whatever makes sense for you financially, operationally and from a regulatory standpoint.
The dynamic nature of the cloud ensures that the industry will continue to deliver solutions fitting the many existing and new needs that constantly arise.
CloudEndure is another example of a third-party solution, tailored to customers who run on public clouds and have challenging Service Level Agreements with their own customers.
Significant production downtime can kill a business. In an era of abundant choice, the loss of direct revenue is almost negligible compared with the reputational damage: customers rely heavily on a vendor’s reputation and uptime record, which can also be measured as mean time between failures.
CloudEndure solves the problem of downtime in the cloud by delivering continuous replication of your entire cloud application stack. A single click creates an exact replica of the entire application stack at an alternate region within minutes, complete with instances, attached volumes, network topology, load balancers, security groups, firewalls, and more.
What are the alternatives? Building and maintaining a fully operational active-passive or active-active setup across two or more environments ensures above-99% uptime, but it also guarantees that your costs are “just” doubled. I think this speaks for itself.
Summary
Because of the high costs associated with managing physical environments, companies try to avoid any change to their cabinets. This places a lot of emphasis on planning, because the cost of a mistake is dire. For example, if a company deploys a firewall and later realizes that it does not fit their needs, it will take months of re-planning, testing, purchasing, shipping, deploying and re-testing to roll out another one. The cost of man-hours, flights, and so on is typically underestimated and often ends up being the most expensive part of the project’s budget.
On the other hand, replacing even the most pivotal of systems in your cloud environment takes anywhere from hours to several days (including tests). Launching the relevant instance type from a machine image in the right facility – one that already includes the right operating system and application – takes only minutes.
Those who have already moved to the cloud can probably agree with most – if not all – of what I have written in this article. Someone once told me that the cloud is not a question of IF it should be used, but WHEN. To those who have yet to take the leap of faith, I would just say that it takes only a small test to convince yourself that the cloud is the only sensible way to go.