Top disaster recovery issues
It is no surprise that disaster preparation is top of mind these days. The images and stories that came out of Japan following the devastating earthquake and tsunami, and that have recently been repeated in end-of-year reviews, inevitably lead people to wonder, “What would I do if the earth moved or water flooded my state, city or neighborhood?” The floods in Thailand in late 2011, which resulted in major hard-drive shortages, are another example of a disaster that can affect business continuity.
From a business perspective, much of disaster planning revolves around the all-important data backup and recovery processes. Whether a disruption is the result of a cataclysmic event or a hardware malfunction, real business continuity cannot be maintained in this digital age without off-site backups. But off-site backups are no magic solution for disaster recovery. There’s a lot more to the story.
There are many factors that can complicate the recovery process if businesses aren’t careful. Here are six of the biggest issues organisations may face as they plan for continuity.
Back up activation keys and licensing
Application data isn’t much good if your company has not also backed up the activation keys and licensing information needed to reinstate its software. Many organisations forget how important it is to keep duplicates of that activation and licensing information in safe, redundant locations to ensure a quick recovery after a disruption; a simple way to automate the duplication is sketched below.
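As a purely illustrative sketch in Python, the routine below copies a directory of license files and activation-key records to two independent mirror locations and verifies each copy by checksum. Every path here is a hypothetical placeholder, not a prescription from any particular product.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical paths for illustration; substitute your own locations.
LICENSE_DIR = Path("/srv/licenses")       # activation keys and license files
MIRRORS = [
    Path("/mnt/offsite-a/licenses"),      # first off-site replica
    Path("/mnt/offsite-b/licenses"),      # second, independent replica
]

def sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

for mirror in MIRRORS:
    mirror.mkdir(parents=True, exist_ok=True)
    for src in LICENSE_DIR.iterdir():
        if not src.is_file():
            continue
        dst = mirror / src.name
        shutil.copy2(src, dst)
        # Verify the copy before trusting it as part of the recovery plan.
        if sha256(src) != sha256(dst):
            raise RuntimeError(f"Copy verification failed: {dst}")
```

Run on a schedule, a script along these lines keeps the licensing records as current as the application backups they accompany.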
Securely store encryption keys
Similarly, encrypted backup tapes are about as useful as doorstops when an organisation loses its encryption key information. Recent public-sector data breaches have proven that organisations cannot get by with keeping backup data unencrypted, which makes it critical to plan and execute a key management strategy. You’ll need to think about how keys will be stored and recovered, separately from the data they protect, to ensure a smooth data recovery process.
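As a minimal sketch of the principle, assuming Python’s cryptography package, the snippet below encrypts a backup file with a symmetric key and writes that key to a separate escrow location. The file names are illustrative only; the point is that the key must be stored, and recoverable, independently of the data it protects.

```python
from pathlib import Path

from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative paths; in practice the escrow copy belongs in a key
# management system or vault, never on the same media as the backup.
backup = Path("payroll-2011.tar")
key_escrow = Path("/mnt/key-escrow/payroll-2011.key")

key = Fernet.generate_key()
key_escrow.write_bytes(key)                       # store the key separately
encrypted = Fernet(key).encrypt(backup.read_bytes())
Path("payroll-2011.tar.enc").write_bytes(encrypted)

# Recovery: the ciphertext is useless without the escrowed key.
recovered = Fernet(key_escrow.read_bytes()).decrypt(encrypted)
assert recovered == backup.read_bytes()
```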
Account for application customisations
Many organisations will spend millions of pounds in consulting fees to create customised modules and settings for enterprise software, only to see it all go up in smoke following a disaster. A big mistake businesses often make is to back up all of the appropriate application data but forget to duplicate the customisation information. Remembering to fill this gap ahead of time can not only save money, but also prevent prolonged business disruption following an incident.
What if the superuser is incapacitated?
The saga of Terry Childs, the former San Francisco city network administrator who refused to divulge key infrastructure passwords to his bosses, should teach all organisations that it is never wise to entrust all of your critical account login information to one superuser. This is a single point of failure in privileged identity, and it becomes particularly painful if the individual who keeps critical login information in his or her head passes away. Childs eventually saw the light after some jail time, but if the keeper of your organisation’s password secrets is no longer alive, no amount of pressure will solve your problem. This scenario highlights the importance of distributed, fault-tolerant processes and tools that allow you to replicate accounts; one well-known approach is sketched below. Doing so will give your organisation the peace of mind that it can recover systems and accounts in the event of a major catastrophe, or should key staff become incapacitated or simply out of reach.
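One well-established technique for avoiding a single keeper of secrets is to split a master credential among several custodians so that any quorum of them can reconstruct it, and no smaller group can. The sketch below is an illustrative Python implementation of Shamir’s secret sharing (offered as an example of the idea, not as any specific product’s mechanism), splitting a short passphrase into five shares of which any three suffice.

```python
import secrets

PRIME = 2**127 - 1  # field modulus; secrets must encode to integers below this

def make_shares(secret: int, threshold: int, count: int) -> list[tuple[int, int]]:
    """Split `secret` into `count` shares; any `threshold` of them recover it."""
    # Random polynomial of degree threshold-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def poly(x: int) -> int:
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    return [(x, poly(x)) for x in range(1, count + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange-interpolate the shared polynomial at x = 0 to get the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

# A short master passphrase (15 bytes or fewer, so it fits below PRIME).
passphrase = b"sf-core-router"
shares = make_shares(int.from_bytes(passphrase, "big"), threshold=3, count=5)

# Hand one share to each of five custodians; any three can reconstruct.
secret = recover(shares[:3])
assert secret.to_bytes((secret.bit_length() + 7) // 8, "big") == passphrase
```

With the shares held by different people in different locations, no single death, arrest or resignation locks the organisation out of its own infrastructure.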
Don’t let VMs prevent partial recovery
The fact that most backup and recovery operations are “all or nothing” propositions didn’t pose much of a problem a few short years ago. But the wide-scale adoption of virtualisation has thrown a wrinkle into many disaster recovery plans. During the triage stage that often follows a disruption, organisations may need to immediately recover only a piece of the infrastructure (say, one critical virtualised server that handles email messaging). But the way VMs are typically backed up can make it impossible to restore one machine without restoring many others alongside it. As organisations think about business continuity, they need to plan for partial recovery of the most critical business services first.
The cloud can’t save you
Disaster recovery is often at the top of the laundry list of benefits touted by public cloud vendors. True, the flexibility and resiliency of the cloud can promote continuity for data contained within that architecture. But therein lies the big catch: much of your data is not contained in the cloud and never will move there. In fact, if your organisation faces any of today’s prevailing regulatory compliance mandates, it is likely that your most sensitive data will remain in-house. So any but the smallest organisation that pins its disaster recovery hopes on the cloud may need to rethink those assumptions.
When laying out their disaster recovery plans, organisations should repeatedly seek out their worst-case scenarios. And they should test their recovery efforts. Businesses often struggle to recover from a disaster because they’ve never actually practised the procedures they’ve planned. Resource limitations and fear of downtime often keep organisations from achieving ideal full recovery simulations, but at the very least they should perform repeated, limited recovery testing; one simple form of such a test is sketched below. Doing so can not only streamline your future recovery, but also test how well your IT vendors stand behind their promises. You could be surprised how some vendors struggle to meet their SLAs, while others bend over backwards even when it isn’t spelled out in the contract.
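As one hedged illustration of a limited recovery test, the Python sketch below assumes a convention (hypothetical, not from this article) in which each tar backup ships with a JSON manifest of SHA-256 digests written at backup time. The drill restores the archive into a scratch directory and checks every file against the manifest.

```python
import hashlib
import json
import tarfile
from pathlib import Path

def verify_restore(archive: Path, manifest: Path, scratch: Path) -> bool:
    """Restore `archive` into `scratch` and verify each file against the
    digests recorded in `manifest` ({"relative/path": "sha256-hex"})."""
    expected = json.loads(manifest.read_text())
    with tarfile.open(archive) as tar:
        tar.extractall(scratch)  # scratch directory, never production paths
    healthy = True
    for rel_path, digest in expected.items():
        restored = scratch / rel_path
        if not restored.is_file():
            print(f"MISSING  {rel_path}")
            healthy = False
        elif hashlib.sha256(restored.read_bytes()).hexdigest() != digest:
            print(f"CORRUPT  {rel_path}")
            healthy = False
    return healthy

# Example drill: fail loudly now, not during the real disaster.
if not verify_restore(Path("backup.tar"), Path("backup.manifest.json"),
                      Path("/tmp/restore-test")):
    raise SystemExit("Recovery drill failed: investigate before you need it.")
```

Even a narrow test like this, run regularly, proves that the backups are readable, that the restore procedure works, and that the people running it know the steps.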