CIO guide to disaster recovery: 5 lessons learned the hard way

The trouble with disaster recovery is you’re planning for events that are, by definition, unexpected. Nobody knows for certain when or where a hurricane will do the most damage. Power outages strike without warning, and ransomware attacks come from out of the blue.

If you want to sleep better at night, you need a sound disaster recovery (DR) program that protects your organization from becoming the next headline. The first step toward getting disaster recovery right is avoiding the most common mistakes. At OnX, a CBTS company, we find that the five most common DR mistakes are:

1. Underplanning: Being reactive instead of proactive.

An astonishing number of organizations lack a documented disaster recovery plan. In a recent survey reported by CIOinsight.com, 40 percent of the organizations queried admitted they did not have one. The survey went out to more than 400 IT pros at firms with 50 to 1,000 employees across 23 industries.

Companies in this size range have plenty at stake when it comes to protecting their IT systems from critical downtime. The survey found that 65 percent had lost customers because of IT downtime, for instance, and 26 percent had suffered delays in delivering products and services.

By contrast, 78 percent of the companies that were confident about their disaster recovery strategy had a documented DR plan.

The variables in today's IT (people, patches, cybercrime, and software updates) make it extremely difficult to recover systems without a written plan that spells out exactly what to do when downtime hits and lays out methodical procedures for recovering as quickly as possible. And as companies become ever more dependent on IT, they need to think seriously about establishing a well-documented DR program.

2. Overplanning: Trying to do too much, too soon.

Companies can tie themselves in knots trying to plan for every eventuality, bringing so many apps, networks, and systems into scope that they become overwhelmed.

Proper DR planning starts with recovery time objectives (RTOs) and recovery point objectives (RPOs), grounded in an understanding of which systems are most critical to keeping your organization running without losing revenue or affecting clients. The RTO establishes how soon critical systems must be back online, and the RPO establishes how much data loss you can tolerate, measured by how far back in time the recovery point can go.

Eventually, you want to recover all of your systems and data across the organization, but a top-down tiering of priorities lets you focus your resources and provide the greatest protection where it is needed most. Your disaster recovery plan therefore needs tiers that set priorities according to your RTOs and RPOs.
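To make that tiering concrete, here is a minimal Python sketch of how recovery tiers with RTOs and RPOs might be recorded. The tier names, system names, and targets are hypothetical placeholders, not recommendations for any particular environment.

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class RecoveryTier:
    """One priority tier in a DR plan (illustrative structure, not a standard)."""
    name: str
    rto: timedelta   # how soon these systems must be back online
    rpo: timedelta   # how much data loss is tolerable, measured back from the outage
    systems: list[str]

# Hypothetical tiering: names, systems, and targets are placeholders.
TIERS = [
    RecoveryTier("Tier 1 - revenue critical", rto=timedelta(hours=1),
                 rpo=timedelta(minutes=15), systems=["order-entry", "payments"]),
    RecoveryTier("Tier 2 - business important", rto=timedelta(hours=8),
                 rpo=timedelta(hours=4), systems=["crm", "reporting"]),
    RecoveryTier("Tier 3 - deferrable", rto=timedelta(days=3),
                 rpo=timedelta(hours=24), systems=["archive-search"]),
]

for tier in TIERS:
    print(f"{tier.name}: recover within {tier.rto}, "
          f"lose no more than {tier.rpo} of data -> {', '.join(tier.systems)}")
```

Writing the tiers down in a structured form like this makes it easier to review them with the business and to check later that the plan still matches what is actually running.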

3. Not enough testing: If you have not tested and proven your ability to recover, you most likely will fail.

Until it’s been tested, a disaster recovery plan is hypothetical—an educated guess on the processes and methodologies required to bring systems back online. You have to test your DR program a minimum of once a year, and preferably every quarter.

This is no small challenge. Recovery procedures must be documented as a clear sequence of steps, and each test must prove or disprove that those steps are correct. The first time you test, you will invariably find needed changes; the recovery documentation must then be updated and the test repeated to prove success.
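As an illustration of what a sequenced, provable test can look like, here is a minimal Python sketch. The runbook steps and check functions are hypothetical stand-ins for your actual recovery tooling and documentation.

```python
from datetime import datetime, timezone

# Hypothetical runbook: each step pairs a documented action with a check that
# proves (or disproves) the step worked. Real steps would call your actual
# recovery tooling; these placeholders only illustrate the sequencing.
RUNBOOK = [
    ("Fail over database to recovery site", lambda: True),
    ("Restore application servers from replicas", lambda: True),
    ("Repoint DNS / load balancer to recovery site", lambda: False),  # simulated gap
    ("Smoke-test critical user journeys", lambda: True),
]

def run_dr_test(runbook):
    """Execute steps in order, stop at the first failure, and return a record
    that can go into the test report and drive documentation updates."""
    results = []
    for step, check in runbook:
        ok = check()
        results.append({"step": step, "passed": ok,
                        "timestamp": datetime.now(timezone.utc).isoformat()})
        if not ok:
            break  # a failed step means the documented sequence needs revision
    return results

for entry in run_dr_test(RUNBOOK):
    status = "PASS" if entry["passed"] else "FAIL - update documentation and retest"
    print(f'{entry["timestamp"]}  {status}: {entry["step"]}')
```

The point of recording results step by step is that the next test, run by a different person, starts from the corrected documentation rather than from memory.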

At OnX, a CBTS company, we've found that securing buy-in from stakeholders across the organization is crucial. When the leadership team is on board, it's easier to get the cooperation of their staff, which you'll need to coordinate the tests.

Our blog post on DR testing explains these challenges in greater detail.

4. Poor documentation: Leaving inadequate guidance on your DR testing.

Documenting the nuts and bolts of DR testing is a chore whose value is easy to overlook or underestimate.

Documentation provides a roadmap for navigating the complexities of DR testing. The whole point of creating a DR testing program is to have a standard operating procedure that multiple people can implement.

Too many IT operations place DR testing responsibilities on the shoulders of one person or a small team. If those people lack the resources, skill sets, or experience to recover, you are likely sinking investment into a false promise of protection.

See our blog post on documenting DR testing for more details.

5. Creating a one-time event: DR is a living program that must continually evolve.

Your IT environment never stops evolving. Servers, switches, operating systems, virtual machines, and software platforms change by the hour, day, week, and more.

That's why no single point-in-time recovery plan stays valid for long. You need a plan that meshes your technologies with your business goals and recovery objectives as they continually evolve.

If your DR program collects dust over months or years and no longer mirrors your current production environment, it could prove close to worthless when trouble erupts. Make sure your DR plan keeps pace with changes in your IT systems and, as noted above, is tested regularly.
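One practical way to catch that kind of drift is a routine comparison between what is running in production and what the DR plan actually covers. The sketch below uses hypothetical, hard-coded inventories; in practice they would come from your CMDB, hypervisor, or cloud provider's inventory tooling.

```python
# Placeholder inventories for illustration only.
production_systems = {"order-entry", "payments", "crm", "reporting", "new-analytics-db"}
dr_plan_systems = {"order-entry", "payments", "crm", "reporting", "retired-legacy-app"}

unprotected = production_systems - dr_plan_systems  # running in prod, missing from the plan
stale = dr_plan_systems - production_systems        # in the plan, no longer in prod

if unprotected:
    print("Add to DR plan:", ", ".join(sorted(unprotected)))
if stale:
    print("Remove from DR plan:", ", ".join(sorted(stale)))
if not unprotected and not stale:
    print("DR plan matches the production inventory")
```

Running a check like this on a schedule, and before every DR test, keeps the plan tied to the environment it is supposed to protect.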

A partner for your DR goals and recovery objectives

OnX, a CBTS company, helps organizations ranging from individual corporations to global enterprises develop robust disaster recovery programs. We can configure rapid failover and duplicate operating environments in the cloud to shrink downtime in your most vital systems to a minimum.

If you'd rather get out of the DR business altogether, our Disaster Recovery as a Service (DRaaS) offering delivers holistic data protection, backup, and recovery. Our experts can design an optimal DR solution that meets your objectives and helps close gaps in your IT team's skill set.

How do we do it? Check out this case study describing our work with a Florida company that faced a direct hit from Hurricane Irma.


Hurricane Irma No Match for Wittock CPA’s Cloud Hosting and Business Continuity Plan

Read the case study