Your disaster recovery planning is overcomplicated. That’s actually to be expected – you have many data sets, everything from a single file, to an application, to an entire server, to all of your network’s Tier 1 services, and each one requires slightly different handling when it comes to exactly how (and how quickly) you’ll need to recover. And when you add in true business continuity – where the expectation is, in essence, to never be down – the plan changes completely to one that ensures services are available instead of merely recoverable.
While there’s no real way around the complexities that come with business continuity and disaster recovery planning, you can avoid many of the failures that accompany these kinds of plans by using the backup rule of three. If you’re not familiar with it, it acts as a guideline defining where a proper backup – regardless of the data set being protected – should exist.
The backup rule of three is simple enough – and you may already be following some or all of it:

3 – Keep three copies of any data set you want to protect (the live data counts as one).
2 – Store those copies on two different types of media.
1 – Keep at least one of those copies offsite.
The premise here is to ensure you have recoverability. For those of you no longer on tape, remember when tapes failed during a restore? That’s what we’re trying to avoid. Even with the redundancy built into really sophisticated storage, an entire disk array can still fail in your time of need, making recovery impossible. So, the first part of the rule is to have three copies of any data set you wish to protect.
You already have the live copy of your data. The good news is that it actually counts as one! Now you need two more copies. A great solution here is a hybrid-cloud backup and recovery solution that mirrors backups on-premises and in the cloud, creating the two copies you need.
Another method is continuous replication of a VM to an alternate site (which gives you one copy in addition to the live server), plus a backup of the VM image.
There are lots of ways to get to three; the point is to make sure your backups are redundant.
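To make the guideline concrete, here’s a minimal Python sketch of a 3-2-1 check against an inventory of backup copies. The `BackupCopy` fields and the location/media names are illustrative assumptions, not part of any product or the article itself:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    """One copy of a protected data set (fields and names are illustrative)."""
    location: str   # e.g. "prod-server", "cloud-bucket"
    media: str      # e.g. "disk", "tape", "object-storage"
    offsite: bool   # True if the copy lives outside the primary site

def meets_rule_of_three(copies):
    """Check the guideline: three copies, two media types, one offsite.
    The live data set counts as one of the three copies."""
    three_copies = len(copies) >= 3
    two_media = len({c.media for c in copies}) >= 2
    one_offsite = any(c.offsite for c in copies)
    return three_copies and two_media and one_offsite

# The hybrid-cloud example: live data, an on-prem backup, and a cloud copy.
hybrid = [
    BackupCopy("prod-server", "disk", offsite=False),        # live data
    BackupCopy("on-prem-appliance", "disk", offsite=False),  # local backup
    BackupCopy("cloud-bucket", "object-storage", offsite=True),
]
print(meets_rule_of_three(hybrid))  # True

# Two copies sitting on the same local array defeat the rule.
same_array = [
    BackupCopy("prod-server", "disk", offsite=False),
    BackupCopy("same-array-copy", "disk", offsite=False),
]
print(meets_rule_of_three(same_array))  # False
```

Note that the second inventory fails all three tests at once – too few copies, one media type, nothing offsite – which is exactly the trap the following sections warn about.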
This part of the rule is there to make sure you aren’t defeating the first part by simply storing two copies together somewhere. Go back to my storage array example: put two copies of a given backup on that same failing array, and you’re out for the count. So the point here is to keep your two backup copies on completely different media. If you subscribe to either of the two examples above, you’ll have met this part of the rule without even trying.
This last part exists to protect against the loss of an entire location. Say you’re really fighting this whole “cloud” thing and meet parts 3 and 2 by making multiple backups on separate storage arrays in the same building. And then there’s a massive fire. See what I mean? Part 1 keeps you in check – it does sort of force you to use the cloud (or at least a remote data center of some kind), but it provides that needed layer of protection against any kind of site loss.
If you subscribe to either of the backup scenarios I previously mentioned, this last part of the rule is already met. If you don’t, then looking into some way to maintain an offsite copy of backups, VM images or even standby servers will get your organization closer to business continuity, rather than simply focusing on recovering a backup.
Whether your organization’s plans revolve around business continuity or disaster recovery, the backup rule of three gives you the assurance that should anything fail during a recovery scenario – whether it be the backup, the building or anything in between – you have another backup source from which to keep the business running.