There is no denying the importance of protecting your client's data through regular backups. It is equally important, however, to establish a well-thought-out retention policy. After all, backups do little good if data is purged from them before it ages to the point that it is truly no longer needed. Thankfully, there are steps you can take to ensure that the data within your VMware environment is retained for an appropriate length of time.
The first step in establishing a solid retention strategy is to define the level of granularity required to meet the organization's data restoration requirements. In the old days, data restoration was about recovering files. Today, file data represents only one of the data types that needs to be protected. Restoration operations may need to be performed at the file, database, application, VM, host server, or physical server level, and each of these levels will likely have its own data retention requirements. For example, application data may need to be retained for longer than file data. As such, a critical first step in building a retention policy is to define retention goals by data type.
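One way to make "retention goals by data type" concrete is to express the policy as a simple schedule that tooling can enforce. The sketch below is a minimal illustration; the data types and durations are assumptions chosen for the example, not recommendations for any particular environment.

```python
from datetime import timedelta

# Hypothetical retention schedule, one entry per data type.
# Every duration here is an illustrative assumption.
RETENTION_POLICY = {
    "file": timedelta(days=365),          # general file data: one year
    "database": timedelta(days=365 * 3),  # databases kept longer
    "application": timedelta(days=365 * 7),
    "vm_image": timedelta(days=90),
    "host_config": timedelta(days=180),
}

def is_expired(data_type: str, age_days: int) -> bool:
    """Return True once a backup of the given type has outlived
    its retention window and may be purged."""
    return timedelta(days=age_days) > RETENTION_POLICY[data_type]
```

Writing the policy down in a machine-readable form like this also gives you a single place to review and audit the retention goals for every data type.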
A second step in developing a data retention strategy is to determine where the backup data will be stored. Although this is a seemingly simple decision, there are some important implications associated with your choice.
First, your backup target choice will dictate the volume of data that can be stored. Since disk-based backup targets have a finite capacity, and since new data is created on a regular basis, your choice of target will ultimately determine how long data can be retained.
Your backup target also plays a role in recovery speed, reliability, and duration. A cloud-based target, for instance, might offer high capacity and isolation from regional disasters, but restorations are usually slow and depend on the availability of Internet connectivity. Ideally, backups should be written in parallel to both a local target and a cloud-based target.
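The parallel-target idea can be sketched as follows. The `MemoryTarget` class and the `.store()` method are illustrative stand-ins, not a real backup API; the point is simply that the same payload fans out to every target in one pass.

```python
from concurrent.futures import ThreadPoolExecutor

class MemoryTarget:
    """Stand-in for a real backup target (local disk, cloud bucket, ...).
    This class is hypothetical, purely for illustration."""
    def __init__(self):
        self.stored = []

    def store(self, data: bytes) -> int:
        self.stored.append(data)
        return len(data)

def back_up(data: bytes, targets: dict) -> dict:
    """Write the same payload to every target in parallel, so the local
    and cloud copies are created together rather than sequentially."""
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(t.store, data) for name, t in targets.items()}
        return {name: f.result() for name, f in futures.items()}
```

In practice, each target would wrap its own transport (local filesystem, object storage), but the fan-out structure stays the same.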
The third step in establishing a solid retention strategy is to put into place a mechanism for creating multi-level retention policies. Suppose for a moment that the organization decides that file data should be retained for one year. That might be fine for most of the organization’s file data, but what happens if you have a department that needs to retain specific files for a longer period of time in order to comply with regulatory requirements? Adjusting the entire retention policy so that all file data is retained for a longer period of time would drive up storage costs. A better solution is to provide users with a mechanism that allows them to make adjustments to the retention period for their own data.
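A multi-level policy of this kind boils down to a resolution rule: the most specific retention setting wins. Here is a minimal sketch of that rule; the "legal" department and the baseline and override durations are assumptions invented for the example.

```python
from datetime import timedelta

# Baseline retention by data type (illustrative value).
BASELINE = {"file": timedelta(days=365)}

# Narrow overrides for data that must be kept longer, e.g. to comply
# with regulatory requirements. Keys are (department, data_type) pairs.
OVERRIDES = {("legal", "file"): timedelta(days=365 * 7)}

def retention_for(department: str, data_type: str) -> timedelta:
    """A departmental override beats the organization-wide baseline."""
    return OVERRIDES.get((department, data_type), BASELINE[data_type])
```

Because only the override entries carry the longer window, the departments with special requirements keep their data longer without inflating storage costs for everyone else.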
As previously mentioned, disk-based backup targets have a finite capacity. That being the case, it has become common practice to use storage automation to move aging data from the backup target to low-cost, high-capacity archive storage.
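The tiering decision itself is usually just an age threshold. The sketch below assumes a hypothetical 90-day cutoff and a simple list-of-dicts representation of the backup catalog, both invented for illustration.

```python
from datetime import datetime, timedelta

# Illustrative threshold: backups older than this are candidates for
# migration off the primary target. Not a recommendation.
ARCHIVE_AFTER = timedelta(days=90)

def select_for_archive(backups: list, now: datetime) -> list:
    """Return the backups old enough to be moved from the primary
    backup target to low-cost archive storage."""
    return [b for b in backups if now - b["created"] > ARCHIVE_AFTER]
```

A scheduled job would run this selection periodically, copy the selected backups to the archive tier, verify them, and only then delete them from the primary target.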
The problem with this approach is that the data in archive storage is often the only copy of that data. As such, you must implement some form of redundancy to prevent the archive storage from becoming a single point of failure. After all, if a data loss event were to occur within your archives, your entire retention policy would essentially become meaningless.
One last step in developing a retention policy is to make sure that your backup software has a good monitoring and reporting engine. On the surface, this would seem to have nothing to do with data retention. Without proper monitoring and reporting, however, it is very difficult to verify that your backup software is actually adhering to the retention policies that you have put in place.
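One simple check a reporting pipeline can run is to compare how far back each job's surviving restore points actually reach against how far back the policy says they should reach. The job names, field names, and data shape below are all hypothetical, invented for this sketch.

```python
def verify_retention(jobs: dict, retention_days: int) -> list:
    """Flag jobs whose surviving restore points do not cover the full
    retention window. `jobs` maps a job name to its total age and the
    age of its oldest surviving restore point, both in days."""
    violations = []
    for name, info in jobs.items():
        # A job can only be expected to cover the window it has existed for.
        expected = min(info["job_age_days"], retention_days)
        if info["oldest_point_days"] < expected:
            violations.append(name)  # something was purged too early
    return violations
```

A report built on a check like this turns "we trust the software honors the policy" into something you can verify on a schedule.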