Backup of a Backup – Best Practices


Generally, if properly managed, an online backup can be the lifeline that helps a business avoid major data loss after a disaster. After all, online backups are stored offsite, beyond the reach of fire, flood, theft, etc.

However, as in everything, there are some caveats.

One such caveat I’d like to write about involves the “danger” of configuring your online backup to archive data created by another backup program (as opposed to backing up the data directly with the online backup software).

Consider the case of a client who had a rather large ACT! contact database of all their business prospects. The (pretty savvy) end user configured the scheduler within the ACT! program to make a local backup of its database and the underlying support files each day – and to deposit that backup file into a folder under “My Documents”. The file containing the backup data was named “ACT.zip”.

A common practice for most Dr.Backup implementations is to routinely back up the entire My Documents folder – and that’s exactly how this client’s online backup was configured. Each night, like clockwork, Dr.Backup backed up all data that had changed since the last time it ran.

However, unknown to anybody, a problem had developed with the internal ACT! scheduler: an updated copy of ACT.zip was no longer being created in the user’s documents folder. Nobody noticed that this automated process had stopped working.

Of course you know the next chapter of this saga.

The customer suffered a hard disk failure, and when their data was restored from the online backup, it contained an aged copy of the ACT.zip file – one that was several months old.

Needless to say, nobody was happy with that outcome.

Sometimes there is no choice but to back up a file/folder created by an application. But best practices dictate that we ALSO configure the online backup to take a “snapshot” of the actual database files themselves – and not rely solely on an unmonitored application to make its own backup.
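When you do have to rely on an application’s own backup file, at least monitor it for staleness. As a minimal sketch (the path and the 26-hour threshold are assumptions for illustration, not part of any Dr.Backup or ACT! product), a small script like this could have flagged the silently failing ACT! scheduler months earlier:

```python
import time
from pathlib import Path

# Hypothetical location and freshness threshold -- adjust for your setup.
BACKUP_FILE = Path.home() / "Documents" / "ACT.zip"
MAX_AGE_HOURS = 26  # daily schedule, plus a little slack

def backup_is_stale(path: Path, max_age_hours: float) -> bool:
    """Return True if the backup file is missing or older than the threshold."""
    if not path.exists():
        return True
    age_hours = (time.time() - path.stat().st_mtime) / 3600
    return age_hours > max_age_hours

if backup_is_stale(BACKUP_FILE, MAX_AGE_HOURS):
    print(f"WARNING: {BACKUP_FILE} is missing or stale -- investigate!")
```

Run daily from a scheduler of your own, this turns a silent failure into a visible one – which is the whole point.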

Had this approach been taken, the client would have had two independent backups – one maintained locally and a second, completely distinct backup kept offsite. The results of the recovery would have been greatly improved.

So you might wonder, what was the reason for not making a second independent backup copy? All I can say is “it was economics, my dear Watson” and a lesson learned.
