A Step by Step Guide to File Backup | Secure Cloud Backup Software | Nordic Backup
File-level backups, or, more simply, “file backups,” are exactly what they sound like: backups of individual files. Files can be backed up to or from a desktop, server, network share, external drive, and so on. Any media or device with a file system can serve as the source or destination of a file backup.

File backups have both strengths and weaknesses, but they are among the most common types of backup because of the familiarity that comes with instantly recognizing what is yours: your own folder structure and files.
This is what makes file backups the preferred choice for beginners. They meet the requirements of many disaster scenarios and use cases, and they are simple for just about anyone to understand on a basic level, even with no experience, because people are already familiar with navigating their own file system.
At a minimum, a good file backup will give you all of the following capabilities:

  • Back up to/from any kind of local storage, volume, folder, or file system you want, or to cloud storage
  • Provide a means of scheduling the backup to run whenever you want
  • Retain a history of changes to your files for however long you want
  • Compress the backup data to conserve backup storage
  • Send you an email report of backup progress/status

A great file backup will give you even more capability and flexibility:

  • Encrypt your backup data for certainty of privacy in the event that you use public or third-party storage
  • Provide a means of executing custom programs, commands, or scripts before and after the backup
  • Provide a means of scheduling or automating restore procedures to new hardware or a secondary site
  • Provide a means of indexing and searching the backup by name, date, type, or other criteria
  • Provide an automated means of checking the integrity of individual files and the index
  • Provide centralized management where you can monitor and administer all of your backups in one place
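To make the baseline capabilities above concrete, here is a minimal sketch of a scheduled-style file backup in Python: it compresses a folder into a timestamped archive and prunes old archives to enforce a retention window. The paths and the `keep` limit are illustrative assumptions, not a prescription for any particular product.

```python
import tarfile
import time
from pathlib import Path

def backup(source: Path, dest_dir: Path, keep: int = 30) -> Path:
    """Create a timestamped, gzip-compressed archive of `source` in
    `dest_dir`, pruning the oldest archives beyond the `keep` newest."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    # Retention: keep only the `keep` most recent archives for this source.
    for stale in sorted(dest_dir.glob(f"{source.name}-*.tar.gz"))[:-keep]:
        stale.unlink()
    return archive
```

A scheduler (cron, Task Scheduler, or the backup software itself) would invoke something like this at whatever interval you choose; real products add email reporting and encryption on top of the same loop.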
As you might expect, a file-level restore is as simple a process as a file-level backup. It is essentially a more advanced form of a file copy. When you want your backup software to restore a file or folder, select the version of the files you want, tell it where to restore the data, and when it finishes you will see a folder containing your files from that point in time.
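The restore side of that process can be sketched just as briefly. Assuming backups are stored as compressed archives (one per point in time, as in the hypothetical sketch above), restoring a chosen version is a matter of extracting it to a target folder:

```python
import tarfile
from pathlib import Path

def restore(archive: Path, target: Path) -> None:
    """Extract a chosen backup archive into `target`, recreating the
    folder structure exactly as it was at backup time."""
    target.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(target)
```

Selecting "the version you want" simply means picking which timestamped archive to pass in.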

A great advantage of backing up files individually is the ability to keep very long histories without a linear increase in storage usage. Because only a relatively small portion of an organization’s files changes regularly, only one copy per version of each file is required.

More actively modified files will have more versions retained in the backup, giving you many points in time to choose from during restore, while inactive files occupy no additional storage because new versions of them are never created. File backups can easily and inexpensively be adapted for archival purposes, allowing you to recover from isolated mishaps that may have gone unnoticed for a long time.
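The "one copy per version" idea can be illustrated with content hashing, one common way (though not the only one) backup tools detect that a file is unchanged. In this sketch, a file is stored under the hash of its contents, so backing up an unmodified file a hundred times still writes only one copy:

```python
import hashlib
import shutil
from pathlib import Path

def store_version(file: Path, store: Path) -> Path:
    """Copy `file` into `store` under its content hash. An unchanged
    file hashes to the same name, so no duplicate copy is written."""
    digest = hashlib.sha256(file.read_bytes()).hexdigest()
    store.mkdir(parents=True, exist_ok=True)
    blob = store / digest
    if not blob.exists():  # only a genuinely new version costs storage
        shutil.copy2(file, blob)
    return blob
```

A real product would also keep an index mapping file paths and timestamps to these stored blobs, which is what lets you browse versions by date during a restore.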

While file backups are an important pillar in every successful disaster recovery plan, they do not amount to a disaster recovery plan on their own. The trouble with file backups is that their simplicity can lure you into relying on them in situations where file restore performance would be unacceptably poor or would not work at all, making you wish you had tested your recovery plan and gone in another direction.

You cannot reconstruct a bootable, functioning system – not even a simple one – at the file level. You cannot restore a complex database application at the file level. You cannot restore a phone system, email system, or a Microsoft Active Directory at the file level. You cannot restore data types that are not represented by files.

While they are among the three most important tools in the bag, file backups are simply too slow for bulk recovery and make a poor substitute for the more comprehensive backup types we will discuss.

The quick and easy user experience of file restores is misleading. While it is very simple and takes only a few clicks to set up a file restore operation, file restores perform at their best only when they involve a small amount of data.

The value of file-level restore lies in its simplicity, economy, and convenience: restores can be run by regular users without involving upper-level IT staff. That doesn’t mean you get to fire the IT staff and pursue backup no further. The recovery value of file backup vanishes rapidly when the disaster scenario calls for the urgent, bulk restore of a system or disk.

File backups and file restores are very serialized processes. The bigger they get, the slower they are. You need to keep your file backups organized for the same reason you need to keep your production file systems organized. Organization will always save space, time, performance, and sanity in the long run.

Compartmentalize your production file systems and their corresponding file backups in a way that makes sense to your organization and limits the scope of any one file backup. This should come naturally as you identify the most critical 10% of your production data and formulate your disaster recovery plan and restore priorities. Let the layout of your production storage and the arrangement of your file backups directly reflect those restore priorities.

You can compartmentalize by server, by file share, by application, by department, and so on. However deep you want to go is fine as long as it makes sense to your company and satisfies the order of operations in your recovery plan. The result should be multiple file backups, each dedicated to one purpose, that can all back up and restore in parallel or in accordance with your restore priorities, keeping performance predictable and maintenance manageable.
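One way to picture the payoff of this compartmentalization is a set of independent backup jobs running in parallel. The compartment names and the stub job below are hypothetical placeholders; the point is that each compartment is a separate, independently schedulable unit:

```python
from concurrent.futures import ThreadPoolExecutor

def backup_compartment(name: str) -> str:
    # Placeholder for one real backup job scoped to a single
    # compartment (a server, share, application, or department).
    return f"{name}: done"

# Hypothetical compartments, ordered by restore priority.
compartments = ["finance-share", "hr-share", "crm-db-exports"]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(backup_compartment, compartments))
```

Because each job is scoped and independent, a disaster affecting one compartment never forces you to restore (or wait on) the others.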

The only exception would be the case in which one application’s data set is particularly vast, notoriously slow to work with, and impossible to divide. Sometimes a software developer makes a wrong design choice early on and you are stuck maintaining a growing, cumbersome file set for an application. It’s not common, but it does happen, particularly with custom software developed in-house by a small team.

In such a case, you know that this problem data is not going anywhere, probably getting worse, and as long as your business continues to rely on this application, you’ve got to accommodate a growing challenge in your disaster planning.
Backup data mirrors the positive and negative attributes of the production data it represents. Consider two sets of files, each totaling 1 terabyte of data. The first terabyte is represented by a thousand 1 GB files, and the second by roughly 8 million 128 KB files. Uninterrupted transfer of fewer, larger files is far more efficient than transferring millions of tiny files.
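The arithmetic behind that claim is easy to check. If we assume a fixed per-file cost for opening, recording metadata, and closing each file (the 2 ms figure below is purely illustrative), the overhead for the two 1 TB sets differs by orders of magnitude even before any data moves:

```python
TB = 1024 ** 4
GB = 1024 ** 3
KB = 1024

large_files = TB // GB          # ~1,000 files of 1 GB each
small_files = TB // (128 * KB)  # ~8.4 million files of 128 KB each

per_file_s = 0.002  # assumed fixed cost per file (open, metadata, close)
large_overhead_s = large_files * per_file_s  # roughly 2 seconds
small_overhead_s = small_files * per_file_s  # roughly 4.7 hours
```

Whatever the true per-file cost on your hardware, it is multiplied by the file count, which is why the small-file set loses so badly regardless of raw transfer speed.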

Even if you could use SSD storage for your file backups, you wouldn’t achieve anywhere near what you think your peak transfer rates should be with millions of small files. This is where large file-level restore leads to protracted downtime.

You must be particularly vigilant about identifying cumbersome file sets in your critical applications because you cannot overcome them with fast storage alone. The overhead of writing addresses to the file system tables, logs, journals, and so on becomes far greater than the data payload itself. The only way to circumvent this rapidly multiplying performance penalty is with a completely different technology: block-level restore, which we will cover in a future article.
