Ferro Backup System - The best Backup Software
Network Backup & Restore Software Solution for SMBs
 


Article reference number: FS-FBS-20121009-I01
Last proofread: 9 October 2012
Version: 1.0

How to Increase Backup Performance

This article shows how to increase backup server performance. Following these tips allows you to easily back up servers and computers containing large numbers of files.


Even if you buy a state-of-the-art computer to serve as a backup server, that alone may not be enough to ensure smooth, trouble-free backup of a number of computers at the same time. For maximum performance, both the operating system and the backup software must be appropriately configured. Without such optimisation, the computer may perform slowly and be frustrating to work with. In the case of a backup server, the satisfaction of the user - the administrator - may not be of great importance, but missed backup deadlines or slow report generation will certainly be cause for complaint.

Below, we provide a list of tips (in order of importance) which can increase the performance of a backup server.


1. Anti-virus software

An anti-virus monitor, or resident protection, greatly slows down saving data to and reading it from a disk. This matters because a badly configured anti-virus program squanders the advantages afforded by even the most modern hardware and the tips found below. For backup to run smoothly and error-free, it is crucial to configure anti-virus and anti-spyware programs correctly. It is also important not to forget about the Windows Defender software built in to the operating system. Information on how to configure resident protection, and whether it is worth installing on the backup server at all, is available in the article Anti-virus alert! How anti-virus software affects a computer.


2. Firewall

Some firewalls which analyse network traffic operate like anti-virus monitors. This type of software may reduce transfer rates and cause network errors or workstations to be disconnected from the backup server. To improve performance, Ferro Backup System needs only one port open for incoming connections - TCP 4531. All other ports on the backup server can be closed, and network services which might be targeted for attack can be shut off. With only one port open, protecting the computer is not difficult, so there is usually no need for additional security beyond what is built in to the operating system. The built-in firewall or IPSec rules will in most cases provide sufficient security without impairing backup system performance.
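As an illustration, on a Windows backup server the built-in firewall can be restricted to that single port from the command line. A minimal sketch (the rule name is only an example; run from an elevated command prompt):

```shell
:: Allow inbound connections on TCP 4531 - the only port Ferro Backup System
:: needs for incoming connections. The rule name "FBS Server" is illustrative.
netsh advfirewall firewall add rule name="FBS Server" ^
    dir=in action=allow protocol=TCP localport=4531

:: Then block all other unsolicited inbound traffic by default:
netsh advfirewall set allprofiles firewallpolicy blockinbound,allowoutbound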


3. Network disks (NAS)

Using mapped network disks as the main location for storing backups is not a good solution, for two reasons. First, network disks are generally much slower than fixed drives directly connected to the backup server. Second, the connection with such a disk may not remain stable in the long run, which can lead to errors in reading and saving data. Network disks, then, should only be used as secondary storage. More information on this topic can be found in the article Back-up to network disk (NAS).


4. USB disks

The situation is similar for external drives connected through a USB port. This type of drive usually operates more slowly than a fixed drive, especially when it uses a USB 2.0 port or when several devices are connected to the same USB controller.

Another problem is that the operating system assigns letters to disk drives automatically. It can happen that, at some point, the system assigns a USB drive a different letter than the one it had when the program was configured. As a result, the backup task will not be performed, because the disk set as the target is no longer available under the indicated letter. This discrepancy can be eliminated quite simply by configuring the operating system correctly, but it is important to remember to do so, because it is not the default behaviour.
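One way to pin the letter is sketched below with the built-in diskpart tool. The volume number and the letter Z: are examples only - run diskpart's "list volume" command first to identify the USB drive, and use an elevated command prompt:

```shell
:: Write a diskpart script that assigns a fixed letter to the backup volume,
:: then run it. "volume 3" and letter Z are examples - check "list volume" first.
(echo select volume 3 & echo assign letter=Z) > assign-letter.txt
diskpart /s assign-letter.txt
```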


5. File system format and cluster size

When preparing a hard drive for storing backups, the disk should be formatted with the NTFS file system and the largest possible cluster size should be selected. The NTFS format is essential for storing backup files larger than 4 GB. A large cluster size makes it possible to achieve maximum I/O performance - but speed is not the main reason cluster size should be large.
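For example, such a volume can be prepared with the built-in format command. The drive letter Z: is an example, and 64 KB is the largest cluster size NTFS supports on the Windows versions current at the time of writing - note that formatting erases the volume:

```shell
:: Format the backup volume as NTFS with 64 KB clusters (quick format).
:: WARNING: this erases everything on Z: - double-check the drive letter.
format Z: /FS:NTFS /A:64K /Q
```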

On an NTFS volume, as a file is saved, information on the location of its individual fragments is recorded in the Master File Table (MFT). When the cluster size is small, a greater number of MFT entries is needed to describe the location of the file. Using a large cluster size reduces possible backup fragmentation and avoids hitting the limit on the maximum number of fragments per file.
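A quick calculation shows the scale of the difference; the 50 GB archive size below is just an example, and the arithmetic is written in POSIX shell for illustration:

```shell
# Number of clusters needed to hold a 50 GiB backup archive
# at the smallest and the largest common NTFS cluster sizes.
size=$((50 * 1024 * 1024 * 1024))          # 53,687,091,200 bytes
echo "4 KiB clusters:  $((size / 4096))"   # 13,107,200 clusters
echo "64 KiB clusters: $((size / 65536))"  # 819,200 clusters
```

Sixteen times fewer clusters means correspondingly less location bookkeeping per file in the MFT.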


6. Several disks instead of just one

During simultaneous backup of many computers, the server performs many overlapping write operations. If the total amount of data written per second approaches the maximum transfer rate of a given disk (or controller), the backup rate of the individual computers declines. To increase backup performance, writes can be distributed over several hard drives: in the settings of the individual workstations, simply redefine the backup storage location, pointing different workstations at different drives. The performance gain achieved this way is similar to what can be gained by using 'striping', that is, a RAID 0 array.


7. Replication at a set time

Ferro Backup System comes equipped with a function for automatically replicating backups to an additional hard drive, tape drive or optical drive. Automatic replication is performed as soon as the backup is completed. If a backup and a replication of another computer's archive run at the same time, the overlapping read and write operations may slow the backup down. For this reason, it is worth considering switching from automatic replication to replication performed according to a time schedule. Scheduled replication runs at a fixed time, which can be set for a period when the backup server and the computer network are under a lighter load.


8. Breaking tasks down

Often, a great many files are stored on a server - easily more than 500,000. If such a server is backed up using the differential or delta method, creating the backup copies can pose a big challenge to the backup server, especially if the Rotation backups function with the Optimize to save disk space option is enabled. This puts a heavy load on the processor, the hard drives and RAM. In the case of RAM, not only may it run short, but the likelihood of memory fragmentation also increases. A good solution in such situations is to break one large backup task into several smaller ones, so that the archives generated are smaller and contain fewer files.


9. Periodic restart

Given the potential for RAM fragmentation as described above, and for hard drive fragmentation, it is a good idea to periodically restart the FBS Server program and to defragment the hard drive and compact the database.

The commands for closing the program (taskkill, net stop), defragmenting the drive (defrag, contig) and compacting the database (compactdb) can be placed in a single script or batch file. Such a script can be started automatically by the Windows Task Scheduler at defined intervals. Under a low backup server workload, one restart a month is quite enough; where the workload is heavier, one restart per week is recommended.
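A minimal sketch of such a batch file follows. The service name, paths and drive letter are assumptions - check them against your own installation - and "compactdb" stands for the FBS database-compacting step mentioned above:

```shell
:: fbs-maintenance.cmd - run monthly via the Windows Task Scheduler.
:: Service name, paths and drive letter below are examples only.

:: 1. Stop the FBS Server program.
net stop "Ferro Backup System Server"

:: 2. Defragment the drive that holds the backups
::    (or use Sysinternals contig on just the backup folder).
defrag D:

:: 3. Compact the FBS database.
"C:\Program Files\Ferro Backup System\compactdb.exe"

:: 4. Start the server again.
net start "Ferro Backup System Server"
```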


All rights reserved.
Copyright © 2000-2017 FERRO Software