Compress SQL Server Backups the Right Way
Databases have the nasty habit of growing to mammoth sizes with continued use over the years, and databases that grow too large become prone to problems such as corruption and mismanagement. It is therefore important to keep tabs on their growth before they reach the point of no return. That’s why organizations back up their databases frequently, and why some resort to shrinking entire databases (which, as we’ll see, is not a safe option). But just as databases grow, their backups grow with them, and they too need to be kept within limits. This is where database backup compression comes in.
Database backup compression traditionally comes at a hefty price, with services often billing per byte transferred in and out of their servers. For an organization whose database backups run to many gigabytes, that can mean thousands of dollars just to get the backups into a manageable, safer form.
This is the primary factor that prompts organizations to look for simpler, more budget-friendly ways to compress their SQL database backups. In this article, we’ll bring to light some neat ways to perform SQL Server backup compression with minimal risk, expense and hassle.
First things first – why is shrinking the database or its backup a bad idea?
We often come across forum questions from people looking for ways to shrink their databases. While shrinking is a popular technique for reducing database size, it is not a safe way to solve the problem.
When you initiate a shrink operation, you instruct SQL Server to release unused space from the database files. To do that, it moves data pages around inside the file, which causes heavy index fragmentation and hurts query performance; if the operation goes badly wrong, you can even find yourself roping in SQL backup recovery software at additional cost. So while shrinking may look like a quick fix, in the long run it does more harm than good.
So when you see a database backup approaching its size limit, opt for the safer compression approach explained in the next section rather than shrinking.
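For reference, a shrink looks deceptively simple, which is part of why it gets overused. The sketch below (with `MyDB` as a placeholder database name) shows the command this section advises against, followed by a query you can use afterwards to see the index fragmentation it leaves behind:

```sql
-- Illustrative only: the shrink command this section advises against.
-- MyDB is a placeholder database name.
DBCC SHRINKDATABASE (MyDB, 10);  -- target 10% free space after shrinking

-- Shrinking moves pages toward the front of the file, which scrambles
-- index order. Check the resulting fragmentation like this:
SELECT i.name AS IndexName,
       s.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('MyDB'), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id
 AND i.index_id  = s.index_id
WHERE s.avg_fragmentation_in_percent > 30;  -- heavily fragmented indexes
```

After a shrink it is common to see fragmentation jump sharply, which is exactly why the operation backfires on performance.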
Things to remember before compressing SQL server backups
Before you jump to the compression techniques, take note of these words of caution:
- Consider compressing your SQL server backup only if you have a second separate server that can act as a standby and that you can use to:
- Perform the restore of production backups
- Back up other, smaller databases
- Decrease the size of the restored production backups
Having a second server is emphasized because if you apply the techniques mentioned here without delegating the regular tasks to a standby server, you could end up with severely sluggish performance on production.
- You should test your compressed backups thoroughly and regularly. This ensures that after all the time and effort spent compressing them, your backups don’t end up unusable for restoring data. After all, being able to restore data is why backups are done in the first place, right?
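On the standby server, a verification pass might look like the following sketch (the disk path and the logical file names `MyDB`/`MyDB_log` are placeholders you’d replace with your own):

```sql
-- A quick integrity check of the backup media (does not restore data):
RESTORE VERIFYONLY
FROM DISK = N'D:\Backups\MyDB.bak';  -- placeholder path

-- The only real test is a full restore on the standby server,
-- followed by a consistency check of the restored copy:
RESTORE DATABASE MyDB_Verify
FROM DISK = N'D:\Backups\MyDB.bak'
WITH MOVE N'MyDB'     TO N'E:\Data\MyDB_Verify.mdf',
     MOVE N'MyDB_log' TO N'E:\Data\MyDB_Verify.ldf',
     REPLACE;

DBCC CHECKDB (MyDB_Verify) WITH NO_INFOMSGS;
```

`RESTORE VERIFYONLY` only confirms the backup file is readable and complete; the restore-plus-`CHECKDB` pass is what actually proves you can get your data back.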
Compressing SQL Server Backups – The steps!
Before we detail the steps, let’s briefly clarify the distinction between clustered and non-clustered indexes. A clustered index is just a definition of how the table itself is laid out on disk, so it doesn’t consume extra space beyond the table’s data. Non-clustered indexes, however, are additional copies of parts of the table, and hence take up extra disk space.
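To make the distinction concrete, here is a minimal illustration using a hypothetical `dbo.Orders` table:

```sql
-- A clustered index defines the physical order of the table itself;
-- the table *is* the index, so no extra copy of the data is stored.
CREATE CLUSTERED INDEX IX_Orders_OrderID
    ON dbo.Orders (OrderID);

-- A non-clustered index is a separate structure holding a copy of the
-- key columns plus row locators -- this is the part that costs space.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID);
```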
The reason we mention this will become clear in the first step:
- Drop all Non-Clustered indexes
You read that right: script out all the non-clustered indexes in the database, save their definitions to a table or stored procedure, create a stored procedure that loops through those definitions to recreate them later, and then execute the DROP INDEX commands.
The benefit of this step is direct: if the non-clustered indexes were occupying, say, 45% of the database space, the backup will be roughly 45% smaller. The drawback is that after a restore the database will take a little longer to become fully usable, since the stored procedure that recreates the indexes has to run first.
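As a starting point, the following query (a sketch; test it on the standby first) lists every non-clustered index and generates a matching `DROP INDEX` statement for each. Scripting out the full `CREATE INDEX` definitions for later recreation is the more involved half of the job and is commonly done from SSMS or a scripted helper:

```sql
-- List every non-clustered index and generate its DROP statement.
-- Indexes backing primary keys and unique constraints are excluded,
-- since those must be dropped via their constraints instead.
SELECT
    QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name) AS TableName,
    i.name AS IndexName,
    'DROP INDEX ' + QUOTENAME(i.name) + ' ON '
        + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
        + ';' AS DropStatement
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
WHERE i.type = 2                    -- 2 = NONCLUSTERED
  AND i.is_primary_key = 0
  AND i.is_unique_constraint = 0;
```

Save this output (together with the scripted-out CREATE definitions) into a table before running any of the DROP statements, so the recreation step has everything it needs.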
- Rebuild tables with 100% Fill Factor
Fill factor is the percentage of each page that SQL Server fills with data when it builds or rebuilds an index. Usually, some free space is left on each page to accommodate records added later, which would otherwise require a lot of page shuffling to insert. To compress the backup in this scenario, we want to pack as much data onto each page as possible. For this, you’ll need to rebuild all clustered indexes with a 100% fill factor.
This will take quite a lot of IO operations and thus again, it should be undertaken only if you have a standby server available.
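A sketch of the rebuild, first against a hypothetical `dbo.Orders` table and then as a query that generates the statement for every clustered index in the database:

```sql
-- Rebuild all indexes on one table at 100% fill factor
-- (dbo.Orders is a placeholder table name).
ALTER INDEX ALL ON dbo.Orders
REBUILD WITH (FILLFACTOR = 100, SORT_IN_TEMPDB = ON);

-- Or generate a rebuild statement for every clustered index:
SELECT 'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
     + QUOTENAME(SCHEMA_NAME(t.schema_id)) + '.' + QUOTENAME(t.name)
     + ' REBUILD WITH (FILLFACTOR = 100);' AS RebuildStatement
FROM sys.indexes AS i
JOIN sys.tables  AS t ON t.object_id = i.object_id
WHERE i.type = 1;                   -- 1 = CLUSTERED
```

`SORT_IN_TEMPDB = ON` pushes the sort work into tempdb, which spreads out some of the heavy IO this step generates.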
So it all boils down to the two steps above. At the end of them, you’ll have backups that can be close to half their original size. And you know what that means: half the bandwidth expense, half the long-term storage requirement and half the time needed for SQL Server backup recovery. That said, while these techniques are effective, your individual results will vary depending on how many non-clustered indexes you have and on the fill factor you were using.
Author Bio: Priyanka Chouhan is a technical writer at Stellar Data Recovery with 5 years of experience and has written several articles on SQL Server and SharePoint. In her spare time she loves reading and gardening.