Compression is a fairly niche topic, and not many people know how it works internally. I have always felt that, if nothing else, backup compression is a sure winner if you have large databases and a short window of time in which to do maintenance. When using compression you will sometimes want to debug and troubleshoot problems, so here are some tips for my readers!
Two performance monitor counters were added as part of the Data Compression feature. They are available under the SQL Server:Access Methods object.
Pages compressed/sec: This counter reports the number of data pages that are compressed by using PAGE compression. It covers all objects in the instance of SQL Server.

Page compression attempts/sec: This counter reports the number of pages evaluated for page-level compression. It includes pages that were not compressed because significant savings could not be achieved, and it covers all objects in the instance of SQL Server.
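If you prefer T-SQL over Performance Monitor, here is a minimal sketch that reads the same two counters from sys.dm_os_performance_counters. Note that on a named instance the object name prefix differs (e.g. MSSQL$INSTANCENAME:Access Methods), hence the LIKE pattern:

```sql
-- Read the two page-compression counters from within T-SQL.
-- The LIKE pattern tolerates default vs. named instance prefixes.
SELECT [object_name],
       counter_name,
       cntr_value
FROM   sys.dm_os_performance_counters
WHERE  [object_name] LIKE '%:Access Methods%'
       AND counter_name IN (N'Pages compressed/sec',
                            N'Page compression attempts/sec');
```

Both counters are cumulative since instance startup, so to get a per-second rate you would sample twice and take the difference.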
You can think of these counters as an instance-wide aggregate of the per-object information exposed in sys.dm_db_index_operational_stats.
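For the per-object view, a sketch like the following shows how often PAGE compression was attempted and succeeded for each index in the current database (the NULL parameters mean all objects, all indexes, and all partitions):

```sql
-- Per-object page-compression activity in the current database.
-- DB_ID() scopes the DMF to the current database; NULLs mean "all".
SELECT OBJECT_NAME(ios.[object_id])        AS table_name,
       ios.index_id,
       ios.page_compression_attempt_count,
       ios.page_compression_success_count
FROM   sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE  ios.page_compression_attempt_count > 0;
```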
Since compressing data carries an overhead, users need a way to estimate the space savings that can be attained by applying the different compression types to the various indexes of a table. This is what the sp_estimate_data_compression_savings stored procedure does. For the purposes of estimation, the procedure creates two tables in the tempdb database. One table contains sampled data from the actual table that you want to compress; the other is a compressed version of the first. The compression ratio is calculated by comparing the space used by these two temporary tables. At most 5,000 pages are sampled. Some factors can affect the compression ratios obtained during estimation and make them differ from the actual savings:
- Since the estimation process uses random sampling, actual compression ratios can vary depending on data distribution.
- If the original table or index is heavily fragmented, the compression process removes that fragmentation and gives the perception of a higher compression ratio, when in fact the additional space saving is merely due to defragmentation.
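A minimal sketch of running the estimate is shown below. The Sales schema and SalesOrderDetail table are placeholder names; substitute your own, and NULL for @index_id and @partition_number means all indexes and all partitions:

```sql
-- Estimate the savings of PAGE compression for a table.
-- Schema/table names here are placeholders.
EXEC sp_estimate_data_compression_savings
     @schema_name      = N'Sales',
     @object_name      = N'SalesOrderDetail',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'PAGE';
```

In the result set, compare the size_with_current_compression_setting(KB) and size_with_requested_compression_setting(KB) columns to judge whether the compression is worth applying.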
The msdb.dbo.backupset system table contains a history of backups, with columns describing each backup created. It includes the backup_size and compressed_backup_size columns, which can be used to determine how effective backup compression has been.
The msdb.dbo.backupmediaset system table contains a history of how backup media sets were formatted, including the is_compressed column, which tells you whether the backups on the media set are compressed or not.
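Putting the backupset columns to work, here is a sketch that computes the compression ratio for recent backups straight from msdb (a ratio of 1.0 means the backup was not compressed):

```sql
-- Compression effectiveness of the ten most recent backups.
-- backup_size / compressed_backup_size is the compression ratio.
SELECT TOP (10)
       bs.database_name,
       bs.backup_finish_date,
       bs.backup_size,
       bs.compressed_backup_size,
       bs.backup_size / bs.compressed_backup_size AS compression_ratio
FROM   msdb.dbo.backupset AS bs
ORDER  BY bs.backup_finish_date DESC;
```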
The RESTORE HEADERONLY command returns various columns describing each backup set on the file. The key columns for troubleshooting backup compression are:

- BackupSize – the size of the backup in bytes, including MTF (Microsoft Tape Format) headers.

- CompressedBackupSize – the compressed size of the backup in bytes. Dividing BackupSize by CompressedBackupSize gives the compression ratio.
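Inspecting a backup file directly looks like this sketch (the file path is a placeholder; point it at one of your own backup files):

```sql
-- Describe every backup set on a backup file.
-- The disk path below is a placeholder.
RESTORE HEADERONLY
FROM DISK = N'D:\Backups\AdventureWorks.bak';
-- In the result set, BackupSize / CompressedBackupSize is the
-- compression ratio for each backup set on the file.
```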
The RESTORE LABELONLY command returns information about how the backup media was formatted. The IsCompressed column reports whether the backups on the media were created with compression enabled.
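Checking the media header is a one-liner, sketched below with the same placeholder path:

```sql
-- Read the media header of a backup file.
-- The disk path below is a placeholder.
RESTORE LABELONLY
FROM DISK = N'D:\Backups\AdventureWorks.bak';
-- The IsCompressed column: 1 = compressed, 0 = not compressed.
```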
Hope you found this info useful, and feel free to leave your comments!
This entry was posted on Wednesday, January 5th, 2011 at 09:25 and is filed under Uncategorized.