Posts Tagged ‘Counters’

SQL Server 2012: Memory Counter Changes January 15th, 2013

Vinod Kumar

Memory Counter

In performance tuning exercises I rely heavily on querying Performance Monitor counters. This is one source of high-level troubleshooting that we advocate to our customers for a quick, effective start. Once we know where the problem is, we get into detailed analysis using other tools. Several memory counters that were present in the Buffer Pool objects – for example, Buffer Manager, Buffer Partition and Buffer Node – have been removed in SQL Server 2012. New counters that reflect the status of memory manager components have been added to the system monitor objects Memory Manager and Memory Node. It is important to know what was added and removed, because these counters help us in future troubleshooting.

Counters removed in SQL Server 2012

SQLServer:Buffer Manager

  1. Free pages
  2. Total pages
  3. Target pages
  4. Reserved pages
  5. Stolen pages
  6. AWE lookup maps/sec
  7. AWE stolen maps/sec
  8. AWE write maps/sec
  9. AWE unmap calls/sec
  10. AWE unmap pages/sec

SQLServer:Buffer Partition

  1. Free pages
  2. Free list requests/sec
  3. Free list empty/sec

SQLServer:Buffer Node

  1. Free pages
  2. Total pages
  3. Foreign pages
  4. Stolen pages
  5. Target pages

The following memory related counters have been added in SQL Server 2012:

SQLServer:Memory Manager

  1. Database Cache Memory (KB)
  2. Free Memory (KB)
  3. Reserved Server Memory (KB)
  4. Stolen Server Memory (KB)

SQLServer:Memory Node

  1. Database Node Memory (KB)
  2. Free Node Memory (KB)
  3. Foreign Node Memory (KB)
  4. Stolen Node Memory (KB)
  5. Target Node Memory (KB)
  6. Total Node Memory (KB)

SQLServer:Buffer Node

  1. Local Node page lookups/sec
  2. Remote Node page lookups/sec

These changes are important to know because during performance tuning I tend to use perfmon templates. In a recent test I found that some of these counters were missing from my analysis. Only on closer debugging did I understand that these counters had been removed because of the memory manager changes made in SQL Server 2012.
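The same counters can also be read from inside SQL Server through sys.dm_os_performance_counters, which saves opening perfmon for a quick check. A minimal sketch (the LIKE filters are illustrative; on a named instance the object_name prefix is MSSQL$&lt;instance&gt; rather than SQLServer):

```sql
-- Inspect the SQL Server 2012 memory counters without opening perfmon.
-- object_name and counter_name are padded with trailing spaces, hence RTRIM.
SELECT RTRIM([object_name]) AS [object],
       RTRIM(counter_name)  AS [counter],
       cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%Memory Manager%'
   OR [object_name] LIKE '%Memory Node%'
ORDER BY [object], [counter];
```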

Continue reading...


SQL Server Compression Troubleshooting January 5th, 2011

Vinod Kumar

Compression is a fairly niche topic, and not many people know the internals of how the whole process works. That said, I have always felt that backup compression alone is a sure winner if you have large databases and must do maintenance in a short window of time. While using compression you will sometimes want to debug and troubleshoot possible problems – so here are a few tips for my readers!

Two performance monitor counters are added as part of the Data Compression feature. They are available under the SQLServer:Access Methods object.

Pages compressed/sec: This counter provides the number of data pages that are compressed by using PAGE compression. This includes all objects in this instance of SQL Server.

Page compression attempts/sec: This counter provides the number of pages evaluated for page-level compression. This includes pages that were not compressed because significant savings could not be achieved. This includes all objects in the instance of SQL Server.

You can consider these as aggregated information about all object specific information represented in sys.dm_db_index_operational_stats.
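The per-object view of the same activity is exposed by sys.dm_db_index_operational_stats. A sketch for the current database (the DB_ID()/NULL arguments simply mean all objects, indexes and partitions):

```sql
-- Per-index page compression activity in the current database.
SELECT OBJECT_NAME(ios.[object_id]) AS table_name,
       ios.index_id,
       ios.page_compression_attempt_count,
       ios.page_compression_success_count
FROM sys.dm_db_index_operational_stats(DB_ID(), NULL, NULL, NULL) AS ios
WHERE ios.page_compression_attempt_count > 0;
```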


Since there is an overhead associated with compressing data, users and customers need a mechanism to estimate the space savings that the different compression types would achieve on the various indexes of a table. This is provided by the sp_estimate_data_compression_savings stored procedure. For the purposes of estimation, this stored procedure creates two tables in the tempdb database. One of the tables contains sampled data from the actual table that you want to compress. The other table is a compressed version of the first temporary table. The compression ratio is calculated by comparing the space used by these two temporary tables. The maximum number of pages that are sampled is 5000. Some factors can affect the compression ratio obtained during the estimation process and make it differ from the actual savings:

  1. Since the estimation process uses random sampling, actual compression ratios can vary depending on data distribution.
  2. If the original table or index is heavily fragmented, the compression process will remove all that fragmentation and give the perception of a higher compression ratio, when in fact the additional space saving is merely due to de-fragmentation.
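A minimal invocation of the estimation procedure, assuming a table Sales.Orders you want to evaluate (the schema and table names are placeholders):

```sql
-- Estimate PAGE-compression savings for all indexes of one table.
EXEC sp_estimate_data_compression_savings
     @schema_name      = 'Sales',
     @object_name      = 'Orders',
     @index_id         = NULL,   -- all indexes
     @partition_number = NULL,   -- all partitions
     @data_compression = 'PAGE';
```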

msdb.dbo.backupset system table

This system table contains a history of backups, with one row describing each backup set created. It contains the backup_size and compressed_backup_size columns, which can be used to determine how effective backup compression has been.
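A sketch of the classic ratio query against the history table (a ratio greater than 1 means compression helped):

```sql
-- Backup compression ratio per backup, most recent first.
SELECT database_name,
       backup_finish_date,
       backup_size,
       compressed_backup_size,
       CAST(backup_size / compressed_backup_size AS NUMERIC(10, 2)) AS compression_ratio
FROM msdb.dbo.backupset
WHERE compressed_backup_size > 0
ORDER BY backup_finish_date DESC;
```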

msdb.dbo.backupmediaset system table

This system table contains a history of how backup files were formatted, including the is_compressed column, which tells whether all of the files on the backup media set are compressed.

RESTORE HEADERONLY

This command returns various columns describing each backup on the file. The key columns for troubleshooting backup compression are:

  • BackupSize – reports the size of the backup in bytes, including MTF headers.
  • CompressedBackupSize – reports the compressed size of the backup. Dividing BackupSize by this value gives the compression ratio.
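A quick sketch, assuming a backup file at C:\Backups\db.bak (the path is a placeholder):

```sql
-- Inspect the backups on a file; compare BackupSize to CompressedBackupSize.
RESTORE HEADERONLY
FROM DISK = N'C:\Backups\db.bak';
```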

RESTORE LABELONLY

This command returns information about how the backup file was formatted. The IsCompressed column reports whether the backup file was created with or without compression enabled.
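And the media-set check, again with a placeholder path:

```sql
-- The IsCompressed column in the output shows whether the media
-- was formatted with backup compression enabled.
RESTORE LABELONLY
FROM DISK = N'C:\Backups\db.bak';
```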

Hope you found this info useful, and feel free to pass on your comments!

Continue reading...


SQL Server 2008 R2 – DMV Changes July 5th, 2010

Vinod Kumar

In various conversations, a majority of folks tell me there are not many changes in the R2 version. Well, there are tons of little things that make the product better and better, for Developers and ITPros alike. Among them are some subtle additions to the DMVs that give some interesting insights into the product itself. I will make another post to draw out interesting insights from these DMVs for you. Some of the DMV extensions are around –


  • processor_group

  • processor_group

  • online_scheduler_mask
  • processor_group

  • affinity_type
  • affinity_type_desc
  • process_kernel_time_ms
  • process_user_time_ms
  • time_source
  • time_source_desc

  • cpu_affinity_mask
  • online_scheduler_mask
  • processor_group

Surely, these don’t form the complete list but can be something to explore for the future.
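As a taste, several of the listed columns surface in sys.dm_os_sys_info on 2008 R2 (to the best of my knowledge – check your build's documentation). A minimal sketch:

```sql
-- Process-time and time-source columns added in SQL Server 2008 R2.
SELECT process_kernel_time_ms,
       process_user_time_ms,
       time_source_desc
FROM sys.dm_os_sys_info;
```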

Continue reading...


SQL Server – RECOMPILE doesn’t happen? July 3rd, 2010

Vinod Kumar

This is a wonderful topic and very close to my heart, since it is part of performance optimization techniques. If you have had a chance to read the TechNet whitepaper “Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005”, you will understand where I am really coming from. That whitepaper has a lot of interesting internal information hidden between the lines. This post is not meant to mimic or copy parts of it, but to take a slightly different route: when RECOMPILE doesn’t happen. The whitepaper exhaustively outlines when recompilation happens; there are, however, some conditions under which it can be skipped by the optimizer. Let me call out some of them here –

There are specific conditions when the recompilation gets short circuited.

– When the plan is a “trivial plan.” A trivial plan results when the query optimizer determines that given the tables referenced in the query and the indexes existing on them, only one plan is possible. Obviously, a recompilation would be futile in such a case. A query that has generated a trivial plan may not always generate a trivial plan, of course. For example, new indexes might be created on the underlying tables and so multiple access paths become available to the query optimizer.

– When a query contains the “KEEPFIXED PLAN” hint, its plan is not recompiled for plan optimality-related reasons.

– When all of the tables referenced in the query plan are read-only, the plan is not recompiled.
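The hint case can be sketched as follows (the table and predicate are placeholders):

```sql
-- Statistics changes on Sales.Orders will not trigger an
-- optimality-related recompile of this statement.
SELECT OrderID, OrderDate
FROM Sales.Orders
WHERE CustomerID = 42
OPTION (KEEPFIXED PLAN);
```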

Some of these are really interesting conditions for understanding the RECOMPILE architecture, especially on heavy workloads. One of the main readings I take in any perf-tuning exercise is the recompilation perfmon counter. From SQL Server 2008 onwards we introduced statement-level recompilation, which will surely form another long post some other day.
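The same counters are available from T-SQL as well. A sketch reading the compilation counters (cntr_value for these is a cumulative total since instance start, so take deltas over an interval to get a per-second rate):

```sql
-- Cumulative compilation/recompilation counts since instance start.
SELECT RTRIM(counter_name) AS [counter], cntr_value
FROM sys.dm_os_performance_counters
WHERE [object_name] LIKE '%SQL Statistics%'
  AND counter_name IN ('SQL Compilations/sec', 'SQL Re-Compilations/sec');
```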

On the performance tuning topic, I also recommend reading this whitepaper. Though a bit dated, it is still very relevant even today for SQL Server 2008 versions.

Continue reading...


Performance Monitoring with SQL 2008 January 29th, 2008

Vinod Kumar

Check out my blog post on TechNet on Performance Monitoring with SQL Server 2008 here. Feel free to pass on your comments.

Continue reading...