Archive for February, 2011

T-SQL Compound Assignment operators February 21st, 2011

Vinod Kumar

The SQL Server 2008 T-SQL language has been extended to allow compound assignment operators (addition assignment, subtraction assignment, and so on). These can greatly reduce the amount of code and may boost productivity and code readability. Compound assignments may be used with SET, SELECT, and anywhere else an assignment is allowed, such as in an UPDATE statement.

I personally love to see these getting used, but at the same time I feel this might turn our good old T-SQL champs into C# programmers for SQL Server. Still, there is a lot of elegance to be found in this new era of programming.


+=   Add the two values and assign

-=   Subtract the value and assign

*=   Multiply the values and assign

/=   Divide by the value and assign

%=   Modulo and assign

&=   Bitwise AND and assign

^=   Bitwise XOR and assign

|=   Bitwise OR and assign

Some of the typical examples of usage are as below:

DECLARE @var1 VARCHAR(30) = 'Hello'
SELECT @var1 += ' World'
SELECT @var1

-- within an UPDATE statement's SET clause:
SET    c1 += 10 
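As a quick sketch of the remaining operators (the temp table and variable names here are illustrative, not from the original post), the bitwise forms work with SET, and the arithmetic forms work inside an UPDATE:

```sql
-- Bitwise compound assignment on a variable:
DECLARE @flags INT = 5        -- binary 0101
SET @flags &= 4               -- 0101 AND 0100 = 0100 (4)
SET @flags |= 1               -- 0100 OR  0001 = 0101 (5)
SET @flags ^= 5               -- 0101 XOR 0101 = 0000 (0)
SELECT @flags                 -- returns 0

-- Compound assignment in an UPDATE (hypothetical temp table):
CREATE TABLE #orders (qty INT, price MONEY)
INSERT INTO #orders VALUES (10, 2.50)
UPDATE #orders SET qty += 5, price *= 2   -- qty becomes 15, price becomes 5.00
SELECT qty, price FROM #orders
DROP TABLE #orders
```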

So how many of you use this syntax in your production code? Would love to hear from you.



Performance Tuning–Data Type Mismatch February 16th, 2011

Vinod Kumar

I was recently in a customer discussion, and something strange came up during the code review. As a best practice, when calling a stored procedure you should not send data in the wrong data types: this induces an implicit conversion that can cause performance bottlenecks inside SQL Server. These basics are often forgotten and missed as part of the process.

For illustration purposes I used a very simple table:

CREATE TABLE test (
     col1 VARCHAR(10),
     col2 CHAR(200)
)
CREATE CLUSTERED INDEX idx1 ON test (col1)

The stored procedure is also kept simple. The catch here is that the parameter is NVARCHAR while the column is VARCHAR. This confusion is common in many places: in the name of internationalization, I have seen people just use NVARCHAR for the parameters and forget the column data types while writing the SPs.

CREATE PROCEDURE Proc_test_mismatch @MyParam NVARCHAR(10)
AS
  SELECT *
  FROM   test
  WHERE  col1 = @MyParam 

SET STATISTICS PROFILE ON
EXEC Proc_test_mismatch N'1000000'
SET STATISTICS PROFILE OFF 

Executing the above SP produces an interesting plan that one needs to keep an eye on. The hidden performance deterrent is the implicit CONVERT happening inside SQL Server.

       |–Clustered Index Scan(OBJECT:([tempdb].[dbo].[Test].[idx1]), WHERE:(CONVERT_IMPLICIT(nvarchar(10),[tempdb].[dbo].[Test].[col1],0)=[@MyParam]))

You might ask what the big drawback of this really is. Behind the scenes, there are two problems to anticipate.

  1. First, when SQL Server has to convert a column, it can no longer use an index seek on that column.  
  2. Additionally, the cardinality estimate will be off as well, because the optimizer can no longer use the statistics on the column being converted.

So something as simple as not using the "N" in front of the data type can help you manage and improve performance, simply because that is how the underlying data is designed. The solution is as simple as possible.

  1. If it’s a stored procedure, the easiest solution is to change the stored procedure parameters to match the data type of the column the parameter is comparing against.
  2. If the columns of two tables don’t match when joining, one of the columns’ data types needs to be changed to match.
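For the stored procedure above, the first option is a one-line change to the parameter declaration (a sketch, assuming the test table and procedure shown earlier):

```sql
-- Change the parameter type to VARCHAR to match col1's declared type,
-- so no implicit conversion is needed and the index on col1 can be seeked.
ALTER PROCEDURE Proc_test_mismatch @MyParam VARCHAR(10)
AS
  SELECT *
  FROM   test
  WHERE  col1 = @MyParam
```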

Once we make these simple changes, the execution plan becomes clean. Note the SEEK operation when the same SP is executed after the changes.

  |–Clustered Index Seek(OBJECT:([tempdb].[dbo].[Test].[idx1]), SEEK:([tempdb].[dbo].[Test].[col1]=[@MyParam]) ORDERED FORWARD)

So, just go and check whether you have such mistakes in your product’s code. And drop me a line here if you find one. All the best!!!
