
Posts Tagged ‘InterOp’

SQL Collations mismatch after Upgrade / Consolidation February 14th, 2013

Vinod Kumar

Recently I was contacted by one of our customers with an interesting query. They had upgraded from a previous version of SQL Server, and the DBA didn't take the necessary precautions around collation on the new server. Now they had to deal with a mismatch between the SQL_Latin1_General_CP1_CI_AS and Latin1_General_CI_AS collations on their new database instance versus their old databases. This was a rare scenario, and I had to help them.

Most DBAs forget to select an appropriate collation at the point of installation. This is a typical recipe for disaster, and I thought I would write about it in this blog post for people to use as a reference.

What can go wrong here?

  1. Usage of temp tables can start returning wrong results or, worse, break outright.
  2. In a consolidation scenario, hundreds of databases now live in one instance, and application changes may need to be enforced.
  3. If the collation was wrongly selected as case-sensitive, stored procedures and applications can break as soon as the switch happens.

In my decade-plus of working with SQL Server, I have hardly ever seen application developers use a specific collation in their code. In rare scenarios with multiple localized content sets, I have seen people use the COLLATE keyword in their code. That, as I said, is a one-in-a-million sort of scenario. I would love to hear if you have used such code in your application.

Do look at the bigger checklist for SQL Server 2012 if you are planning a similar upgrade. The same list can be useful when moving from other versions too.

What went wrong? Why?

Most DBAs don't pay attention to collation until it is too late. The crux of the issue is that SQL Server's metadata collation follows the database collation, unlike most other database products: the metadata or system tables are collated based on the database collation. So in a database with a case-sensitive collation, the metadata is case-sensitive (object identifiers, names of tables, sorting of object names and so on).
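A quick way to see this bite; a minimal sketch, assuming you can create a throwaway database (names here are illustrative, not from the customer's environment):

CREATE DATABASE CollationDemo COLLATE Latin1_General_CS_AS;
GO
USE CollationDemo;
GO
CREATE TABLE dbo.Customers (Id INT);
GO
SELECT * FROM dbo.customers;  -- fails: with a case-sensitive catalog, 'customers' does not resolve to 'Customers'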

In SQL Server 2012, we introduced the concept of a fixed catalog collation with partially contained databases (CDB). Within the context of a partially contained database, the metadata collation is always fixed to a specific collation, irrespective of the database collation or of how the database was created on an instance with different defaults.

If you haven't read about Contained DB, I highly recommend reading up on it, as it has multiple advantages:

  1. Have case-sensitive (or other) collation behavior for user data without having to endure a case-sensitive catalog.
  2. Consolidate databases from servers with differing collations without having to rework the code to add explicit collation comparisons.
  3. And of course, move a database from one server to another and keep the tempdb collation behavior the same.

Know your collation

In SQL Server 2012, you can use the catalog_default keyword when defining columns for temporary tables. This prevents collation conflicts when joining temporary tables with user table data or catalog view data. In older versions, you can use the database_default keyword.

See the example below (run it in a plain database and in a partially contained database to see the difference in behavior):

DECLARE @myVar SQL_VARIANT = 'Test' COLLATE catalog_default;  -- 'Test' is a placeholder literal; COLLATE needs a character expression

SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

SET @myVar = 'Test' COLLATE database_default;

SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

Basically, in a partially contained database catalog_default returns the fixed metadata collation, while in a plain database it returns the database collation (the COLLATE property of the database). The database_default keyword always returns the database collation. If you use these keywords, you do not need to specify a collation name explicitly in your temporary table column definitions.

Using COLLATE database_default for column definitions in temporary tables avoids the mismatch issue on older versions, as sketched below. Alternatively, you can use table variables, where a column definition defaults to the database collation if the COLLATE clause is absent.
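A minimal sketch of that pattern; the table and column names are illustrative, not from the original post:

CREATE TABLE #Orders
(
    OrderRef VARCHAR(20) COLLATE database_default  -- inherit the current database collation, not tempdb's
);

-- Joining against a user table now works even when the instance (tempdb)
-- collation differs from the database collation:
SELECT o.OrderRef
FROM #Orders AS o
JOIN dbo.Orders AS u  -- hypothetical user table
    ON o.OrderRef = u.OrderRef;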

Final words

Though we discussed workarounds here, this is postmortem work. I highly recommend that if you are a DBA, you ask what the collation should be. Ask what the application design requirements are. Ask if Contained DB is being used. And if you have made the mistake of accepting the defaults, it is never too late to reinstall SQL Server afresh, as a worst case, to avoid all this heartburn :). Please feel free to share your stories with me if anything interesting like this has happened in your organization.


SQL Server 2012: Filestream Multiple Containers February 7th, 2013

Vinod Kumar

The FILESTREAM feature was introduced in SQL Server 2008. SQL Server 2012 introduces an improvement: we can now add more than one container to the same FILESTREAM filegroup. Since more than one operating system directory can now be part of the same filegroup, we can improve the storage (physical database) design of a database without having to partition a table creatively. This can improve IO, which can be spread across multiple directories / volumes / physical devices.

First, let us create our database with FILESTREAM. Below we are creating a single FILESTREAM filegroup with a container at C:\DATA\FS1.

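The original screenshot is lost; here is a hedged reconstruction of what it likely showed (database and file names are illustrative):

CREATE DATABASE FileStreamDemo
ON PRIMARY
    (NAME = N'FileStreamDemo_Data', FILENAME = N'C:\DATA\FileStreamDemo.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = N'FS1', FILENAME = N'C:\DATA\FS1')  -- a FILESTREAM container is a directory, not a file
LOG ON
    (NAME = N'FileStreamDemo_Log', FILENAME = N'C:\DATA\FileStreamDemo.ldf');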

Here is the great enhancement. Let us add a second location to the same filegroup.

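Again reconstructing the lost screenshot as a sketch, with C:\DATA\FS2 as an illustrative second container in the same filegroup:

ALTER DATABASE FileStreamDemo
ADD FILE (NAME = N'FS2', FILENAME = N'C:\DATA\FS2')
TO FILEGROUP FSGroup;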

We will next create a table with FILESTREAM data. This is enabled via the VARBINARY(MAX) data type with the FILESTREAM attribute; the table also needs a UNIQUEIDENTIFIER ROWGUIDCOL column.

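A minimal sketch of such a table (names are illustrative):

CREATE TABLE dbo.FSDemo
(
    Id   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Blob VARBINARY(MAX) FILESTREAM NULL  -- the FILESTREAM column itself
);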

The final step is to add rows to the table. We will insert some 10,000 rows using a WHILE loop.

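A sketch of that loop against the assumed table above:

DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.FSDemo (Blob)
    VALUES (CAST(REPLICATE('X', 100) AS VARBINARY(MAX)));  -- small dummy payload per row
    SET @i += 1;
END;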

The FILESTREAM file data distribution algorithm is mainly round robin, but with a proportional fill element, in an effort to parallelize the IO activity. What this means is that when a new, empty container is added, it may be favored more for INSERTs (while still avoiding all INSERTs being targeted at the same container). Once the new container catches up with the old ones in terms of size, INSERTs are again spread fairly across containers.

Fine print with Filestream

  1. You can now run DBCC SHRINKFILE with the EMPTYFILE option against a file representing a container, to migrate all the data from that file to the other files (containers) within the same filegroup (see the sketch after this list).
  2. sp_filestream_force_garbage_collection: in SQL Server 2008 and SQL Server 2008 R2 you could not force the garbage collection of filestream. A new system stored procedure called "sp_filestream_force_garbage_collection" has been added in SQL Server 2012, which can force garbage collection to run from a user connection.
  3. This is especially useful in scenarios where SHRINKFILE was run on a filestream container but not all files have been garbage collected yet, so you cannot remove it. In such cases, you can run filestream GC manually to clean it up and then remove the logical file representing this directory from the database.
  4. The sys.database_files catalog view has a column named size, which in the case of FILESTREAM files indicates the size of the folder.
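A hedged sketch of that empty-and-remove sequence, reusing the illustrative names from above:

DBCC SHRINKFILE (FS1, EMPTYFILE);  -- migrate data to the other containers in the filegroup

EXEC sp_filestream_force_garbage_collection @dbname = N'FileStreamDemo';  -- force filestream GC (SQL Server 2012)

ALTER DATABASE FileStreamDemo REMOVE FILE FS1;  -- drop the now-empty container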


MS Access functions inside SQL Server January 23rd, 2013

Vinod Kumar

Many times an enterprise application has its roots in a smaller application developed using a small-scale backend database service. Microsoft Access is a premier Rapid Application Development tool for building rich client applications using the JET engine. When such an application needs to be migrated to the Microsoft SQL Server engine or SQL Azure, there is a challenge in porting the code. Since the MS Access database provides certain expression calculations and built-in functions, migrating an MS Access based application can become a non-trivial exercise. To assist with this, SQL Server 2012 has introduced a few built-in functions that are syntactically and semantically the same as their MS Access counterparts.

CHOOSE

Returns the item at the specified index from a list of values.

Usage: CHOOSE ( index, val_1, val_2 [, val_n ] )
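For example, picking the third value from a list:

SELECT CHOOSE(3, 'Jan', 'Feb', 'Mar', 'Apr');  -- returns 'Mar'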

Post from SQLAuthority.

DATEFROMPARTS

DATETIME2FROMPARTS

DATETIMEFROMPARTS

DATETIMEOFFSETFROMPARTS

TIMEFROMPARTS

SMALLDATETIMEFROMPARTS

These functions return a date or datetime value of the corresponding type, built from the specified parts (year, month, day and so on).

Usage: DATEFROMPARTS ( year, month, day )

  1. DATETIME2FROMPARTS ( year, month, day, hour, minute, seconds, fractions, precision )
  2. TIMEFROMPARTS ( hour, minute, seconds, fractions, precision )
  3. DATETIMEFROMPARTS ( year, month, day, hour, minute, seconds, milliseconds )
  4. DATETIMEOFFSETFROMPARTS ( year, month, day, hour, minute, seconds, fractions, hour_offset, minute_offset, precision )
  5. SMALLDATETIMEFROMPARTS ( year, month, day, hour, minute )
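For example, constructing values from parts (the fractions argument is interpreted against the precision argument):

SELECT DATEFROMPARTS(2013, 1, 23);                          -- 2013-01-23
SELECT DATETIME2FROMPARTS(2013, 1, 23, 10, 30, 0, 500, 3);  -- 2013-01-23 10:30:00.500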

Post from SQLAuthority.

EOMONTH

Returns the last day of the month that contains the specified date, with an optional offset. The offset is in months (not days). This is a good function for many Line Of Business and reporting applications; it makes it quite easy to find the last day of the month 3 months ago or 3 months from now.

Usage: EOMONTH ( start_date [, month_to_add ] )
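For example:

SELECT EOMONTH('2013-02-07');     -- 2013-02-28
SELECT EOMONTH('2013-02-07', 3);  -- 2013-05-31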

Post from SQLAuthority.

FORMAT

Returns a value formatted with the specified format and optional culture. The recommended use of this function is locale (culture) aware formatting of date/time and number values as strings. For general data type conversions, CAST and CONVERT are still the preferred built-in functions.

Usage: FORMAT ( value, format [, culture ] )

This function relies on the CLR being installed. If the CLR is not installed (or cannot be loaded in SQL Server 2012), the call to the FORMAT function will fail. Also, executing this function will load the CLR within SQL Server if it is not already loaded.
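For example, formatting the same kind of value for different cultures:

SELECT FORMAT(GETDATE(), 'dd/MM/yyyy', 'en-GB');  -- e.g. 23/01/2013
SELECT FORMAT(123456789.5, 'N2', 'de-DE');        -- 123.456.789,50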

Details from SQLAuthority.

IIF

Returns one of two values, depending on whether the Boolean expression evaluates to true or false. IIF is a shorthand way of writing a CASE expression and is internally converted to one. All the limitations of the CASE expression also apply to IIF.

Usage: IIF ( boolean_expression, true_value, false_value )
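For example:

SELECT IIF(10 > 5, 'greater', 'not greater');  -- returns 'greater'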

Post from SQLAuthority.

TRY_CONVERT

Returns a value cast to the specified data type if the cast succeeds; otherwise, returns NULL. The function can still return an error if the cast to the destination data type is explicitly disallowed. For example, if you use this function to cast an INT value to XML, it will return an error (and not just NULL).

Usage: TRY_CONVERT ( data_type [ ( length ) ], expression [, style ] )
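For example:

SELECT TRY_CONVERT(INT, '123');  -- returns 123
SELECT TRY_CONVERT(INT, 'abc');  -- returns NULL instead of raising an error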

Post from SQLAuthority.

LOG/LOG10

LOG returns the natural logarithm of the specified float expression and, new in SQL Server 2012, optionally takes a base as input. LOG10 returns the base-10 logarithm of the specified float expression.

Usage: LOG ( float_expression [, base ] )

LOG10 ( float_expression )
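For example:

SELECT LOG(8, 2);    -- 3 (base-2 logarithm)
SELECT LOG10(1000);  -- 3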

CONCAT

Returns a string that is the result of concatenating two or more string values. This is simply a string concatenation function. The return type depends on the input types; typically, the return value is of the highest-precedence data type. For example, if multiple varchar and nvarchar values of fewer than 4,000 bytes are passed, the final output is nvarchar of up to 4,000 bytes in size (and may be truncated). On the other hand, if at least one value is nvarchar(max) or varchar(max), the final output is nvarchar(max). Note that your arguments can be of non-string data types as well, as long as they can be implicitly converted to a string data type; NULL arguments are treated as empty strings.

Usage: CONCAT ( string_value1, string_value2 [, string_valueN ] )
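For example, mixing string and non-string arguments:

SELECT CONCAT('Order-', 42, NULL, '-A');  -- returns 'Order-42-A'; NULL becomes an empty string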

Details from SQLAuthority.

PARSE/TRY_PARSE

Returns the result of a string expression, translated to the requested data type. The PARSE function returns an error if the cast/translation fails, while TRY_PARSE returns NULL if the cast fails. This can be a useful function for locale (culture) aware expression evaluation.

Usage: TRY_PARSE ( string_value AS data_type [ USING culture ] )

PARSE ( string_value AS data_type [ USING culture ] )
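For example:

SELECT PARSE('345,98' AS MONEY USING 'de-DE');  -- 345.98; the comma is the German decimal separator
SELECT TRY_PARSE('abc' AS INT);                 -- returns NULL instead of raising an error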

Details from SQLAuthority.


SQL Server 2012: Memory Counter Changes January 15th, 2013

Vinod Kumar

Memory Counter

In performance tuning exercises, I am used to querying and using performance counters at large. This is one source of high-level troubleshooting that we advocate to our customers as a quick, effective start. Once we know where the problem is, we get into detailed analysis using other tools. Several memory counters that were present in the buffer pool objects (for example Buffer Manager, Buffer Partition and Buffer Node) are removed in SQL Server 2012. New counters that reflect the status of memory manager components have been added to the system monitor objects Memory Manager and Memory Node. It is key to know what was added, because these counters help us in future troubleshooting.

Counters removed in SQL Server 2012

SQLServer:Buffer Manager

  1. Free pages
  2. Total pages
  3. Target pages
  4. Reserved pages
  5. Stolen pages
  6. AWE lookup maps/sec
  7. AWE stolen maps/sec
  8. AWE write maps/sec
  9. AWE unmap calls/sec
  10. AWE unmap pages/sec

SQLServer:Buffer Partition

  1. Free pages
  2. Free list requests/sec
  3. Free list empty/sec

SQLServer:Buffer Node

  1. Free pages
  2. Total pages
  3. Foreign pages
  4. Stolen pages
  5. Target pages

The following memory-related counters have been added in SQL Server 2012:

SQLServer:Memory Manager

  1. Database Cache Memory (KB)
  2. Free Memory (KB)
  3. Reserved Server Memory (KB)
  4. Stolen Server Memory (KB)

SQLServer:Memory Node

  1. Database Node Memory (KB)
  2. Free Node Memory (KB)
  3. Foreign Node Memory (KB)
  4. Stolen Node Memory (KB)
  5. Target Node Memory (KB)
  6. Total Node Memory (KB)

SQLServer:Buffer Node

  1. Local Node page lookups/sec
  2. Remote Node page lookups/sec
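To check which of these your instance actually exposes, you can query the performance counter DMV; a minimal sketch (on named instances the object name carries an MSSQL$<instance> prefix):

SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Memory Manager%'
   OR object_name LIKE '%Memory Node%';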

These were important to know because, during performance tuning, I have a tendency to use perfmon templates. In a recent test, I found that some of these counters were different and some were missing from my analysis. Only on closer debugging did I understand that some of these counters had been removed because of the memory manager changes made in SQL Server 2012.


SQL Server 2012: Database Tuning Advisor enhancements January 14th, 2013

Vinod Kumar

The most important change in the Database Engine Tuning Advisor (DTA) is that SQL Server 2012 allows you to use the query plan cache as a DTA workload. By doing this, we can avoid having to manually create a workload from a script or trace file. When we specify the plan cache as the DTA workload, the Database Engine Tuning Advisor selects the top 1,000 events to use for analysis. Currently, the number of events cannot be changed via the DTA graphical interface (DTASHELL.EXE), but it can be changed using the -n option at the command line of DTA.EXE.
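A hedged sketch of the command-line route (server, database, and session names are illustrative; verify the switches with dta -? on your build):

dta -S MyServer -E -D SalesDb -ip -n 2000 -s PlanCacheTuning

Here -ip specifies the plan cache as the workload, -n raises the event count beyond the default 1,000, and -s names the tuning session.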


  1. Launch Database Engine Tuning Advisor, and log into an instance of SQL Server.
  2. On the General tab, enter a name in Session name to create a new tuning session.
  3. Select Plan Cache as the workload option. Database Engine Tuning Advisor selects the top 1,000 events from the plan cache to use for analysis.
  4. Select the database or databases that you want to tune, and optionally from Selected Tables, choose one or more tables from each database. To include cache entries for all databases, from Tuning Options, click Advanced Options and then check the Include plan cache events from all databases check box.
  5. Check the Save tuning log check box to save a copy of the tuning log. Clear the check box if you do not want to save a copy of the tuning log.
  6. You can view the tuning log after analysis by opening the session and selecting the Progress tab.
  7. Click the Tuning Options tab and select from the options listed there.
  8. Click Start Analysis.

Final words

Personally, I am not a big fan of the DTA tool, but I have seen a lot of people use it because they don't understand how SQL Server works when it comes to creating indexes. In such cases, it is a great start and a good tool to guide us. Use the recommendations DTA gives, but analyze why each recommendation was made and look at your workload. Make sure the recommendations are not skewed by a month-end reporting workload or some batch process that runs once in a while. I am sure that with the plan cache option, there will be less friction in generating a workload for DTA.
