

Is SQL datetime2/datetimeoffset conversion deterministic? May 28th, 2013

Vinod Kumar

Recently, during a customer code review, I came across some interesting logic involving the datetime2 and datetimeoffset datatypes. Here is the problem statement for your reference:

The customer was trying to create a view containing a computed column that converts a datetime2(7) column to datetimeoffset(7). The CAST expression looks like:

CAST((CONVERT(varchar(255), DATEADD(hh, +8, RawLog.[Created]), 126) + '+08:00') AS datetimeoffset(7)) AS [Created_GMTPlus8]

This returned an error when an index was created on the view:

Cannot create index on view "MyVideo..RawLogView". The view contains a convert that is imprecise or non-deterministic.

So the customer asked: how do we make this deterministic? I would love to learn if you have any other options.

Possible Solution

Here are a couple of possible solutions to this problem. The root cause is that casting a string to datetimeoffset(7) is not deterministic; we need to use CONVERT with style 126. Let me show how we can rewrite this query to achieve the same result. Note that in Option 2 below, we use the SWITCHOFFSET built-in to convert a datetime UTC value to a datetimeoffset and then switch the time zone to +08:00. Let us see the code now:

USE tempdb

GO 

-- If the table already exists, drop it.
IF OBJECT_ID('dbo.rawlog') IS NOT NULL
    DROP TABLE dbo.rawlog

GO 

CREATE TABLE rawlog (id INT IDENTITY(1,1) PRIMARY KEY,

            created DATETIME2(7))

GO

INSERT INTO rawlog VALUES(SYSDATETIME())

GO

We have created our dummy table with one value. Let us check whether the string conversion returns the same result as SWITCHOFFSET.

-- Make sure SWITCHOFFSET returns the same value as we expect

SELECT created,
       CAST(CONVERT(VARCHAR(255), DATEADD(hh, +8, [Created]), 126) + '+08:00' AS DATETIMEOFFSET(7)) AS CASTStringValue,
       SWITCHOFFSET(CAST(created AS DATETIMEOFFSET), '+08:00') AS result
FROM dbo.rawlog

GO

Our output looks like:

[Screenshot: query output showing the CAST string value and the SWITCHOFFSET result matching exactly]

Our values are exactly the same. Let us create the view next, and we will then check if it can be indexed as per our problem statement.

CREATE VIEW v1 WITH SCHEMABINDING AS

  SELECT CAST(CONVERT(VARCHAR(255), DATEADD(hh, +8, [Created]), 126) + '+08:00' AS DATETIMEOFFSET(7)) AS result

  FROM dbo.rawlog

GO

-- Returns 0 because the cast from string to datetimeoffset is not deterministic

SELECT OBJECTPROPERTY(OBJECT_ID('v1'), 'IsIndexable')

GO

Option 1:

Next, let us use the proper conversion function: first add the offset with DATEADD, then convert the string into DATETIMEOFFSET(7) using CONVERT with style 126.

CREATE VIEW v2 WITH SCHEMABINDING AS

  SELECT CONVERT(DATETIMEOFFSET(7), CONVERT(VARCHAR(255), DATEADD(hh, +8, [Created]), 126) + '+08:00', 126) AS result

  FROM dbo.rawlog

GO

-- Returns 1 because CONVERT with style 126 is deterministic

SELECT OBJECTPROPERTY(OBJECT_ID('v2'), 'IsIndexable')

GO

Option 2:

Other than using CONVERT with style 126, we can also use the SWITCHOFFSET function, which is also deterministic:

CREATE VIEW v3 WITH SCHEMABINDING AS

  SELECT SWITCHOFFSET(CAST(created AS DATETIMEOFFSET), '+08:00') AS result FROM dbo.rawlog

GO

-- Returns 1 because SWITCHOFFSET is deterministic

SELECT OBJECTPROPERTY(OBJECT_ID('v3'), 'IsIndexable')

GO

These are just two of the methods to achieve the goal. Do let me know if you use indexed views in your code today. What are the scenarios you use them in? Have you encountered any of these determinism restrictions in your coding? Do let me know; I would love to learn from you too.



 

Contained Database and Windows User Group May 20th, 2013

Vinod Kumar

We discussed at length what contained databases are, how to configure them, and the steps involved in our previous post (Contained Databases inside SQL Server 2012). In this post let me show two concepts:

  1. Contained Database with Windows Users – most of the time I have seen people demonstrate contained databases with SQL Authentication. In reality, this applies to Windows users too.
  2. Set a default schema for a group – this is an enhancement in SQL Server 2012 that lets us set a default schema for Windows user groups too.

Create Windows Group

Let us first create the Windows group and add two users to it. In our example, I am adding Balmukund and Pinal to this group. This is done under Computer Management –> Local Users and Groups –> Groups.

[Screenshot: Computer Management showing the new local group with Balmukund and Pinal as members]
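If you prefer the command line over the Computer Management UI, something along these lines should create the same group and membership. This is only a sketch: it assumes the group is named CDBUsers, that local accounts for Balmukund and Pinal already exist, and that it is run from an elevated command prompt.

rem Create the local group and add the two demo accounts to it
net localgroup CDBUsers /add
net localgroup CDBUsers Balmukund /add
net localgroup CDBUsers Pinal /add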

Create Contained Database

The steps are the same as described in the blog post Contained Databases inside SQL Server 2012.

SP_CONFIGURE 'show advanced options', 1

GO

RECONFIGURE

GO

SP_CONFIGURE 'contained database authentication', 1

GO

RECONFIGURE

GO

 

CREATE DATABASE cdb1 CONTAINMENT=Partial

GO

The above statements create the database CDB1.

Create Users and Set Default Schema

The first step is to create the users and then grant permissions to access our database. Change the names appropriately to suit your needs.

USE CDB1

 

-- Create a database user for the Windows group

CREATE USER [MTC-Vinod\CDBUsers]

 

-- Give permissions to the group

ALTER ROLE DB_DATAREADER ADD MEMBER [MTC-Vinod\CDBUsers];

ALTER ROLE DB_DATAWRITER ADD MEMBER [MTC-Vinod\CDBUsers];

Next step will be to create the schema and give permission to the group:

CREATE SCHEMA Users AUTHORIZATION [MTC-Vinod\CDBUsers];

 

-- Set the default schema for a group!

ALTER USER [MTC-Vinod\CDBUsers]

WITH DEFAULT_SCHEMA = Users

Let us create a test table with a value to do some quick testing.

CREATE TABLE Users.testing(c1 int)

GO

INSERT INTO Users.testing VALUES (10)

Testing Contained Database

In our example, I have used a local user account for the demo. We can start SSMS using the "runas" command, which launches SSMS under Balmukund's user credentials. A typical command looks like this:

[Screenshot: runas command launching SSMS under Balmukund's credentials]
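Since the screenshot is not reproduced here, this is a hypothetical equivalent of that command. It assumes the default SQL Server 2012 SSMS install path and a local account named Balmukund on the MTC-Vinod machine; adjust both to your environment.

rem Launch SSMS under Balmukund's credentials (path and account are illustrative)
runas /user:MTC-Vinod\Balmukund "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"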

In the SSMS connection dialog, select the advanced properties and don't forget to change your default database to CDB1, else you are likely to get an error. Object Explorer now shows:

[Screenshot: Object Explorer showing only the CDB1 database]

When connecting with contained database user credentials, we can see ONLY this specific database. All other databases are hidden from the user.

The next step is to query our tables. In the example below you can see that the table "testing" resolved to our default schema "Users".

[Screenshot: query against the testing table resolving to the Users schema]

Final Words

The above code confirms the two concepts we wanted to learn: a) using Windows users and groups with contained databases, and b) setting a default schema for Windows groups. The second concept is not specific to contained databases; it can be used with regular databases too.

How many of you use contained databases today? Are these cool additions in SQL Server 2012? Do let me know; I would love to hear your feedback.



 

TechEd India 2013: SQL Server and Architecture March 13th, 2013

Vinod Kumar

[Image: TechEd India 2013 logo]

This is the time of the year when I get heads-down building cool demos for TechEd, and the exploration never stops. This year TechEd India 2013 happens in two cities – Bangalore and Pune. This time next week the Bangalore event will be over, and I am sure we will be exhausted, reliving the past two days while looking ahead to Pune to do it all over again. There is always extra care, attention and preparation that goes into every TechEd, and this year is no different. I am also excited because this marks my 10th year of TechEd on stage. The memories of doing this in 2003 are still fresh, and the excitement never dies, every single year.

The Twitter hashtag to follow for this year's TechEd is #TechEdIn.

What is special about TechEd?

If you ask me, there are a number of reasons to attend TechEd in person. My top drivers for you would be:

  1. It is a place to socialize and, at the same time, feel alone. If you attend the tech sessions, you might feel there is so much more to learn, and this sometimes alienates people in a good way. Get motivated and get a glimpse of the future of technology.
  2. Two days of pure learning and no work is something we rarely get :). I know it is not a nice thing to say, but I couldn't resist mentioning it. Let me reiterate: it is not vacation time but learning time.
  3. It is a hub for networking. This is one of the key and most important reasons to attend. Meet peers, speakers from Redmond, local speakers and many more.
  4. With close to 6+ tracks and HOLs, the choice of sessions is huge. It is always fun!!!
  5. It is also a place to make business contacts, find mentors and make new friends – all of which will help shape your future.

SQL Server – Security Session

For the past two years I have talked about security in SQL Server. Each year I try to bring a unique perspective on what I find useful from a security point of view. This year it will be the new additions to security in SQL Server 2012. You will get to see:

  1. What has changed in Security
  2. How to use User Defined Roles and Default schema
  3. How to keep DBA out of database (access)
  4. Enhancements to Auditing
  5. Audit filtering and Audit resilience
  6. Contained Databases basics
  7. Using Logon Triggers with Contained Databases
  8. SQL Server 2012 – Service account changes
  9. and if time permits, a few more demos are available …

So don't miss the action. I have in fact given away what to expect from my one-hour session: lots of demos and lots of detail on how to use SQL Server security effectively.

SQL Server – Transaction Logs Session

Going back to the basics is important, so at every TechEd there is one topic I pick that goes back to basics. One question I get from customers in almost every other session is about how transaction logs work. I think it will be great to have a session around that.

  1. What are transaction logs? Why do we need them?
  2. Understanding what VLFs are
  3. Creation of logs and the basics
  4. Cyclic logs and how they function
  5. What is written to the logs and how it is written
  6. What happens to the log when a page split occurs
  7. How minimal logging works and what we should know
  8. Finally, if time permits, how recovery works …

This is just a teaser of what is in store. We have many more interesting concepts to discuss and learn during the session – so don't miss out.

Architecture Track – Evaluating Availability Options

This is going to be an interesting topic. As part of MTC I discuss scale-out with almost every single customer. Most of the time the discussion boils down to when to use what. We have solutions ranging from log shipping, mirroring, clustering and replication to, with SQL Server 2012, AlwaysOn. So the session covers:

  1. Why scale out? Can't I just scale up?
  2. What changes do I need to do in my application?
  3. When to use what technology – Pre SQL 2012
  4. With SQL 2012 – What is AlwaysOn
  5. What are the 12+ different ways to deploy AlwaysOn for my environment?
  6. How do I decide and what will be the RPO and RTO for each?

This is a pure whiteboard, diagram-based discussion. You will learn the theory and the possibilities when working with SQL Server 2012.

Conclusion

As I said, if you are in Bangalore or Pune, don't forget to attend my sessions. Most importantly, don't forget to pass me a "Hi" after the session. I will be walking around, and it is always great to make a new friend.

PS: If you are in Bangalore or Pune, the user groups have announced some amazing discounts that you don't want to miss. It is still not too late; check out the Bangalore .NET and SQL user group announcements.



 

SQL Collations mismatch after Upgrade / Consolidation February 14th, 2013

Vinod Kumar

Recently I was contacted by one of our customers with an interesting query. They had upgraded from a previous version of SQL Server, and the DBA didn't take the necessary precautions around collation on the new server. Now they need to deal with a mismatch between the SQL_Latin1_General_CP1_CI_AS and Latin1_General_CI_AS collations on their new instance versus their old databases. This is a rare but instructive scenario, and I had to help them.

Most DBAs neglect to select an appropriate collation at installation time. This is a typical recipe for disaster, and I thought I would write about it in this blog post for people to use as a reference.

What can go wrong here?

  1. Usage of temp tables returns wrong results or, worse, breaks outright (a minimal repro is sketched after this list).
  2. In a consolidation scenario, hundreds of databases now live on one instance, and application changes need to be enforced.
  3. If the collation was wrongly selected as case-sensitive, stored procedures and the application can break as soon as the switch happens.
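To make point 1 concrete, here is a minimal, hypothetical repro of the temp-table problem; the table and column names are purely illustrative. The temp table column silently picks up tempdb's (server default) collation while the user table carries the database collation, so the join fails with a "Cannot resolve the collation conflict" error when the two differ.

-- Hypothetical repro: run in a user database whose collation differs from the server default
CREATE TABLE dbo.customers (name NVARCHAR(100));   -- takes the database collation
CREATE TABLE #lookup (name NVARCHAR(100));          -- takes tempdb's (server) collation
GO

SELECT c.name
FROM dbo.customers AS c
JOIN #lookup AS l ON c.name = l.name;               -- collation conflict if the two differ
GO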

In close to a decade of working with SQL Server, I have never seen application developers use a specific collation in their code. In rare scenarios with a lot of localized content, I have seen people use the COLLATE keyword in their code, but that, as I said, is a one-in-a-million sort of scenario. I would love to hear if you have used such code in your application.

Do look at the bigger checklist for SQL Server 2012 if you are doing the same. The same list can be useful when moving from other versions too.

What went wrong? Why?

Most DBAs don't pay attention to collation until it is too late. The crux of the issue is that SQL Server's metadata collation follows the database collation, unlike most other database systems – meaning the metadata or system tables are collated based on the database collation. So in a database with a case-sensitive collation, the metadata is case-sensitive (object identifiers, names of tables, sorting of object names and so on).

In SQL Server 2012, we introduced the concept of a fixed catalog collation with partially contained database (CDB). So within the context of a partial CDB, the metadata collation is always fixed to a specific collation irrespective of the database collation or how the database was created on an instance with different defaults.

If you haven't read about contained databases, I highly recommend doing so, as they have multiple advantages:

  1. Have case-sensitive (or other) collation behavior for user data without having to endure a case-sensitive catalog.
  2. Consolidate databases from servers with differing collations without having to rework the code to add explicit collation comparisons.
  3. And of course, move a database from one server to another and keep the tempdb collation behavior consistent.

Know your collation

In SQL Server 2012, you can use the catalog_default keyword when defining columns for temporary tables. This prevents the collation conflict issue when joining temporary tables with user table data or catalog view data. In older versions, you can use the database_default keyword.

See the example below (run it in a plain database and in a partially contained database to see the difference in behavior):

DECLARE @myVar SQL_VARIANT = '' COLLATE catalog_default;

SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

SET @myVar = '' COLLATE database_default;

SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

Basically, in a partially contained database catalog_default returns the fixed metadata collation, while in a plain database it returns the database collation (the COLLATE property of the database). The database_default keyword always returns the database collation. If you use these keywords, you don't need to specify the collation name explicitly in your code for temporary table column definitions.

Using COLLATE database_default for column definitions in temporary tables avoids the mismatch issue in older versions. Alternatively, you can use table variables, where a column definition defaults to the database collation if the COLLATE clause is absent.
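For example, a minimal sketch of both options (the names are illustrative):

-- Temp table: the column collation follows the current database rather than tempdb
CREATE TABLE #lookup2 (name NVARCHAR(100) COLLATE database_default);

-- Table variable: the column defaults to the database collation when no COLLATE clause is given
DECLARE @lookup TABLE (name NVARCHAR(100));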

Final words

Though we discussed workarounds here, this is postmortem work. I highly recommend that if you are a DBA, you ask what the collation should be, what the application design requirements are, and whether contained databases are being used. Worst case, if you have made the mistake of accepting the defaults, it is never too late to reinstall SQL Server afresh to avoid all this heartburn :). Please feel free to share your stories with me if anything interesting like this has happened in your organization.



 

SQL Server 2012: Filestream Multiple Containers February 7th, 2013

Vinod Kumar

The Filestream feature was introduced in SQL Server 2008. SQL Server 2012 introduces an improvement where we can now add more than one container to the same filegroup (containing filestream). Now that more than one operating system directory can be part of the same filegroup for filestream use, we can improve the storage (physical database) design of a database without having to partition a table creatively. This can improve IO, which can be spread across multiple directories / volumes / physical devices.

First let us create our database with Filestream. Below we are creating a single Filestream filegroup at location c:\data\FS1.

[Screenshot: CREATE DATABASE statement with a filestream filegroup pointing to c:\data\FS1]
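Since the screenshot is not reproduced here, this is roughly what such a statement could look like. The database name, logical file names and paths are illustrative; FILESTREAM must already be enabled at the instance level, the c:\data folder must exist, and c:\data\FS1 must not.

CREATE DATABASE FSDemo
ON PRIMARY
    (NAME = FSDemo_data, FILENAME = 'c:\data\FSDemo_data.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = FSContainer1, FILENAME = 'c:\data\FS1')        -- first filestream container
LOG ON
    (NAME = FSDemo_log, FILENAME = 'c:\data\FSDemo_log.ldf');
GO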

Here is the great enhancement. Let us add a second location to the same filegroup.

[Screenshot: ALTER DATABASE adding a second filestream container to the same filegroup]
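Again a sketch, reusing the illustrative names from the previous block; c:\data\FS2 must not already exist.

-- Add a second filestream container (directory) to the same filegroup
ALTER DATABASE FSDemo
ADD FILE (NAME = FSContainer2, FILENAME = 'c:\data\FS2')
TO FILEGROUP FSGroup;
GO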

We will next create a table with filestream data. This is enabled via a VARBINARY(MAX) column with the FILESTREAM attribute.

[Screenshot: CREATE TABLE statement with a FILESTREAM column]
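A minimal sketch of such a table (names are illustrative); note that FILESTREAM tables also need a ROWGUIDCOL column with a unique constraint.

CREATE TABLE dbo.FSDocs
(
    id  UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    doc VARBINARY(MAX) FILESTREAM NULL       -- the filestream data itself
);
GO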

The final step is to add rows to the table. We will insert some 10k rows using a WHILE loop.

[Screenshot: WHILE loop inserting 10,000 rows into the filestream table]
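Something along these lines, again using the illustrative table from above:

-- Insert 10,000 small documents; each row's data lands in one of the filestream containers
DECLARE @i INT = 1;
WHILE @i <= 10000
BEGIN
    INSERT INTO dbo.FSDocs (doc)
    VALUES (CAST(REPLICATE('X', 100) AS VARBINARY(MAX)));
    SET @i += 1;
END
GO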

The filestream file data distribution algorithm is mainly round robin, but with a proportional-fill element in an effort to parallelize the IO activity. What this means is that when a new, empty container is added, it may be favored more for INSERTs (while still avoiding all INSERTs being targeted at the same container). Once the new container catches up with the older ones in size, INSERTs are again distributed evenly across containers.

Fine print with Filestream

  1. You can now run DBCC SHRINKFILE with the EMPTYFILE option against the file representing a container to migrate all the data from that file to the other files (containers) within the same filegroup (see the sketch after this list).
  2. sp_filestream_force_garbage_collection: in SQL Server 2008 and SQL Server 2008 R2 you could not force filestream garbage collection. A new system stored procedure called "sp_filestream_force_garbage_collection" has been added in SQL Server 2012, which can force garbage collection to run from a user connection.
  3. This is especially useful in scenarios where SHRINKFILE was run on a filestream container but not all files have been garbage-collected yet, so you cannot remove it. In such cases, you can run filestream GC manually to clean it up and then remove the logical file representing this directory from the database.
  4. The sys.database_files catalog view has a column named size, which in the case of FILESTREAM files indicates the size of the folder.
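A sketch of points 1–3, using the illustrative names from the earlier blocks:

USE FSDemo
GO
-- Move filestream data out of the first container so it can be removed
DBCC SHRINKFILE (FSContainer1, EMPTYFILE);
GO
-- Force filestream garbage collection from a user connection (SQL Server 2012 onwards)
EXEC sp_filestream_force_garbage_collection @dbname = N'FSDemo';
GO
-- Once garbage collection has cleaned the container, the logical file can be dropped
ALTER DATABASE FSDemo REMOVE FILE FSContainer1;
GO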
