
Contained Database and Windows User Group May 20th, 2013

Vinod Kumar

We discussed at length what contained databases are, how to configure them, and the steps involved in our previous post (Contained Databases inside SQL Server 2012). In this post let me show two concepts:

  1. Contained Database with Windows Users – most of the time I have seen people demonstrate contained databases with SQL Authentication. In reality they work with Windows users too.
  2. Set a default schema for a group – This is an enhancement inside SQL Server 2012 where we can set a default schema for Windows user groups too.

Create Windows Group

Let us first create the Windows group and add two users to it. In our example, I am adding Balmukund and Pinal to this group. This is done under Computer Management –> Local Users and Groups –> Groups.

[Image: Computer Management showing the CDBUsers group with Balmukund and Pinal as members]

Create Contained Database

The steps are the same as described in the blog post Contained Databases inside SQL Server 2012.

SP_CONFIGURE 'show advanced options', 1
GO
RECONFIGURE
GO
SP_CONFIGURE 'contained database authentication', 1
GO
RECONFIGURE
GO

CREATE DATABASE cdb1 CONTAINMENT = PARTIAL
GO

The above statements create the database CDB1.

Create Users and Set Default Schema

The first step is to create the user and then give it permission to access our database. Change the names appropriately to suit your needs.

USE CDB1
GO

-- Create a database user for the Windows group
CREATE USER [MTC-Vinod\CDBUsers]
GO

-- Grant permissions to the group
ALTER ROLE DB_DATAREADER ADD MEMBER [MTC-Vinod\CDBUsers];
ALTER ROLE DB_DATAWRITER ADD MEMBER [MTC-Vinod\CDBUsers];

The next step is to create the schema and make the group its owner:

CREATE SCHEMA Users AUTHORIZATION [MTC-Vinod\CDBUsers];
GO

-- Set default schema for a group!
ALTER USER [MTC-Vinod\CDBUsers]
WITH DEFAULT_SCHEMA = Users;

Let us create a test table with a sample value for quick testing.

CREATE TABLE Users.testing(c1 int)
GO
INSERT INTO Users.testing VALUES (10)
GO

Testing Contained Database

In our example, I have used a local user account for the demo. We can start SSMS using the “runas” command, which launches SSMS under Balmukund’s credentials. The typical command looks like this:

[Image: runas command used to launch SSMS as Balmukund]
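Since the screenshot is not available, here is a hedged sketch of what such a command looks like (the MTC-Vinod machine name follows the account naming used elsewhere in this post; the path to Ssms.exe is an assumption and varies by installation):

runas /user:MTC-Vinod\Balmukund "C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Ssms.exe"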

In the SSMS connection dialog, open the advanced properties and don’t forget to change your default database to CDB1, else you are likely to get a login error. Now Object Explorer shows:

[Image: Object Explorer showing only the CDB1 database]

When you connect with contained database user credentials, you can see ONLY this specific database. All other databases are hidden from the user.

The next step is to query our table. In the example below you can see the table “testing” is resolved through our default schema “Users”.

[Image: query on the unqualified table name resolving to Users.testing]
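A minimal sketch of that test (assuming we are connected as a member of the CDBUsers group):

-- The unqualified table name resolves via the group's default schema (Users)
SELECT c1 FROM testing;
-- Returns 10, the value we inserted into Users.testing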

Final Words

The above code confirms the two concepts we wanted to learn: a) using Windows users and groups with contained databases, and b) setting a default schema for Windows groups. The second concept is not specific to contained databases and can be used for regular databases too.

How many of you use contained databases today? Are these cool additions to SQL Server 2012? Do let me know – I would love to hear your feedback.



 

TechEd India 2013: SQL Server and Architecture March 13th, 2013

Vinod Kumar


This is the time of the year when I get heads-down building cool demos for TechEd, and the exploration never stops. This year TechEd India 2013 happens in two cities – Bangalore and Pune. By this time next week the Bangalore event will be over, and I am sure we will be exhausted, reliving the past two days while looking at Pune to do the same. There is always extra care, attention and preparation that goes into every TechEd, and this year is no different. I am also excited because this marks my 10th year of TechEd on stage. The memory of doing this in 2003 is still fresh, and the excitement never dies.

The Twitter hashtag to follow for this year’s TechEd is #TechEdIn.

What is special about TechEd?

If you ask me, there are a number of reasons to be at TechEd. My drivers for you would be:

  1. It is a place to socialize and at the same time feel alone. If you attend the tech sessions, you might feel there is so much more to learn, and this sometimes alienates people in a good way. Get motivated and get to see what the future of technology is.
  2. Two days of pure learning and no work is something we will never get :). I know it is not a nice thing to say, but I couldn’t resist mentioning it. Let me reiterate: it is not vacation time but learning time.
  3. It is a hub for networking. This is one of the key and most important reasons to attend. Meet peers, speakers from Redmond, local speakers and many more.
  4. With close to 6+ tracks and HOLs, the choice of sessions is huge. It is always fun!!!
  5. It is also a place to make business contacts, find mentors and make new friends – all of which will help you shape your future.

SQL Server – Security Session

For the past two years I have talked about security in SQL Server. Each year I try to bring a unique perspective on what I find useful from a security point of view. This year it will be the new security additions in SQL Server 2012. You will get to see:

  1. What has changed in Security
  2. How to use User Defined Roles and Default schema
  3. How to keep DBA out of database (access)
  4. Enhancements to Auditing
  5. Audit filtering and Audit resilience
  6. Contained Databases basics
  7. Using Logon Triggers with Contained Databases
  8. SQL Server 2012 – Service account changes
  9. and, if time permits, a few more demos …

So don’t miss the action. I have in fact given away what to expect from my 1-hour session: lots of demos and lots of detail on how to use SQL Server security effectively.

SQL Server – Transaction Logs Session

Going back to the basics is important, so at every TechEd there is one topic I talk about that is about going back to basics. One question I get from customers almost every other session is about how transaction logs function. I think it will be great to have a session around that.

  1. What are transaction logs? Why do we need them?
  2. Understanding what VLFs are
  3. Creation of logs and the basics
  4. Cyclic logs and how they function
  5. What is written to the logs and how it is written
  6. What happens in the log when a page split occurs
  7. How minimal logging works and what we should know
  8. Finally, if time permits, how recovery works …

This is just a teaser of what is in store for the session. We have many more interesting concepts to discuss and learn during the session – so don’t miss out.

Architecture Track – Evaluating Availability Options

This is going to be an interesting topic. As part of MTC I discuss scale-out with almost every single customer. Most of the time the discussion boils down to when to use what. We have solutions from log shipping, mirroring, clustering and replication, and now with SQL Server 2012 – AlwaysOn. So the session is around:

  1. Why scale out? Can’t I just scale up?
  2. What changes do I need to do in my application?
  3. When to use what technology – Pre SQL 2012
  4. With SQL 2012 – What is AlwaysOn
  5. What are the 12+ different ways to deploy AlwaysOn for my environment?
  6. How do I decide and what will be the RPO and RTO for each?

This is a pure whiteboarding, diagram-based discussion. You will learn the theory and the possibilities when working with SQL Server 2012.

Conclusion

As I said, if you are in Bangalore or Pune – don’t forget to attend my sessions. Most importantly, don’t forget to pass me a “Hi” after the session. I will be walking around, and it is always great to make a new friend.

PS: If you are in Bangalore or Pune – the user groups have given some amazing discounts that you don’t want to miss. It is still not too late; check out the Bangalore .NET and SQL user group announcements.



 

SQL Collations mismatch after Upgrade / Consolidation February 14th, 2013

Vinod Kumar

Recently I was contacted by a customer with an interesting query. They upgraded from a previous version of SQL Server, and the DBA didn’t take the necessary precautions around collation on the new server. Now they need to deal with a mismatch between the SQL_Latin1_General_CP1_CI_AS and Latin1_General_CI_AS collations on their new database instance versus their old databases. This is a rare but classic scenario, and I had to help them.

Most DBAs neglect to select an appropriate collation at the point of installation. This is a typical recipe for disaster, and I thought I would write about it in this blog post for people to use as a reference.

What can go wrong here?

  1. Usage of temp tables produces wrong results or, worse, breaks outright (see the sketch after this list).
  2. In a consolidation scenario, hundreds of databases now live in one instance, and application changes need to be enforced.
  3. If the collation was wrongly selected as case-sensitive, stored procedures and the application can break as soon as the switch happens.
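To make the temp-table problem concrete, here is a hedged repro sketch (table and column names are my own; it assumes the instance default – and hence tempdb – is SQL_Latin1_General_CP1_CI_AS while the restored database is Latin1_General_CI_AS):

-- User table: columns pick up the database collation (Latin1_General_CI_AS)
CREATE TABLE dbo.Customers (Name varchar(50));

-- Temp table: columns pick up the tempdb collation (SQL_Latin1_General_CP1_CI_AS)
CREATE TABLE #lookup (Name varchar(50));

-- Fails with Msg 468: "Cannot resolve the collation conflict ... in the equal to operation"
SELECT c.Name
FROM dbo.Customers AS c
JOIN #lookup AS l ON c.Name = l.Name;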

In my decade-plus of working with SQL Server, I have hardly ever seen application developers use a specific collation in their code. In rare scenarios with multiple localized contents, I have seen people use the COLLATE keyword in their code. That, as I said, is a one-in-a-million scenario. I would love to hear if you have used such code in your application.

Do look at the bigger checklist for SQL Server 2012 if you are doing the same. The list can be useful when moving from other versions too.

What went wrong? Why?

Most DBAs don’t pay attention to collation until it is too late. The crux of the issue is that, unlike in every other database product, the SQL Server metadata collation follows the database collation – meaning the metadata or system tables are collated based on the database collation. So in a database with a case-sensitive collation, the metadata is case-sensitive (object identifiers, names of tables, sorting of object names and so on).

In SQL Server 2012, we introduced the concept of a fixed catalog collation with partially contained databases (CDB). So within the context of a partial CDB, the metadata collation is always fixed to a specific collation, irrespective of the database collation or how the database was created on an instance with different defaults.

If you haven’t read about Contained DB, I highly recommend reading the same as it has multiple advantages like:

  1. Have case-sensitive (or other) collation behavior for user data without having to endure a case-sensitive catalog.
  2. Consolidate databases from servers with differing collations without having to rework the code to add explicit collation comparisons.
  3. And of course, move a database from one server to another and keep the tempdb collation behavior the same.

Know your collation

In SQL Server 2012, you can use the catalog_default keyword when defining columns for temporary tables. This prevents the collation conflict issue when joining temporary tables with user table data or catalog view data. In older versions, you can use the database_default keyword.

See the example below (run it in a plain database and in a partially contained database to see the difference in behavior):

-- COLLATE needs a character expression to act on, so we collate a sample literal
DECLARE @myVar SQL_VARIANT = 'test' COLLATE catalog_default;
SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

SET @myVar = 'test' COLLATE database_default;
SELECT SQL_VARIANT_PROPERTY(@myVar, 'collation');

Basically, in a partially contained database catalog_default returns the fixed metadata collation, while in a plain database it returns the database collation (the COLLATE property of the database). The database_default keyword always returns the database collation. If you use these keywords, you don’t need to specify the collation name explicitly in your code for temporary table column definitions.

Using COLLATE database_default for column definitions in temporary tables avoids the mismatch issue on older versions. Alternatively, you can use table variables, where a column definition defaults to the database collation if the COLLATE clause is absent.
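A hedged sketch of that workaround, reusing the hypothetical tables from the earlier repro:

-- Declare the temp table column with database_default so it always matches
-- the current database collation, regardless of tempdb's collation
CREATE TABLE #lookup (Name varchar(50) COLLATE database_default);

-- The join now compares both sides in the database collation and succeeds
SELECT c.Name
FROM dbo.Customers AS c
JOIN #lookup AS l ON c.Name = l.Name;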

Final words

Though we discussed workarounds here, this is postmortem work. I highly recommend that if you are a DBA, you ask what the collation should be. Ask what the application design requirements are. Ask if contained databases are being used. And if you have made the mistake of accepting the defaults, it is never too late to reinstall SQL Server afresh – a worst-case remedy, but it avoids all this heartburn :). Please feel free to share your stories with me if anything interesting like this has happened in your organization.



 

SQL Server 2012: Filestream Multiple Containers February 7th, 2013

Vinod Kumar

The Filestream feature was introduced in SQL Server 2008. SQL Server 2012 introduces an improvement where we can now add more than one container to the same filegroup (containing filestream). Now that more than one operating system directory can be part of the same filegroup for filestream use, we can improve the storage (physical database) design of a database without having to partition a table creatively. This can improve IO, which can be spread across multiple directories / volumes / physical devices.

First let us create our database with filestream. Below we create a single filestream filegroup with a container at location c:\data\FS1.

[Image: CREATE DATABASE statement with a single FILESTREAM container]
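The original screenshot is lost, so here is a hedged sketch of such a statement (database, file and log names are illustrative assumptions):

-- Create a database whose FILESTREAM filegroup has one container at c:\data\FS1
CREATE DATABASE FSDemo
ON PRIMARY
    (NAME = FSDemo_data, FILENAME = 'c:\data\FSDemo.mdf'),
FILEGROUP FSGroup CONTAINS FILESTREAM
    (NAME = FSContainer1, FILENAME = 'c:\data\FS1')
LOG ON
    (NAME = FSDemo_log, FILENAME = 'c:\data\FSDemo.ldf');
GO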

Here is the great enhancement: let us add a second location to the same filegroup.

[Image: ALTER DATABASE statement adding a second FILESTREAM container to the same filegroup]
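A hedged sketch of the enhancement, continuing the names from the previous block (the second path is illustrative):

-- SQL Server 2012: add a second container to the SAME filestream filegroup
ALTER DATABASE FSDemo
ADD FILE (NAME = FSContainer2, FILENAME = 'c:\data\FS2')
TO FILEGROUP FSGroup;
GO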

We will next create a table with filestream data. This is enabled via a VARBINARY(MAX) column with the FILESTREAM attribute.

[Image: CREATE TABLE statement with a FILESTREAM column]
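A hedged sketch of such a table (names are illustrative; a FILESTREAM table also needs a UNIQUEIDENTIFIER ROWGUIDCOL column):

CREATE TABLE dbo.FSTable
(
    Id   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWID(),
    Blob VARBINARY(MAX) FILESTREAM NULL
);
GO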

The final step is to add rows to the table. We will insert some 10k rows using a WHILE loop.

[Image: WHILE loop inserting rows into the filestream table]
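A hedged sketch of that loop, continuing the illustrative table above:

-- Insert ~10k rows so the data gets distributed across both containers
DECLARE @i INT = 0;
WHILE @i < 10000
BEGIN
    INSERT INTO dbo.FSTable (Blob)
    VALUES (CAST(REPLICATE('x', 1024) AS VARBINARY(MAX)));
    SET @i += 1;
END
GO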

The filestream data distribution algorithm is mainly round robin, but with a proportional-fill element, in an effort to parallelize the IO activity. What this means is that when a new empty container is added, it may be favored for INSERTs (while still avoiding all INSERTs being targeted at the same container). Once the new container catches up with the old ones in terms of size, INSERTs are again spread fairly across containers.

Fine print with Filestream

  1. You can now run DBCC SHRINKFILE with the EMPTYFILE option against a file representing a container to migrate all the data from that file to the other files (containers) within the same filegroup (see the sketch after this list).
  2. sp_filestream_force_garbage_collection: in SQL Server 2008 and SQL Server 2008 R2 you could not force garbage collection of filestream data. A new system stored procedure called "sp_filestream_force_garbage_collection" has been added in SQL Server 2012, which can force the garbage collection to run from a user connection.
  3. This is especially useful in scenarios where SHRINKFILE was run on a filestream container but not all files have been garbage collected, so you cannot remove it. In such cases, you can run the filestream GC manually to clean it up and then remove the logical file representing this directory from the database.
  4. The sys.database_files catalog view has a column named size which, in the case of FILESTREAM files, indicates the size of the folder.
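Putting items 1–3 together, a hedged sketch (names follow the illustrative FSDemo database used earlier):

-- Empty the first container, force filestream garbage collection,
-- then drop the now-empty logical file
DBCC SHRINKFILE (FSContainer1, EMPTYFILE);
EXEC sp_filestream_force_garbage_collection @dbname = N'FSDemo';
ALTER DATABASE FSDemo REMOVE FILE FSContainer1;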



 

MS Access functions inside SQL Server January 23rd, 2013

Vinod Kumar

Many a time an enterprise application has its roots in a smaller application developed using small-scale backend database services. Microsoft Access is a premier Rapid Application Development tool that builds a rich client application on the JET engine. When such an application needs to be migrated to the Microsoft SQL Server engine or SQL Azure, porting the code is a challenge. Since the MS Access database provides certain expression calculations and built-in functions, this can become a non-trivial exercise for an MS Access-based application. To assist with this, SQL Server 2012 has introduced a few built-in functions which are syntactically and semantically the same as in MS Access.

CHOOSE

Returns the item at the specified index from a list of values.

Usage: CHOOSE ( index, val_1, val_2 [, val_n ] )
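A quick illustration (sample values are mine):

SELECT CHOOSE(3, 'Spring', 'Summer', 'Autumn', 'Winter');  -- Autumn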

Post from SQLAuthority.

DATEFROMPARTS

DATETIME2FROMPARTS

DATETIMEFROMPARTS

DATETIMEOFFSETFROMPARTS

TIMEFROMPARTS

SMALLDATETIMEFROMPARTS

Returns a specific date or datetime data type value for the specified year, month, and day.

Usage: DATEFROMPARTS ( year, month, day )

  1. DATETIME2FROMPARTS ( year, month, day, hour, minute, seconds, fractions, precision )
  2. TIMEFROMPARTS ( hour, minute, seconds, fractions, precision )
  3. DATETIMEFROMPARTS ( year, month, day, hour, minute, seconds, milliseconds )
  4. DATETIMEOFFSETFROMPARTS ( year, month, day, hour, minute, seconds, fractions, hour_offset, minute_offset, precision )
  5. SMALLDATETIMEFROMPARTS ( year, month, day, hour, minute )
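A quick illustration of two of these (sample values are mine):

SELECT DATEFROMPARTS(2013, 1, 23);        -- 2013-01-23
SELECT TIMEFROMPARTS(14, 30, 0, 500, 3);  -- 14:30:00.500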

Post from SQLAuthority.

EOMONTH

Returns the last day of the month that contains the specified date, with an optional offset. The offset is in months (not days). This is a good function for many Line of Business and reporting applications; it makes it quite easy to find the last day of the month 3 months ago or 3 months from now.

Usage: EOMONTH ( start_date [, month_to_add ] )
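A quick illustration (sample dates are mine):

SELECT EOMONTH('2013-02-07');      -- 2013-02-28
SELECT EOMONTH('2013-02-07', 3);   -- 2013-05-31
SELECT EOMONTH('2013-02-07', -3);  -- 2012-11-30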

Post from SQLAuthority.

FORMAT

Returns a value formatted with the specified format and optional culture. The recommended use of this function is locale (culture) aware formatting of date/time and number values. For general conversions, CAST and CONVERT are still the preferred built-in functions.

Usage: FORMAT ( value, format [, culture ] )

This function relies on the CLR being installed. If the CLR is not installed (or cannot be loaded in SQL Server 2012), the call to FORMAT will fail. Execution of this function also loads the CLR within SQL Server if it is not already loaded.
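A quick illustration (sample values and cultures are mine):

SELECT FORMAT(CAST('2013-01-23' AS date), 'D', 'en-US');  -- Wednesday, January 23, 2013
SELECT FORMAT(1234567.89, 'N2', 'de-DE');                 -- 1.234.567,89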

Details from SQLAuthority.

IIF

Returns one of two values, depending on whether the Boolean expression evaluates to true or false. IIF is a shorthand way of writing a CASE expression and is internally converted to one. All the limitations of the CASE expression also apply to IIF.

Usage: IIF ( boolean_expression, true_value, false_value )
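A quick illustration (sample values are mine):

SELECT IIF(45 > 30, 'bigger', 'smaller');  -- bigger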

Post from SQLAuthority.

TRY_CONVERT

Returns a value cast to the specified data type if the cast succeeds; otherwise, returns NULL. The function can still return an error if casting to the destination data type is explicitly not allowed. For example, if you try using this function to cast an INT value to XML, it will return an error (and not just NULL).

Usage: TRY_CONVERT ( data_type [ ( length ) ], expression [, style ] )
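A quick illustration (sample values are mine):

SELECT TRY_CONVERT(int, '123');  -- 123
SELECT TRY_CONVERT(int, 'abc');  -- NULL, instead of a conversion error
-- SELECT TRY_CONVERT(xml, 1);   -- errors: int to xml is explicitly disallowed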

Post from SQLAuthority.

LOG/LOG10

LOG returns the natural logarithm of the specified float expression, and optionally takes base as input. LOG10 returns the base-10 logarithm of the specified float expression.

Usage: LOG ( float_expression [, base ] )

LOG10 ( float_expression )
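A quick illustration (sample values are mine):

SELECT LOG(8, 2);    -- 3, using the new optional base argument
SELECT LOG10(1000);  -- 3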

CONCAT

Returns a string that is the result of concatenating two or more string values. This is simply a string concatenation function. The return type depends on the input types; typically, the return value is of the highest-precedence data type. For example, if multiple varchar and nvarchar values of fewer than 4000 bytes are passed, the final output is nvarchar of up to 4000 bytes in size (and may be truncated). On the other hand, if at least one value is nvarchar(max) or varchar(max), the final output is of nvarchar(max) size. Note that your arguments can be of non-string data types as well, as long as they can be implicitly converted to a string data type.

Usage: CONCAT ( string_value1, string_value2 [, string_valueN ] )
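A quick illustration (sample values are mine):

SELECT CONCAT('Total: ', 42, ' items');  -- Total: 42 items
-- The integer argument is implicitly converted to a string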

Details from SQLAuthority.

PARSE/TRY_PARSE

Returns the result of a string expression translated to the requested data type. The PARSE function returns an error if the cast/translation fails, while TRY_PARSE returns NULL if the cast fails. This can be a useful function for locale (culture) aware expression evaluation.

Usage: TRY_PARSE ( string_value AS data_type [ USING culture ] )

PARSE ( string_value AS data_type [ USING culture ] )
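A quick illustration (sample values and culture are mine):

SELECT TRY_PARSE('13/02/2013' AS date USING 'en-GB');  -- 2013-02-13 (day-first culture)
SELECT TRY_PARSE('abc' AS int);                        -- NULL instead of an error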

Details from SQLAuthority.
