Archive for January, 2011

Evangelist to MTC–Technology Specialist January 17th, 2011

Vinod Kumar

So here it is. As mentioned earlier at the start of the year about my blog's new home, at work I get a new home too – MTC (Microsoft Technology Center). Today is my first day in this new home, and I am excited as always. Just as Evangelism became my second home, this new home is not unknown territory either. I have been assisting with and doing tons of interesting MTC work for the last couple of years, and now it is official !!!

More about MTC and what we do there is on the website, and I will refrain from rehashing the words out here. I will still be based out of the Bangalore location, so not much of a change that way – MTC is at just this one location for the whole of India.

Why now? Let me say one thing candidly – “Once an Evangelist, always an Evangelist”. So there is nothing to complain about, as I still do Evangelism and I just love this job – in fact, it remains my dream job. I have done this for ~4.5 years inside MS and more than 3 years outside of MS (unofficially) – so I have nothing to complain about. I just enjoy being in front of people, and these feelings don't change overnight. The best thing about being an Evangelist at MS is that you surely can stand up and stand tall if you do things right.

So don't think I am out of sight; I will be out there every once in a while, sharing what I have learnt and what I am learning day in and day out. This bond will stay forever. MTC is a place that will test me and throw interesting challenges at me, which I feel will be a very good stab for me to take. Solving really tough customer problems is going to push me, and I am looking forward to those challenges. For example, I remember working with customers here at MTC with 8+ TB of current data – so you can imagine the challenges. BTW, this is also Evangelism, if you read my Part I.

I will be at TechEd India 2011 (March 23rd-25th) on the SQL Server tracks, so don't forget to tune in. The website is up, and if you are around, drop in to say a “hi”.

It's important to maintain a balance between going where you want to go and being happy as you go there. Fortunately for me, this was made really easy, as a lot of the things I wish to do can still happen no matter where I am. Readers, remind yourselves of this little tip too – it will surely show you the doors to where you land and how you can keep yourself happy.

What am I going to do?

Well, it is a tough thing to answer, and the list outlined here is not exhaustive. Predominantly, I will meet lots and lots of customers and solve their problems. I plan on activities which involve:

  1. Performance Tuning exercises for interesting, large and tough customers
  2. Architecture Design Sessions
  3. Strategy Briefings for decision makers and CxO audiences

Though my focus and experience around SQL Server will be used heavily for most of these activities, I do have plans to expand my expertise wider – to the Application Platform, SharePoint, Office products and the move to the Cloud with Azure. Yes, you heard it right – who can run away from this? It is the buzz of the industry, and I hope I can help customers make the right choices while moving to the Cloud.

What I am still committed to:

When new starts happen, a reset of your activities also happens. But one thing I am personally committed to is keeping these lovely friends of my community connected. So my appearances at the Bangalore .NET Usergroup and Bangalore SQL Usergroup will be as regular as in the past. And I will continue to do the same with other UGs whenever I get to your part of the world. So don't forget – if you wish for a guest appearance, I will surely try to make it happen.

My Facebook, Twitter and blog work don't stop, and these will still carry parts of what I am – nothing changes. Each time I go through these transformations, I make changes to my style of work, and these are conscious decisions. Until now I have refrained from sharing them, but there are exceptions, and I thought I would share a portion of my thoughts here in this blog post.

Mindset changes:

Special Thanks: Last year I did a complete make-over of my home, and I am super excited about the final output. Yes, it did go overboard on the estimated financials, but you learn certain things in life when you travel the path all by yourself. And for the herculean task it was, I need to attribute its success to my dad and my wife. They were a great support and help as I made so many changes all over the place.

So here are things I don't want to categorize as new-year resolutions, but which I want to do consciously henceforth – period.

Personally: Enjoy the little pleasures of life – food, sunsets, watering my plants, reading a good book, listening to favorite music, long drives with family (did one to Chennai recently) and many more. You could probably make a big list of these right now – things you enjoy that you could find every day, even as you drive to the office. I want to keep these simple, enjoyable and memorable. Time can go by extremely quickly; before you know it, your life has passed you by. So instead of dwelling on the past or thinking about the future, practice being in the here and now – be present, it's your present.

Officially: Zero inbox and a clean desk – This might take a little while to do at first, but once you've emptied your inbox and cleared off your desk, it doesn't take long to keep them clear from then on. I have tried to practice this for most of my work life, but no one is perfect. It's a simple, rewarding habit that I get great pleasure from. It makes one feel really in control of one's own life – especially when you see zero mails waiting for attention. I recommend you give it a try too.

There are many more, but those are for some other day. Most importantly, I wanted to share the goodness around, and thanks a ton for reading this far. Thanks again !!!


SQL Server: Policy, History and Metadata January 13th, 2011

Vinod Kumar

Policy Management is an interesting topic, and one less well known once we start drilling into the internals and the implementation. In this post, let me walk through some of the objects in msdb (and various other databases) that you might want to keep an eye on when creating, managing and listing Policy Based Management artifacts. Let us start with the Policy History part -

When a policy executes, it stores information on its execution history in the syspolicy tables in msdb. In Management Studio, under the Management folder, in the Policies folder, you can right-click a policy and choose “View History” for a policy or a group of policies. This provides information on execution date, policy name, targets, details and messages.

  • The execution date column is the date and time that the policy started execution.
  • The policy name column tells you which policy was executing.
  • The target column tells you which targets the condition of the policy was evaluated against.
  • The details column is an XML representation of the expected value versus the actual value of the condition for the target.

The target and detail information is only displayed when the policy has a violation.

  • The message column displays information on exceptions that occurred when the policy was executed.
  • If Check On Change (prevent out of compliance) rolls back the transaction to prevent the out-of-compliance change, no record is displayed.

The Policy History uses the following views to obtain this information.

  1. msdb.dbo.syspolicy_policies
  2. msdb.dbo.syspolicy_conditions
  3. msdb.dbo.syspolicy_policy_categories
  4. msdb.dbo.syspolicy_target_sets
  5. msdb.dbo.syspolicy_policy_execution_history
  6. msdb.dbo.syspolicy_policy_execution_history_details

Syspolicy Tables and Views in MSDB Database

All metadata for Policy Based Management is stored in tables in the msdb database. There are tables to store information on facets, conditions, policies, targets, subscriptions, health state, and execution history.

The syspolicy_execution_internal table has an INSTEAD OF trigger for inserts (syspolicy_execution_trigger). Within the trigger, sp_syspolicy_execute_policy is executed.

Tables

  1. syspolicy_management_facets
  2. syspolicy_facet_events
  3. syspolicy_conditions_internal
  4. syspolicy_policy_groups_internal
  5. syspolicy_policies_internal
  6. syspolicy_target_sets_internal
  7. syspolicy_target_sets_levels_internal
  8. syspolicy_policy_group_subscriptions_internal
  9. syspolicy_system_health_state_internal
  10. syspolicy_policy_execution_history_internal
  11. syspolicy_policy_execution_history_details_internal
  12. syspolicy_execution_internal

The syspolicy_policy_health_state and syspolicy_policy_execution_history views can be used to see information on policy violations. syspolicy_policy_execution_history has start_date and end_date columns for each policy execution, so you can use the DATEDIFF function to determine how long an execution took. There is also information on the target, target expression and condition results. Below are some sample queries that you can use to pull information out of the views and tables.

Views

  1. syspolicy_conditions
  2. syspolicy_policy_groups
  3. syspolicy_policies
  4. syspolicy_target_sets
  5. syspolicy_target_sets_levels
  6. syspolicy_policy_group_subscriptions
  7. syspolicy_system_health_state
  8. syspolicy_policy_execution_history
  9. syspolicy_policy_execution_history_details

Sample Queries

/* What policies exist on the server? */

SELECT p.name AS 'policy_name',
       p.policy_id,
       p.is_enabled,
       p.execution_mode,
       c.name AS 'condition_name',
       c.facet,
       p.date_created,
       p.created_by,
       p.modified_by,
       p.date_modified
FROM   msdb.dbo.syspolicy_policies p
       INNER JOIN msdb.dbo.syspolicy_conditions c
         ON p.condition_id = c.condition_id
ORDER  BY p.name

/* What conditions exist on the server? */

SELECT c.name AS 'condition_name',
       c.condition_id,
       c.facet,
       c.description,
       c.expression,
       c.date_created,
       c.created_by,
       c.modified_by
FROM   msdb.dbo.syspolicy_conditions c
ORDER  BY c.name

/* What categories and subscriptions exist on the server? */

SELECT *
FROM   msdb.dbo.syspolicy_policy_categories

SELECT *
FROM   msdb.dbo.syspolicy_policy_category_subscriptions 

/* What is the execution history of policy violations? */

SELECT h.history_id,
       pp.name                                AS 'policy_name',
       pp.policy_id,
       pp.execution_mode,
       h.result,
       h.start_date,
       h.end_date,
       Datediff(ss, h.start_date, h.end_date) AS 'exec_time_ss',
       d.target_query_expression,
       CAST(d.result_detail AS XML)           AS 'result_detail',
       d.exception,
       d.exception_message
FROM   msdb.dbo.syspolicy_policies AS pp
       INNER JOIN msdb.dbo.syspolicy_policy_execution_history AS h
         ON h.policy_id = pp.policy_id
       INNER JOIN msdb.dbo.syspolicy_policy_execution_history_details AS d
         ON d.history_id = h.history_id
ORDER  BY h.history_id

/* What exceptions have occurred? */

SELECT h.history_id,
       p.name AS 'policy_name',
       p.policy_id,
       h.start_date,
       h.end_date,
       h.result,
       h.exception,
       h.exception_message
FROM   msdb.dbo.syspolicy_policies AS p
       INNER JOIN msdb.dbo.syspolicy_policy_execution_history h
         ON p.policy_id = h.policy_id
WHERE  Datalength(h.exception) > 1
ORDER  BY h.start_date DESC,
          h.end_date DESC

Cleaning up syspolicy execution history tables

Over time, the syspolicy execution history tables can grow large, and many DBAs find it difficult to clean up the history of policies created through this mechanism. There is a stored procedure called sp_syspolicy_delete_policy_execution_history that can be used to delete records from the syspolicy_policy_execution_history_internal table, either by date or by policy ID.
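Here is a minimal sketch of such a cleanup, assuming the documented @policy_id and @oldest_date parameters; the 30-day cutoff is just an illustration.

/* Purge policy execution history older than 30 days */
DECLARE @cutoff datetime

SET @cutoff = Dateadd(day, -30, Getdate())

EXEC msdb.dbo.sp_syspolicy_delete_policy_execution_history
     @policy_id = NULL,      -- NULL = history for all policies
     @oldest_date = @cutoff  -- delete records older than this date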


Sparse Columns – Need and Limitation January 10th, 2011

Vinod Kumar

Sparse columns are ordinary columns that have been designed for optimized storage of NULL values. Sparse columns are by nature expected to hold NULL values for most of the rows in the containing table. The benefit of sparse columns is that they reduce the space requirements for NULL values; however, this benefit comes at the cost of increased processing overhead to retrieve non-NULL values.

Sparse columns address the business/technical problem that arose from the increasing need to store large amounts of semi-structured data. Such a need emerged with “Web 2.0” systems used for business collaboration, wikis, blogs, discussion forums, etc. Sparse columns can be successfully applied in examples such as the following. Consider the variety of documents, photos, videos, GPS data, emails, contacts, etc. that exists on the Internet today. While much of this content is stream-oriented, all of it carries a rich set of metadata properties, both standardized sets (e.g. Name, Size, Date) as well as custom-defined sets (e.g. LocationDescription, ViewableBy, Color, Event). Given the nature and variety of the content, these property sets are critical for organizing, categorizing and searching these rich semi-structured objects. A large portion of this data is currently stored in file systems, but it seems logical to store such content in relational databases, taking advantage of the extensive querying, transactional and manageability facilities built into these systems. What is key to observe about semi-structured data is that not all properties apply to all objects. For example, Event may only be relevant to the video you captured on a trip to Paris, while Latitude is only relevant to GPS data. Therefore, a relational system needs to provide flexibility in the way such data is organized and managed.
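To make this concrete, here is a minimal sketch of such a metadata table (the table and column names are hypothetical), where the rarely populated properties are declared SPARSE:

/* A hypothetical media-metadata table: most property columns are NULL
   for most rows, so they are declared SPARSE to save space */
CREATE TABLE dbo.MediaItems
(
    MediaItemID int           NOT NULL PRIMARY KEY,
    Name        nvarchar(260) NOT NULL,
    SizeBytes   bigint        NOT NULL,
    CreatedDate datetime      NOT NULL,
    /* Custom properties that apply to only a few item types */
    EventName   nvarchar(100) SPARSE NULL, -- mostly videos
    Latitude    float         SPARSE NULL, -- mostly GPS data
    Longitude   float         SPARSE NULL,
    Color       nvarchar(30)  SPARSE NULL  -- mostly photos
)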

There are some restrictions on which columns can be marked as sparse. The following data types cannot be marked as sparse columns -

  1. Geography
  2. Geometry
  3. Image
  4. Ntext
  5. Text
  6. Timestamp
  7. User-defined data types (UDT)

Columns that cannot be marked as sparse

  1. Computed columns (a computed column can, however, reference a sparse column)
  2. Identity
  3. RowGuid
  4. Filestream

Other restrictions:

  1. A sparse column cannot have a default value.
  2. A sparse column cannot be bound to a rule.
  3. A sparse column cannot be part of a clustered index or a unique primary key index. However, both persisted and nonpersisted computed columns that are defined on sparse columns can be part of a clustered key.
  4. A sparse column cannot be used as a partition key of a clustered index or heap. However, a sparse column can be used as the partition key of a nonclustered index.
  5. A sparse column cannot be part of a user-defined table type, which is used in table variables and table-valued parameters.

There is much more to sparse columns beyond these limitations; the details of using them will be something we discuss in a different post. I hope you keep these restrictions in mind while using them.


SQL 7.0 support ends – Jan 11th January 7th, 2011

Vinod Kumar

I am not sure if there are production environments still running on the legacy SQL Server 7.0 version, but it is worthwhile to start considering a move ahead, as extended support finally ends now: support for SQL Server 7.0 will end on January 11, 2011. Microsoft is ending support for this product as part of our Support Lifecycle policy, found at http://support.microsoft.com/lifecycle. SQL Server 7.0 will no longer receive assisted support or security updates from Microsoft after the end-of-support date. Self-Help online support will continue to be available for a minimum of 12 months after the product reaches the end of support. Self-Help options include public knowledge base articles, FAQs and troubleshooting tools, which are typically available from http://support.microsoft.com and the Microsoft Download Center.

For more information, refer to http://blogs.msdn.com/b/sqlreleaseservices/archive/2010/09/29/end-of-support-for-sql-server-7-0.aspx.

Also, in 4 months (April 2011), mainstream support for SQL Server 2005 (RTM version) will end (this means no more hotfixes/Cumulative Updates after that, unless you have an Extended Hotfix Support agreement). Review the lifecycle link http://support.microsoft.com/lifecycle/?p1=2855 for additional information.


SQL Server Compression Troubleshooting January 5th, 2011

Vinod Kumar

Compression is a very niche topic, and not many people know about the internals of how it functions and what the whole process does. Anyway, I have always felt that, if not for anything else, backup compression is a sure winner if you have large databases and have to do maintenance in a short window of time. In the process of using compression, you will sometimes want to debug and troubleshoot possible problems – so here are certain tips for my readers !!!

Two performance monitor counters were added as part of the Data Compression feature. They are available under the SQL Server: Access Methods object.

Pages compressed/sec: This counter provides the number of data pages that are compressed by using PAGE compression. This includes all objects in this instance of SQL Server.

Page compression attempts/sec: This counter provides the number of pages evaluated for page-level compression. This includes pages that were not compressed because significant savings could not be achieved. This includes all objects in the instance of SQL Server.

You can consider these as an aggregation of the per-object information exposed in sys.dm_db_index_operational_stats.
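If you prefer T-SQL over Performance Monitor, a quick sketch of reading these counters through sys.dm_os_performance_counters (the counter names should match the ones described above):

/* Page-compression counters from the Access Methods object */
SELECT [object_name],
       counter_name,
       cntr_value
FROM   sys.dm_os_performance_counters
WHERE  counter_name IN (N'Pages compressed/sec',
                        N'Page compression attempts/sec')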

sp_estimate_data_compression_savings

Since there is an overhead associated with compressing data, users and customers need a mechanism by which they can estimate the extent of the space saving that can be attained using the different compression types on the various indexes of a table. This is done by the sp_estimate_data_compression_savings stored procedure. For the purposes of estimation, this stored procedure creates two tables in the tempdb database: one contains sampled data from the actual table that you want to compress, and the other is a compressed version of the first temporary table. The compression ratio is calculated by comparing the space used by these two temporary tables. The maximum number of pages that are sampled is 5000. There are some factors that can affect the compression ratios obtained during the estimation process and cause them to differ from the actual saving obtained; a sample call follows the list below.

  1. Since the process of estimation uses random sampling, actual compression ratios can vary depending on data distribution.
  2. If the original table or index is heavily fragmented, the compression process will remove all this fragmentation and give the perception of a higher compression ratio, when in fact the additional space saving gained is merely due to de-fragmentation.
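As a minimal sketch (the table name here is hypothetical), an estimate for PAGE compression across all indexes and partitions looks like this:

/* Estimate PAGE-compression savings for a hypothetical dbo.FactSales table */
EXEC sp_estimate_data_compression_savings
     @schema_name = 'dbo',
     @object_name = 'FactSales',
     @index_id = NULL,           -- NULL = all indexes
     @partition_number = NULL,   -- NULL = all partitions
     @data_compression = 'PAGE'  -- or 'ROW' / 'NONE'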

Msdb.dbo.backupset system table

This system table contains a history of backups, with a row describing each backup set created. It contains the backup_size and compressed_backup_size columns, which can be used to determine how effective backup compression has been.
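A quick sketch of checking recent compression ratios from this table:

/* Compression ratio (backup_size / compressed_backup_size) of recent backups */
SELECT TOP (20) database_name,
       backup_start_date,
       backup_size,
       compressed_backup_size,
       backup_size * 1.0 / compressed_backup_size AS compression_ratio
FROM   msdb.dbo.backupset
WHERE  compressed_backup_size > 0
ORDER  BY backup_start_date DESC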

Msdb.dbo.backupmediaset system table

This system table contains a history of how backup media sets were formatted, including the is_compressed column, which tells whether all of the files on the backup are compressed.

Restore Headeronly

This command returns various columns describing each backup on the file. The key columns for troubleshooting backup compression are:

  • BackupSize – reports the size of the backup in bytes, including MTF headers.
  • CompressedBackupSize – reports the compressed size of the backup. Dividing BackupSize by this value gives the compression ratio.

Restore Labelonly

This command returns information about how the backup media was formatted. The IsCompressed column reports whether the backup was created with or without compression enabled.
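A minimal sketch against a hypothetical backup file:

/* Inspect a hypothetical backup file for compression details */
RESTORE HEADERONLY FROM DISK = N'D:\Backups\AdventureWorks.bak'; -- per-backup BackupSize / CompressedBackupSize
RESTORE LABELONLY FROM DISK = N'D:\Backups\AdventureWorks.bak';  -- media-level IsCompressed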

Hope you found this info useful, and feel free to pass on your comments !!!
