Archive for September, 2010

SQL Server – Webcast Series September 28th, 2010

Vinod Kumar

Continuing with our promise of providing you with in-depth insights into the latest Microsoft technologies, Microsoft brings you a series of exclusive webcasts on SQL Server from 29th September to 5th October, 2010. Join these demystifying sessions by industry veterans for rare insights into the best practices that help you better leverage the cool features of this technology.

Date & Time / Webcast Topic

29th Sept., 2010 (2:30 pm – 3:45 pm)
Plan Caching and Recompilation with SQL Server, by Amit Bansal


Performance tuning is a broad area, and SQL Server has many hidden features that remain under-explored. One such concept is plan caching, introduced with SQL Server 2005; this feature has since been enhanced and made more accessible to developers and admins alike. Join this demystifying session to learn more about plan caching and to understand how recompilations, if any, affect performance.
Click here on the day of the event to join the session!
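For readers who want a head start before the session, the plan cache can be inspected through the dynamic management views introduced in SQL Server 2005. A minimal sketch (the TOP count and ordering are arbitrary choices for illustration):

```sql
-- List the most-reused cached plans along with their query text;
-- sys.dm_exec_cached_plans and sys.dm_exec_sql_text are available
-- from SQL Server 2005 onward
SELECT TOP (20)
       cp.usecounts,          -- number of times the plan was reused
       cp.objtype,            -- e.g. Adhoc, Prepared, Proc
       st.text                -- the statement behind the plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```

Plans with a high usecounts are being reused efficiently; a cache full of single-use ad hoc plans is one symptom the session is likely to discuss.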

30th Sept., 2010 (2:30 pm – 3:45 pm)
High Availability Options Explored with SQL Server, by Balmukund Lakhani


SQL Server offers plenty of high-availability options. Most of us use options like log shipping, mirroring, clustering, and replication, yet we are not always sure when to use one over the other. This session covers best practices for using these HA options and the 'don'ts' that need to be followed when using certain of them.
Click here on the day of the event to join the session!

1st Oct., 2010 (2:30 pm – 3:45 pm)
SQL Server Reporting Services Performance Monitoring and Tuning, by Arun Balachandran


Tuning SQL Server Reporting Services is one of the many challenges every BI deployment engineer faces. This session talks about some of the tuning aspects of Reporting Services introduced with SQL Server 2008 R2, and provides insights into the on-demand rendering engine that enhances SSRS behavior in the R2 release. Do join in to learn more about the renderer and about using the Execution Log for performance troubleshooting.
Click here on the day of the event to join the session!

4th Oct., 2010 (2:30 pm – 3:45 pm)
Practical Performance Troubleshooting – Best Practices, by Praveen Srivatsa


Database performance tuning is one of the top issues on the mind of any DBA or developer. This session covers practical tips ranging from standard out-of-the-box tuning to some interesting internals that a seasoned DBA must be aware of. We will talk about index maintenance, VLF usage verification, and more. This is something you don't want to miss if you want to be prepared for any mishap in your production environment.
Click here on the day of the event to join the session!

5th Oct., 2010 (2:30 pm – 3:45 pm)
Achieve Enterprise-wide Compliance Standards using PBM, by Venkatesan Prabu Jayakantham


Nowadays, maintaining security standards and adhering to compliance requirements has become a challenging task. This session provides hands-on experience with creating policies and with evaluating company standards against them. We will take a practical problem and solve it straight away with a built-in policy, and then get in-depth experience creating our own user-defined policy. The session is flavored with three different angles on policy creation to fulfill compliance standards across enterprise-wide servers.
Click here on the day of the event to join the session!



Parallel DBCC– SQL Server September 27th, 2010

Vinod Kumar

SQL Server 2005+ Enterprise Edition gives you the added boost to verify your data quickly and efficiently with multiprocessor support. Database console commands (DBCC) such as DBCC CHECKDB, DBCC CHECKFILEGROUP, and DBCC CHECKTABLE check the allocation and integrity of the database objects. In short, these commands are used to ensure that the database is free from corruption—pages are linked properly, the data is valid, and page offsets are correct. It is important to execute these checks against your healthy system to be sure that any internal database problems are corrected early, any hardware or database issues that cause corruption are detected and corrected, and that your database is responding in the appropriate manner to application requests. When you execute DBCC CHECKDB, DBCC CHECKFILEGROUP, and DBCC CHECKTABLE against a large database or table stored in the Enterprise Edition of SQL Server, the Database Engine may check multiple objects in parallel if system resources are available and the Database Engine deems the load would benefit from parallel processing. The query processor reevaluates and automatically adjusts parallelism with each table or batch of tables checked.

Note: Parallel DBCC should typically be left enabled, but it can be disabled by using trace flag 2528.
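As a sketch, assuming an illustrative database named AdventureWorks, a full integrity check and the trace-flag switch look like this:

```sql
-- Run a full integrity check; on Enterprise Edition the engine may
-- check multiple objects in parallel when resources allow
DBCC CHECKDB ('AdventureWorks') WITH NO_INFOMSGS;

-- Disable parallel checking instance-wide (-1 = global scope),
-- e.g. when diagnosing resource pressure during maintenance
DBCC TRACEON (2528, -1);

-- Re-enable parallel DBCC afterwards
DBCC TRACEOFF (2528, -1);
```

Disabling the flag forces single-threaded checks, so only do so for a specific troubleshooting need, as the note above suggests.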

Checking database objects in parallel reduces maintenance and operational costs and improves database availability. Large databases may require a long maintenance window to complete validation, which can affect operations during normal business hours. By using multiple processors to complete maintenance operations, SQL Server 2005 Enterprise Edition completes database validation more quickly, which frees up system resources and tempdb, and in some cases reduces locking during your vital production hours.




Scalable Shared Databases –SQL Server September 24th, 2010

Vinod Kumar

The scalable shared database feature provides a solution to scale out a read-only reporting database. Using commodity hardware for servers and volumes, multiple SQL Server instances attach to a single copy of the reporting database stored on the Storage Area Network (SAN). This equates to a single copy of the reporting data files, which reduces storage requirements across your environment.

The reporting database must reside on a set of dedicated, read-only volumes whose primary purpose is hosting the database. After the reporting database is built on a set of reporting volumes, the volumes are marked as read-only and mounted to multiple reporting servers. On each reporting server, the reporting database is then attached to an instance of Microsoft SQL Server and becomes available as a scalable shared database. Once established as a scalable shared database, a reporting database can be shared by clients that use different reporting servers. To query the database, a user or application can connect to any server instance to which the database is attached. For a given version of a reporting database, clients on different servers obtain an identical view of the reporting data, making query results consistent across servers.
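The attach step described above can be sketched as follows, assuming an illustrative database name and file paths on the already read-only reporting volume:

```sql
-- On each reporting server instance, attach the shared copy of the
-- data files sitting on the read-only SAN volume (paths and database
-- name are illustrative)
CREATE DATABASE Sales_Reporting
ON (FILENAME = 'E:\ReportingVolume\Sales_Reporting.mdf'),
   (FILENAME = 'E:\ReportingVolume\Sales_Reporting_log.ldf')
FOR ATTACH;
```

Because the volume is marked read-only before mounting, every instance attaches the same physical files, and clients connecting to any of the instances see an identical view of the data.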

Benefits: A scalable shared database presents a number of benefits.

  • Introduces workload scale-out of reporting databases using commodity servers. A scalable shared database is a cost-effective way of making read-only data stores or data warehouses accessible to multiple server instances for reporting purposes, such as running queries or using SQL Server 2005 Reporting Services.

  • Provides workload isolation. Each server uses its own memory, CPU, and tempdb database. This prevents a runaway query from monopolizing all resources and affecting other queries. This also benefits reporting environments that make heavy use of work tables and other tempdb objects.

  • Guarantees an identical view of reporting data from all servers. All attached reporting applications use an identical snapshot of the data, which ensures consistent reporting across the enterprise. This assumes that all of the server instances are configured identically. For example, all servers would use a single collation.

Only SQL Server Enterprise Edition supports this cost-effective scale-out solution for your most demanding reporting requirements. Scalable shared database (SSD) support for the relational engine was introduced in SQL Server 2005, and the support was extended to SQL Server Analysis Services from the 2008 editions.

I highly recommend reading this KB article on SSD implementation details.




P&P: Developing Applications for the Cloud September 22nd, 2010

Vinod Kumar

Tailspin is a fictitious startup ISV company of approximately 20 employees that specializes in developing solutions using Microsoft® technologies. The developers at Tailspin are knowledgeable about various Microsoft products and technologies, including the .NET Framework, ASP.NET MVC, SQL Server®, and Microsoft Visual Studio® development system. These developers are aware of Windows Azure but have not yet developed any complete applications for the platform.

The Surveys application is the first of several innovative online services that Tailspin wants to take to market. As a startup, Tailspin wants to develop and launch these services with a minimal investment in hardware and IT personnel. Tailspin hopes that some of these services will grow rapidly, and the company wants to have the ability to respond quickly to increasing demand. Similarly, it fully expects some of these services to fail, and it does not want to be left with redundant hardware on its hands.

The Surveys application enables Tailspin’s customers to design a survey, publish the survey, and collect the results of the survey for analysis. A survey is a collection of questions, each of which can be one of several types such as multiple-choice, numeric range, or free text. Customers begin by creating a subscription with the Surveys service, which they use to manage their surveys and to apply branding by using styles and logo images. Customers can also select a geographic region for their account, so that they can host their surveys as close as possible to the survey audience.

The architecture of the Surveys Application is straightforward and one that many other Windows Azure applications use. The core of the application uses Windows Azure web roles, worker roles, and storage. It also highlights how the application uses SQL Azure™ technology platform to provide a mechanism for subscribers to dump their survey results into a relational database to analyze the results in detail.

"The Tailspin Scenario" introduces you to the Tailspin company and the Surveys application. It provides an architectural overview of the Surveys application; the following chapters provide more information about how Tailspin designed and implemented the Surveys application for the cloud. Reading this chapter will help you understand Tailspin’s business model, its strategy for adopting the cloud platform, and some of its concerns.

"Hosting a Multi-Tenant Application on Windows Azure" discusses some of the issues that surround architecting and building multi-tenant applications to run on Windows Azure. It describes the benefits of a multi-tenant architecture and the trade-offs that you must consider. This chapter provides a conceptual framework that helps the reader understand some of the topics discussed in more detail in the subsequent chapters.

"Accessing the Surveys Application" describes some of the challenges that the developers at Tailspin faced when they designed and implemented some of the customer-facing components of the application. Topics include the choice of URLs for accessing the surveys application, security, hosting the application in multiple geographic locations, and using the Content Delivery Network to cache content.

"Building a Scalable, Multi-Tenant Application for Windows Azure" examines how Tailspin ensured the scalability of the multi-tenant Surveys application. It describes how the application is partitioned, how the application uses worker roles, and how the application supports on-boarding, customization, and billing for customers.

"Working with Data in the Surveys Application" describes how the application uses data. It begins by describing how the Surveys application stores data in both Windows Azure tables and blobs, and how the developers at Tailspin designed their storage classes to be testable. The chapter also describes how Tailspin solved some specific problems related to data, including paging through data, and implementing session state. Finally, this chapter describes the role that SQL Azure™ technology platform plays in the Surveys application.

"Updating a Windows Azure Service" describes the options for updating a Windows Azure application and how you can update an application with no interruption in service.

"Debugging and Troubleshooting Windows Azure Applications" describes some of the techniques specific to Windows Azure applications that will help you to detect and resolve issues when building, deploying, and running Windows Azure applications. It includes descriptions of how to use Windows Azure Diagnostics and how to use Microsoft IntelliTrace™ with applications deployed to Windows Azure.

Click here to download this release.

An extension to this scenario is being developed for mobile users using Windows Phone 7 devices. Early versions of this are available here:



Mirrored Backups with SQL Server September 20th, 2010

Vinod Kumar

The Enterprise Edition of SQL Server 2005 and later introduces mirroring of backup media sets to provide redundancy for your critical database backups. Mirroring a media set increases backup reliability by reducing the impact of backup-device malfunctions. These malfunctions are very serious because backups are the last line of defense against data loss. SQL Server Standard Edition supports only a single backup copy during your backup operations; depending on your requirements, Enterprise Edition allows you to create up to four mirrors of a media set. A unique yet powerful option.
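A minimal sketch of taking and then restoring a mirrored backup (the database name and paths are illustrative):

```sql
-- Back up to a primary media set plus one mirror; up to three
-- MIRROR TO clauses are allowed. WITH FORMAT is required when a
-- mirrored media set is first created.
BACKUP DATABASE AdventureWorks
TO DISK = 'D:\Backups\AdventureWorks.bak'
MIRROR TO DISK = 'E:\BackupMirror\AdventureWorks.bak'
WITH FORMAT;

-- If the primary backup media is damaged, restore from the mirror,
-- since it contains identical content
RESTORE DATABASE AdventureWorks
FROM DISK = 'E:\BackupMirror\AdventureWorks.bak';
```

Placing each mirror on a separate physical device is the point of the feature; mirrors on the same disk would share the same failure mode.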

Each disk file in the backup must have a corresponding file in the mirrored media set. Suppose a crisis occurs that requires a database restore, and during the restore one of the backup files returns a critical error. Because the mirror files contain identical content, you can complete the restore from the mirror easily. Without the mirrored files, your restore operation might have required database and log backups from many days or weeks earlier, which would significantly increase the time to restore and introduce more risk to the process.

Benefits: Mirrored backup media sets improve availability by minimizing downtime during restore operations. A damaged backup could result in a longer restore time, or a restore failure. As databases grow, the probability increases that the failure of a backup device or media will make a backup unrestorable. Restoring a database is time sensitive, and mirrored backup media sets give you added protection to get your application fully functional more quickly.
