

Virtualization Journey with SQL Server December 16th, 2013

Vinod Kumar

With the proliferation of applications in everyday use, enterprises have an increasing need for more database instances, such as SQL Server. Most applications have their own set of database requirements, leading to the use of multiple database versions and significant costs for the databases and related hardware. More importantly, the hardware deployed for databases is often not fully utilized, or there is a demand to scale up hardware resources to meet peak utilization on certain databases. It has therefore become important to control the cost of databases and related hardware, and to optimize and scale the use of hardware resources in different scenarios to provide better flexibility and maintain service level agreements (SLAs). An ideal solution to this problem is virtualization.

Virtualization is fairly common now. Many organizations worldwide have moved beyond the nascent stage into being more advanced users of server virtualization, specifically. These organizations have gained benefits in the areas of costs, efficiency, operations, availability, agility, and resiliency.

Microsoft SQL Server 2012 and Windows Server 2012 provide a host of new features that can be used to effectively virtualize demanding database workloads that previously were not considered for virtualization. Let me take baby steps into virtualization and what one needs to know in this topic area. This is most likely to be a multi-part post, so let me start with some basics.

Note: This is an important topic because close to 50% of the enterprises I get to talk with have already deployed their Tier-2 applications in virtualized environments, and close to 25% or more are planning to deploy in the immediate future. So the discussions I have with folks are always around best practices.

Why Virtualize?

Organizations today want the ability to consistently and coherently develop, deploy, and manage their services and applications across on-premises and cloud environments. New and enhanced capabilities in Windows Server 2012 Hyper-V can help customers reduce costs, while at the same time providing increased agility and flexibility.

Performance and Scalability: SQL Server can use an operating system's full processor and memory capacity, and Windows Server 2012 provides support for up to 640 logical processors (cores) across 64 sockets and up to 4 TB of memory (RAM).

The NIC Teaming feature in Windows Server 2012 helps improve the reliability of the networking system and ensures high availability of the network. It also improves network throughput for SQL Server data. (These days, I recommend this feature to all my customers.)

Windows Server 2012 features improvements in Server Message Block (SMB), including the ability to store SQL Server database files on remote file shares, and it offers multiple deployment options for database server storage. More about storage in future blogs.

Availability: Windows Server 2012 supports running SQL Server 2012 on Windows Server Core. This helps to reduce the surface area for attack and reduces the need to apply patches at the operating system level.

With the Cluster Aware Update feature of Windows Server 2012, maintenance of SQL Server cluster nodes can be automated, reducing downtime and improving reliability. With dynamic quorum management and the SQL Server AlwaysOn features, SQL Server clusters can achieve extremely high availability.

By combining Windows Server 2012 with System Center 2012, administrators can comprehensively manage demanding applications (such as SQL Server 2012 workloads) as well as the infrastructure (including physical and virtual resources) in an integrated and unified manner. Some of the benefits I discuss with customers are:

  1. Higher-capacity vCPUs (up to 64), memory (up to 1 TB), and virtual machine density (up to 8,000 per cluster).
  2. Hyper-V support for Non-Uniform Memory Access (NUMA) and Fibre Channel.
  3. Faster and simultaneous live migration, and dynamic quorum support in SQL Server AlwaysOn clusters.
  4. Same management tool (System Center) for SQL Server virtual machines in both private and public clouds.

Now that I have put down the fundamental building blocks and reasons to virtualize, I will blog in future posts about:

  1. Hardware innovations
  2. Compute considerations
  3. Storage considerations
  4. Networking considerations
  5. SQL Server configurations, etc.

As this is an important topic, let me know if you want me to write on a specific topic around virtualization and SQL Server. Also let me know if your organization is running SQL Server inside a virtualized environment. Are there problems that you wished were explained anywhere? Let me know and I will cover them in future posts.


SQL Server AlwaysOn–Adding node fails December 11th, 2013

Vinod Kumar

Last week I got a call from our internal deployment teams: when they were adding a node to the Windows Cluster for SQL Server AlwaysOn, the operation was timing out, apparently due to some sort of latency issue. It was interesting because the latency between the two datacenters, which were far apart (Delhi and Hyderabad), was around 28 ms. The best way for me to proceed was to understand what was happening behind the scenes.

First, I asked what their current setup was. They explained they were planning to achieve the architecture shown below. On the primary DC1 they were able to set up the two servers SRV1 and SRV2. Then, when they tried to add the third node (SRV3) to the Windows Cluster, it threw an error. The same happened when they tried to add the SRV4 node.

The usual ping, tracert, etc. yielded no useful information for us. In fact, there was no connectivity problem between the sites when this operation was performed. At this point it became clear that this was not a SQL Server AlwaysOn problem but a Windows Cluster problem.

The next step was to get the Cluster Error Logs; most importantly, we wanted to see the actual error message.

Failed to access remote registry on 'OLTPDB.POC.MYDOMAIN.IN'. Ensure that the remote registry service is running, and have remote administration enabled.

The error message was interesting because it didn’t mention anything about latencies and the deployment partner was actually troubleshooting in the wrong direction.

Actual Solution

It is interesting to see what could have gone wrong:

  1. The firewall was blocking the Remote Registry connection between SRV1 and SRV3/SRV4.
  2. Though cluster port 3343 was open, Remote Registry access is also needed.
  3. Remote Registry works over RPC, which uses dynamic ports plus port 135.

Once we figured out this problem, we had to involve the network team to open the firewall for the dynamic ports, or to open all communication for the specific cluster nodes on both sides.

If you are interested in knowing about Remote Registry and how to query the ports, use the Port Query Tool.
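The Port Query Tool is Windows-specific, but the basic reachability check it performs can be sketched in a few lines of Python. This is a minimal sketch, and the host name srv3.poc.mydomain.in is a placeholder standing in for your actual cluster node:

```python
import socket

def is_tcp_port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refusals, timeouts, and DNS failures
        return False

# Check the ports involved in Remote Registry / clustering against a node.
# 'srv3.poc.mydomain.in' is a hypothetical node name for illustration.
for port in (135, 445, 3343):
    print(port, is_tcp_port_open("srv3.poc.mydomain.in", port))
```

Note that a check like this only covers the fixed ports; the RPC dynamic port range still has to be verified with a tool that talks to the endpoint mapper, which is exactly what the Port Query Tool does.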

It is worth mentioning the port assignments for RPC:

Service Name          UDP                      TCP
HTTP                  80, 443, 593             80, 443, 593
Named Pipes           445                      445
RPC Endpoint Mapper   135                      135
RPC Server Programs   <Dynamically assigned>   <Dynamically assigned>

On the customer side, the port analysis revealed something interesting: we had to open port 445 in this specific case.

Port   Protocol   Mode      Status
1433   TCP        Two way   Listening
5022   TCP        Two way   Listening
3343   TCP/UDP    Two way   Listening
135    TCP        Two way   Listening
137    UDP        Two way   Listening
445    UDP/TCP    Two way   Not Listening

The issue was closed as soon as the team worked with the networking team. This was an important learning for us in multiple ways:

  1. SQL Server AlwaysOn is not just about SQL Server; we need to have our Windows team involved as well.
  2. Sometimes a Windows error message can lead us to network and infrastructure requirements that we normally overlook.
  3. It is teamwork among all parties (SQL administrator, Windows administrator, and network administrator).

Finally, I wanted to mention: learning is a journey, and I seem to learn something every single day.

This post is inspired by content from the book "SQL Server 2012 AlwaysOn Joes 2 Pros®: A Tutorial for Implementing High Availability and Disaster Recovery using AlwaysOn Availability Groups" (Paperback, Kindle).


Viewing Windows Azure Storage using SSMS December 2nd, 2013

Vinod Kumar

Moving to cloud-enabled services is inevitable, and in my day job I get a chance to work on various technologies, including Windows Azure. One of the tools I use heavily is the Azure Storage Explorer; it is an awesome tool when we need to view our storage account. While playing around with SQL Server Management Studio (SSMS), I found it interesting that we can browse Azure Storage using SSMS itself (lesser known to many, and available even with SQL 2012). This was an interesting hidden gem which I thought was worth sharing with all.

To start this journey, the first step is to get the details of your storage account. On your Azure portal, select "Storage", select the storage account (mtcvinod in my example), and select "Manage Access Keys".

You will be presented with the access keys to your storage account. I have greyed out the values for obvious reasons you can guess :) Select the "Copy" button (before the "Regenerate" button) and close the dialog.

Now we have all the required information to start SQL Server Management Studio. Go to Object Explorer -> Connect -> Azure Storage, as shown below.

We will be presented with a dialog asking which storage account to connect to. Key in your storage account name and the account key you just copied in the step above. Click Connect and we are good.

Now you will see a tree view in SQL Server Management Studio of all the blobs and objects inside that storage account.

I know this is just a start to viewing the Azure Storage account, and the Azure Storage Explorer does a lot more than what SSMS can do. But the quick access SSMS gives can always prove handy. Do let me know if you have ever used this feature in your environments.


SQL Server: Finding IO load distribution November 15th, 2013

Vinod Kumar

This blog post is a continuation of the various scripts I am putting online. This script is based on a customer requirement: they were buying a few SSDs and wanted to know which files / databases should be moved onto these fast drives. We need to start somewhere, and the first query gives you a high-level view of all the IO reads/writes across all the databases.

SELECT *
FROM   sys.dm_io_virtual_file_stats(NULL, NULL)

This is a great set of information to assimilate, but if you want to look at all the drives inside a given server and move specific files from slow or stressed drives to the new drive, the query below will surely help you.

SELECT LEFT(MF.physical_name, 1)     AS DRIVE_LETTER,
       VFS.sample_ms,
       SUM(VFS.num_of_writes)        AS TOTAL_NUM_OF_WRITES,
       SUM(VFS.num_of_bytes_written) AS TOTAL_NUM_OF_BYTES_WRITTEN,
       SUM(VFS.io_stall_write_ms)    AS TOTAL_IO_STALL_WRITE_MS,
       SUM(VFS.num_of_reads)         AS TOTAL_NUM_OF_READS,
       SUM(VFS.num_of_bytes_read)    AS TOTAL_NUM_OF_BYTES_READ,
       SUM(VFS.io_stall_read_ms)     AS TOTAL_IO_STALL_READ_MS,
       SUM(VFS.io_stall)             AS TOTAL_IO_STALL,
       SUM(VFS.size_on_disk_bytes)   AS TOTAL_SIZE_ON_DISK_BYTES
FROM   sys.master_files MF
       JOIN sys.dm_io_virtual_file_stats(NULL, NULL) VFS
         ON MF.database_id = VFS.database_id
            AND MF.file_id = VFS.file_id
GROUP  BY LEFT(MF.physical_name, 1),
          VFS.sample_ms

Look at the IO stalls for each drive and, based on which drives show the higher values, consider moving load from the stressed drives to another drive.
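To make "look for IO stalls" concrete, here is a small, hypothetical post-processing helper. Given rows shaped like the query output above (the dictionary keys are assumed to match the column aliases), it ranks drives by average stall per IO, worst first:

```python
def rank_drives_by_stall(rows):
    """Rank drives by average IO stall (ms per IO), worst first.

    Each row is a dict keyed by the column aliases of the query above.
    """
    ranked = []
    for r in rows:
        ios = r["TOTAL_NUM_OF_READS"] + r["TOTAL_NUM_OF_WRITES"]
        # TOTAL_IO_STALL is the cumulative time (ms) spent waiting on IO.
        ranked.append((r["DRIVE_LETTER"],
                       r["TOTAL_IO_STALL"] / ios if ios else 0.0))
    return sorted(ranked, key=lambda t: t[1], reverse=True)

# Illustrative numbers, not from a real server.
sample = [
    {"DRIVE_LETTER": "D", "TOTAL_NUM_OF_READS": 1000,
     "TOTAL_NUM_OF_WRITES": 500, "TOTAL_IO_STALL": 90000},
    {"DRIVE_LETTER": "E", "TOTAL_NUM_OF_READS": 2000,
     "TOTAL_NUM_OF_WRITES": 2000, "TOTAL_IO_STALL": 20000},
]
print(rank_drives_by_stall(sample))  # D averages 60 ms per IO vs 5 ms on E
```

In this hypothetical sample, drive D is the candidate whose files should move to the new SSDs, even though drive E performs more IO in absolute terms.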

Continue reading...


 

SSRS Tuning Query-Part III November 14th, 2013

Vinod Kumar

This is a continuation of the SSRS Tuning Query–Part II series that I have been writing. This particular query parses the AdditionalInfo XML data for the top 10 longest-duration reports from our Reporting Services environments.

SELECT TOP 10
       COALESCE(c.path, 'Unknown')     AS ReportPath,
       DATEDIFF(s, timestart, timeend) AS ReportExecutionDuration_Seconds,
       els.[rowcount],
       CASE els.source
         WHEN 1 THEN 'Live'
         WHEN 2 THEN 'Cache'
         WHEN 3 THEN 'Snapshot'
         WHEN 4 THEN 'History'
         WHEN 5 THEN 'AdHoc'
         WHEN 6 THEN 'Session'
         WHEN 7 THEN 'Rdce'
         ELSE 'Unknown'
       END AS Source,
       CASE ( reportaction )
         WHEN 1 THEN 'Render'
         WHEN 2 THEN 'BookmarkNavigation'
         WHEN 3 THEN 'DocumentMapNavigation'
         WHEN 4 THEN 'DrillThrough'
         WHEN 5 THEN 'FindString'
         WHEN 6 THEN 'GetDocumentMap'
         WHEN 7 THEN 'Toggle'
         WHEN 8 THEN 'Sort'
         ELSE 'Unknown'
       END AS ReportAction,
       CASE els.additionalinfo.value('(/AdditionalInfo/ProcessingEngine)[1]', 'integer')
         WHEN 1 THEN 'SQL 2005'
         WHEN 2 THEN 'On-Demand'
         ELSE 'Unknown'
       END AS ProcessingEngine,
       els.timerendering,
       els.timeprocessing,
       els.additionalinfo.value('(/AdditionalInfo/ScalabilityTime/Pagination)[1]', 'integer') AS PaginationScalabilityTime,
       els.additionalinfo.value('(/AdditionalInfo/ScalabilityTime/Rendering)[1]', 'integer')  AS RenderingScalabilityTime,
       els.additionalinfo.value('(/AdditionalInfo/ScalabilityTime/Processing)[1]', 'integer') AS ProcessingScalabilityTime,
       els.additionalinfo.value('(/AdditionalInfo/EstimatedMemoryUsageKB/Pagination)[1]', 'integer') AS PaginationMemoryKB,
       els.additionalinfo.value('(/AdditionalInfo/EstimatedMemoryUsageKB/Rendering)[1]', 'integer')  AS RenderingKB,
       els.additionalinfo.value('(/AdditionalInfo/EstimatedMemoryUsageKB/Processing)[1]', 'integer') AS ProcessingKB
FROM   executionlogstorage els
       INNER JOIN dbo.catalog c
               ON c.itemid = els.reportid
ORDER  BY ReportExecutionDuration_Seconds DESC
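If you ever need to inspect the AdditionalInfo XML outside of T-SQL, the same element paths can be read with any XML parser. Here is a minimal sketch in Python against a hand-made AdditionalInfo fragment; the values are illustrative, but the element names are the ones the query above extracts:

```python
import xml.etree.ElementTree as ET

# A hand-made AdditionalInfo fragment for illustration only; real entries
# in ExecutionLogStorage carry these same element names.
xml_text = """
<AdditionalInfo>
  <ProcessingEngine>2</ProcessingEngine>
  <ScalabilityTime>
    <Pagination>0</Pagination>
    <Processing>12</Processing>
  </ScalabilityTime>
  <EstimatedMemoryUsageKB>
    <Pagination>6094</Pagination>
    <Processing>10252</Processing>
  </EstimatedMemoryUsageKB>
</AdditionalInfo>
"""

root = ET.fromstring(xml_text)
engine = int(root.findtext("ProcessingEngine"))
processing_kb = int(root.findtext("EstimatedMemoryUsageKB/Processing"))
# Same decode as the CASE expression in the query above.
print("On-Demand" if engine == 2 else "SQL 2005", processing_kb)
```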

This search for newer and more interesting queries will continue, and I will keep sharing some of them with you from time to time as I work with multiple customer environments. I hope you are finding it interesting.
