

Quick Comparison of RAID Levels August 21st, 2013

Vinod Kumar

This blog post is inspired by the many customers I meet every day as part of my job. Customers often ask me which RAID levels to use with SQL Server. The basics and best practices vary by the type of file and by how SQL Server accesses it. As a quick recap: data files need random access, log files need sequential access, and tempdb belongs on the fastest drive. All of these are general rules of thumb, not something set in stone.

Here is a quick summary of what I suggest, across several dimensions, for each of the RAID levels.

| Dimension | RAID-0 | RAID-1 | RAID-5 or RAID-6 | RAID-10 |
|---|---|---|---|---|
| Reliability | Lowest. No fault tolerance results in data loss | Very good. Even better with duplexing | Good. Can tolerate single machine fault | Excellent |
| Storage Efficiency | 100% | 50% | >50%, <100% | 50% |
| Random Read | Excellent | Fair. Worst of the RAID levels | Fair | Very good |
| Random Write | Excellent | Fair | Very good | Very good |
| Sequential Read | Excellent | Very good. Comparable to a single drive | Good. Generally better with smaller stripe sizes | Good |
| Sequential Write | Excellent | Very good. Better than other RAID levels | Fair | Good |
| Cost | Lowest | Moderate. Relatively high cost due to redundant drives | Moderate | High |
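A footnote of mine, not from the original table: the storage-efficiency figures follow directly from how each level spends drives on redundancy. Using standard RAID capacity arithmetic for an array of n identical drives:

```latex
\text{RAID-0: } \tfrac{n}{n} = 100\%, \qquad
\text{RAID-1 / RAID-10: } \tfrac{1}{2} = 50\%, \qquad
\text{RAID-5: } \tfrac{n-1}{n}, \qquad
\text{RAID-6: } \tfrac{n-2}{n}
% Example: a 4-drive RAID-5 set keeps 3 drives' worth of usable space (75%),
% which is why the table can only say ">50%, <100%" without knowing n.
```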

These are important considerations before we, as administrators, build our own RAID arrays. In practice, the choice usually comes down to RAID-5 or RAID-10. Please use the table above as a quick reference, and do let me know if you find more dimensions to compare.


Basics of SQL Server Execution Plans August 19th, 2013

Vinod Kumar

Chess in its modern form dates back to around the 16th century, and it is still one of the most popular board games, now and throughout history. It has been played by millions of players and continues to rise in popularity despite competition from all sorts of technologically advanced games. Each player gets sixteen pieces: a king, a queen, two rooks, two knights, two bishops, and eight pawns. The board is made up of the same black and white squares no matter where you play, and every player has the same pieces. And yet, out of all the games of chess ever played, millions upon millions, no two have been identical.

How is this possible? Because chess is about strategy. It is about being at your best, executing smartly, and planning your moves. All games of chess are different because different minds are playing them, and there are so many variables – sixteen pieces! – that need to work together under a strategy leading to one final execution plan.

Now let’s link this concept to SQL Server. There are many pieces that play a part in SQL Server, and they tend to stay the same from day to day and even from business to business. The main components are the data, the CPU, memory, the SQL Server application, and the hard drive. However, as any DBA knows, this does not mean SQL Server behaves the same from day to day. Why is this?

SQL Server changes all the time because, even while you are working with the same “chess men,” every execution plan behaves differently: different CPU cycles, different kinds of joins, different IO patterns, different memory conditions.

Here is where the metaphor ends: there is a way to win at chess, but there are only ways to improve SQL Server; there is no trophy at the end. The only way to “win” is to use the best execution plan that can be drawn.

Some history and background

Query optimization is often considered a new subject, but if you look back in time, many of today’s best practices were discovered over 30 years ago. Here are a few of the landmark research papers.

1978 – Nonprocedural Query Processing for Databases with Access Paths. Nancy D. Griffeth. ACM SIGMOD Conference 1978.

1978 – Distributed Query Processing in a Relational Data Base System. Robert S. Epstein, Michael Stonebraker, Eugene Wong. ACM SIGMOD Conference 1978.

1979 – Parallel Processing of Relations: A Single-Assignment Approach. Joachim W. Schmidt. VLDB 1979.

1980 – A Methodology for Interpreting Tree Queries Into Optimal Semi-Join Expressions. D. M. Chiu, Y. C. Ho. ACM SIGMOD Conference 1980.

1980 – On Retrieval from a Small Version of a Large Data Base. Adrian Walker. VLDB 1980.

1981 – QUIST: A System for Semantic Query Optimization in Relational Databases. Jonathan J. King. VLDB 1981.

1981 – Optimal Semijoin Schedules For Query Processing in Local Distributed Database Systems. Mohamed G. Gouda, Umeshwar Dayal. ACM SIGMOD Conference 1981.

1982 – Query Processing for Distributed Databases Using Generalized Semi-Joins. Yahiko Kambayashi, Masatoshi Yoshikawa, Shuzo Yajima. ACM SIGMOD Conference 1982.

1983 – Estimating Block Transfers and Join Sizes. Stavros Christodoulakis. ACM SIGMOD Conference 1983.

The amount of research related to query processing has grown exponentially in the last 30 years. History shows that, in those early years, joins were the main point of interest for the query-tuning wizards of the day.

Execution Plans

SQL Server execution plans are broken into two parts: query (compiled) plans and execution contexts. Query plans can be shared by many users, whereas execution contexts cannot. Cached query plans are exposed through sys.dm_exec_cached_plans.

There are never more than two copies of the query plan in memory. There is one copy for all serial executions and there is possibly a second for all parallel executions. The parallel copy covers all parallel executions, regardless of their degree of parallelism.

The query plan is generated at compile time and is reused until a recompile happens (due to a service restart, a configuration change, or a user request). The query plan does not carry any user or runtime information. Execution contexts contain runtime information, such as parameter values, and are specific to the session of the executing user.
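To see plan reuse in action, here is a minimal sketch of a plan-cache query (assuming you have VIEW SERVER STATE permission; the column choice is mine, but the DMV and its usecounts column are standard):

```sql
-- Cached plans ordered by how often each has been reused
SELECT cp.usecounts,         -- number of times this plan was reused
       cp.objtype,           -- Adhoc, Prepared, Proc, ...
       cp.cacheobjtype,      -- Compiled Plan, Parse Tree, ...
       cp.size_in_bytes,
       st.text               -- the statement the plan was compiled for
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;
```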

Note The execution context data structures are reusable. When multiple users execute a stored procedure concurrently, each user will be given a separate execution context.

Multiple execution contexts can exist, especially in high concurrency environments. If a new user executes a procedure, and one of the existing execution contexts is not in use, it is reinitialized with the context for the new user.

Note The execution context maintains state information, such as the values of local variables/parameters for that execution that is specific to a given invocation of a query.

To improve plan stability, one deliberate design choice starting with SQL Server 2005 was to keep compiled plans in cache as long as possible. The architecture was changed so that a compiled plan keeps pointers to the execution context(s) associated with it.

Under memory pressure, the execution contexts are removed first, since they can easily be regenerated. The compiled plans are removed only if there is still insufficient memory after removing the execution contexts. Hence, in environments with heavy ad-hoc workloads, the plan cache can grow larger than expected.
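A quick way to check whether single-use ad-hoc plans are inflating the cache is to group it by object type. This is a sketch of mine, not from the original post:

```sql
-- Plan cache breakdown: many single-use plans hint at ad-hoc bloat
SELECT cp.objtype,
       COUNT(*)                                          AS total_plans,
       SUM(CASE WHEN cp.usecounts = 1 THEN 1 ELSE 0 END) AS single_use_plans,
       SUM(CAST(cp.size_in_bytes AS bigint)) / 1048576   AS cache_mb
FROM sys.dm_exec_cached_plans AS cp
GROUP BY cp.objtype
ORDER BY cache_mb DESC;
```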

Query Plans are Fingerprints

The query plan should be read from right to left and top to bottom. A query plan shows:

  1. How data is accessed – which indexes are used, and whether a seek or a scan was performed.
  2. How data is joined – the order in which tables are joined and the algorithm used to join them.
  3. How data is aggregated – the aggregation algorithm used if the query contains SUM, AVG and so on.
  4. The sequence of operations.
  5. The use of temporary worktables – the optimizer may create temporary worktables, for example when the query contains a sort.
  6. Estimated rowcounts – how many rows the optimizer predicted would be returned by the query.
  7. Actual rowcounts – the number of rows actually returned by the query.
  8. The use of parallelism – whether or not a parallel query plan was used.
  9. Operator-level warnings, from SQL Server 2012 onwards.
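To pull these plans out of the cache for inspection, a minimal sketch looks like the following (VIEW SERVER STATE is again assumed; the XML in query_plan opens as a graphical plan when clicked in Management Studio):

```sql
-- Fetch the statement text and the XML showplan for each cached plan
SELECT st.text AS statement_text,
       qp.query_plan         -- XML showplan; click it in SSMS to view graphically
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)   AS st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp;
```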

Query and plan fingerprints were introduced in SQL Server 2008. The query fingerprint is generated by running the text of the SQL statement through a hash function. If two SQL statements hash to the same value, they are probably the same statement, differing only in literal values.

The query fingerprint is exposed as the query_hash column in sys.dm_exec_requests and sys.dm_exec_query_stats.

Note The query fingerprint is also exposed as the queryhash attribute of the XML query plan.

The query hash can be used to determine whether the plan cache contains many similar queries that cannot be auto-parameterized. This situation occurs more frequently when an application issues ad-hoc SQL statements.
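For example, grouping sys.dm_exec_query_stats by query_hash surfaces statements that differ textually but share the same shape; the HAVING threshold below is an arbitrary choice of mine:

```sql
-- Find families of near-identical ad-hoc statements by their fingerprint
SELECT qs.query_hash,
       COUNT(*)                AS similar_statements,
       SUM(qs.execution_count) AS total_executions
FROM sys.dm_exec_query_stats AS qs
GROUP BY qs.query_hash
HAVING COUNT(*) > 1             -- more than one cached variant of the same shape
ORDER BY total_executions DESC;
```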

SQL Server 2008 added two DMVs to identify expensive procedures and triggers. To identify expensive procedures, use sys.dm_exec_procedure_stats. This DMV can be used to identify expensive T-SQL stored procedures, CLR stored procedures, or extended stored procedures; the type and type_desc columns indicate which kind of procedure a row refers to.

To identify expensive triggers, use sys.dm_exec_trigger_stats. This DMV is functionally equivalent to sys.dm_exec_procedure_stats, but it provides data about the overall time spent inside triggers only. The information is available for both SQL and CLR triggers.
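As a sketch (sorting by total_worker_time is my choice; any of the cost columns would do), the two DMVs can be queried like this:

```sql
-- Top cached procedures by total CPU time
SELECT TOP (10)
       DB_NAME(ps.database_id)                   AS database_name,
       OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
       ps.type_desc,                             -- SQL, CLR, or extended procedure
       ps.execution_count,
       ps.total_worker_time / 1000               AS total_cpu_ms
FROM sys.dm_exec_procedure_stats AS ps
ORDER BY ps.total_worker_time DESC;

-- The same idea for triggers
SELECT TOP (10)
       DB_NAME(ts.database_id)                   AS database_name,
       OBJECT_NAME(ts.object_id, ts.database_id) AS trigger_name,
       ts.execution_count,
       ts.total_worker_time / 1000               AS total_cpu_ms
FROM sys.dm_exec_trigger_stats AS ts
ORDER BY ts.total_worker_time DESC;
```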

Note Use sp_executesql when a single user might use the same batch multiple times and the parameters are known.
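A minimal sp_executesql sketch (the table and column names here are hypothetical, purely for illustration):

```sql
-- Parameterized batch: repeated calls with different values reuse one plan
EXEC sp_executesql
     N'SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID',
     N'@CustomerID int',   -- parameter definition
     @CustomerID = 42;     -- value for this invocation
```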

In SQL Server, using parameters or parameter markers in Transact-SQL statements increases the ability of the relational engine to match new SQL statements with existing execution plans. If a SQL statement is executed without parameters, SQL Server parameterizes the statement internally to increase the chance of matching it against an existing plan; this process is called auto-parameterization.

Rely on auto-parameterization only for applications that cannot easily be modified. Auto-parameterization is useful in a very limited number of scenarios, so avoid designing applications that depend on it.

When the PARAMETERIZATION database option is set to FORCED, any literal value that appears in a SELECT, INSERT, UPDATE, or DELETE statement is converted to a parameter during query compilation. Some queries, however, cannot be parameterized.

Forced parameterization might improve performance in environments where high volumes of concurrent ad-hoc SQL statements are submitted, such as from ERP or CRM systems.
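Enabling it is a one-line database setting; SalesDB below is a placeholder name:

```sql
-- Turn forced parameterization on for one database (placeholder name)
ALTER DATABASE SalesDB SET PARAMETERIZATION FORCED;

-- Revert to the default behavior
ALTER DATABASE SalesDB SET PARAMETERIZATION SIMPLE;
```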

Final words

I have been wanting to write about these concepts for a while, and I hope this can be a good start for your learning. These are fundamental notes on execution plans, and they are only a starting point. Keep learning and reading.


Excel Functions: ROMAN, ARABIC and FORMULATEXT August 16th, 2013

Vinod Kumar

If you are a power user of Excel, there is no way you could have lived without the tons of awesome functions available in Excel. In this blog post, let me talk about three such functions that are simple yet powerful. These are not the traditional functions; they are used for special purposes. The functions are as below:

ROMAN Function

How many of you have wanted, while helping with your kids’ homework, to convert numbers into Roman numerals? Here is the magic function: ROMAN. Below are the options available with the ROMAN function; do play around with them to learn the differences.

In the example below, I have used the “0 – Classic” option.
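Since the original screenshot is not reproduced here, here is a sketch of what the different form arguments produce (these input/output pairs follow ROMAN’s documented behavior; form 0, the default, is Classic, and higher form numbers get progressively more concise):

```
=ROMAN(499)      → CDXCIX   (form 0 / Classic, same as omitting the argument)
=ROMAN(499,1)    → LDVLIV   (more concise)
=ROMAN(499,2)    → XDIX
=ROMAN(499,3)    → VDIV
=ROMAN(499,4)    → ID       (simplified)
```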

ARABIC Function

This is an interesting function: it is the reverse of what we did before. Assume you want to convert Roman numerals back to ordinary numbers; ARABIC is the function to look at. Pass it a Roman numeral and you will get the integer value.
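For instance (ARABIC was introduced in Excel 2013, so this assumes that version or later):

```
=ARABIC("MMXIII")    → 2013
=ARABIC("CDXCIX")    → 499
```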

FORMULATEXT Function

Now, this is not related to the ROMAN and ARABIC functions, but when playing around with Excel I sometimes need to show the formula behind a cell. Many a time I press F2, copy the formula, and paste it with a leading apostrophe (‘). There is an even better way to do this: the FORMULATEXT function. A typical use of FORMULATEXT is shown below.
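A small sketch with hypothetical cell addresses: if B2 holds a formula, FORMULATEXT in another cell returns that formula as text. Like ARABIC, FORMULATEXT requires Excel 2013 or later.

```
B2:  =RANDBETWEEN(1,100)    → e.g. 57 (recalculates)
C2:  =FORMULATEXT(B2)       → "=RANDBETWEEN(1,100)"
```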

I plan to write about many more Excel formulas in the future. Do let me know if you want me to cover any of them in particular, and I will line them up! Thanks for reading this far.


Excel Tip: Shortcuts with Tables / Regions August 12th, 2013

Vinod Kumar

Last week I wrote about Copy Values from Formula cells, and a number of folks expressed interest in learning more shortcuts for working with tables. In this blog post I will show some commonly used shortcuts for tables inside Excel. I am sure there are many more in Excel that you would like to share with our readers; please feel free to drop a line in the comments section.

For this example, I am going to use a simple table as shown above. Our cursor is in cell C4, and we will play with a number of shortcuts next.

Select Cells one-by-one

This is a basic shortcut. If you need to select cells adjacent to C4 one by one, use SHIFT + Arrow. We can use the up, down, left, or right arrow to extend the selection in whichever direction we want.

Select Cells till the edge of region

Moving cell by cell can be time consuming, so if you want to select all the way to the edge of the region, use SHIFT+CTRL+Arrow. Below, we used the right arrow key to select to the edge.

Select current row

If the above was a neat trick, then what would be the shortcut to select the complete row? That is SHIFT+Spacebar. From cell C4, you can see we have selected the row edge-to-edge in the example below.

Select current column

Now, to select the complete column of values edge-to-edge, the shortcut is CTRL+Spacebar. From C4, the selection would look like the figure below.

Select current region

Finally, suppose you want to select the complete region. We can combine the row-selection and column-selection shortcuts to get the same effect, but an even better option is CTRL+A.

I hope these shortcuts will be useful the next time you play with tables and regions inside Excel. Do let me know if you have alternate shortcuts for the same tasks.


Excel Tip: Copy Values from Formula cells August 7th, 2013

Vinod Kumar

I show a number of Excel productivity tips to my customers every single day. One such task involves generating random numbers to demonstrate charts or formula functions. I have used the RANDBETWEEN() function in the example below as the dataset. Assume we have this range of numbers, and we now want to copy ONLY the values from this range.

If you try to do anything with this Excel sheet, the random numbers are automagically regenerated – not something we wanted :). So the task is to copy ONLY the values into a different column.

The first step is to select the range of cells we want to copy. This can be any range in your workbook. In our example we are using B2 to B6, as shown in the figure below.

With the range selected, move your mouse to the edge of the selection box. Do you see the “cross arrow” in the figure above? That is the trick.

Once you see the cross arrow, hover over the border, then right-click and drag your mouse. You will now see an outline of the range where the values will land; in the figure below it is D2:D6, for example. Now is the time to release the mouse.

Voila! You now get an awesome popup inside Excel to play with. It offers several interesting options; feel free to explore them as you wish. For our example we wanted ONLY values, so I chose the third option.

That lands the values, without the formulae, into these cells. As you can see, the values were copied; and since we used the RANDBETWEEN() function, the values in the old cells have been regenerated. This is by design.

Our final output looks like the figure above. You can use this technique on ANY range that contains formula cells. Do let me know if you have used this trick anywhere in your Excel explorations.
