Thursday, August 19, 2010

SQL Server Parameter Sniffing

Today on the MSDN TSQL forums I was asked about a performance problem, and to me the problem seemed to be directly related to parameter sniffing.  The poster then stated that he is not using stored procedures, so it cannot be a parameter sniffing problem.  Truth be told, there are a lot of misconceptions surrounding parameter sniffing.  The best way to understand parameter sniffing is to understand why it happens.

Parameter sniffing occurs when a parameterized query uses cached cardinality estimates to make query plan decisions.  The problem occurs when the first execution has atypical parameter values.  For each subsequent execution, the optimizer is going to assume the estimates are good, even though they may be way off.  For example, say you have a stored procedure that returns all id values between 1 and 1000.  If the stored procedure is first executed with this large range of parameter values, the optimizer caches these atypical values, which causes it to overestimate cardinality for typical executions.  The problem is a typical execution may only return a few rows.  This “sniffing” can cause queries to scan a table as opposed to seek, because the optimizer is working from inaccurate cardinality estimates.  The easiest way to tell whether this problem is occurring in your environment is to look at the query plan XML.  Inside the query plan XML, you will see something similar to the code snippet below:

<ColumnReference Column="@1" ParameterCompiledValue="(1000)" ParameterRuntimeValue="(10)" />
<ColumnReference Column="@0" ParameterCompiledValue="(1)" ParameterRuntimeValue="(1)" />

In the snippet above, the query plan was compiled assuming that parameter @1 has a value of 1000 and parameter @0 has a value of 1, while the actual runtime values are 10 and 1, respectively.
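
If you want to hunt for suspect plans without opening each one by hand, something along these lines can pull plan XML from the cache (a sketch, assuming SQL Server 2005+ and VIEW SERVER STATE permission; note that a cached plan exposes ParameterCompiledValue, while ParameterRuntimeValue only shows up in an actual execution plan captured at run time):

SELECT st.text, qp.query_plan
FROM sys.dm_exec_cached_plans cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) st
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) qp
WHERE qp.query_plan IS NOT NULL
  AND CAST(qp.query_plan AS NVARCHAR(MAX)) LIKE N'%ParameterCompiledValue%';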

There are three different methods to incorporate parameterization in SQL Server: auto/simple parameterization, stored procedures, and dynamic TSQL (executed with sp_executesql).  One of the most common misconceptions I have seen surrounding parameter sniffing is thinking that it is limited to stored procedures.  Now that we know more about parameter sniffing, let’s have a look at an example.  I will be using the AdventureWorks database.  In this example, I will select a few rows from the Sales.SalesOrderHeader table and then issue the same query, but return a lot more rows.

Code:

SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between 1 and 10
SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between 1 and 500

Query Plan:

[image: the two execution plans, showing the plan change at the lookup/scan tipping point]

As you can see, the query plan changes based on the number of rows returned.  The reason is that the optimizer hit a tipping point where the cost of the key lookups is greater than the cost of an index scan.  Let’s see what happens when a parameter sniffing problem occurs.

DBCC freeproccache
GO
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End;',N'@Start INT,@End INT',@Start=1,@End=500
GO
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End;',N'@Start INT,@End INT',@Start=1,@End=10
GO

Query Plans:

[image: identical execution plans for both parameterized executions]

The execution plans are identical for both queries even though the number of rows greatly decreased.  This is a parameter sniffing problem.  It occurs because we executed and cached the atypical execution that returns CustomerID values between 1 and 500.  We can look at the compiled parameter values and the estimated row counts in the execution plan to validate this.

<ColumnReference Column="@End" ParameterCompiledValue="(500)" ParameterRuntimeValue="(10)" />
<ColumnReference Column="@Start" ParameterCompiledValue="(1)" ParameterRuntimeValue="(1)" />

What can you do to solve the parameter sniffing problem?  You have a few options.  You can use local variables, which make the optimizer use the density information in the statistics to estimate cardinality; you can use OPTION (RECOMPILE); or you can use the OPTIMIZE FOR hint.

--Declare local variables
EXEC sp_executesql N'declare @dynStart INT,@dynEnd INT; SET @dynStart=@Start; SET @dynEnd=@End;SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @dynStart and @dynEnd;',N'@Start INT,@End INT',@Start=1,@End=500
EXEC sp_executesql N'declare @dynStart INT,@dynEnd INT; SET @dynStart=@Start; SET @dynEnd=@End;SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @dynStart and @dynEnd;',N'@Start INT,@End INT',@Start=1,@End=10

--Solution Using option(recompile)
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(RECOMPILE);',N'@Start INT,@End INT',@Start=1,@End=500
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(RECOMPILE);',N'@Start INT,@End INT',@Start=1,@End=10

--Solution Using OPTIMIZE FOR HINT
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(OPTIMIZE FOR (@Start=1,@End=10));',N'@Start INT,@End INT',@Start=1,@End=500
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(OPTIMIZE FOR (@Start=1,@End=10));',N'@Start INT,@End INT',@Start=1,@End=10

--Solution Using OPTIMIZE FOR UNKNOWN (SQL 2008 only)
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(OPTIMIZE FOR (@Start UNKNOWN,@End UNKNOWN));',N'@Start INT,@End INT',@Start=1,@End=500
EXEC sp_executesql N'SELECT * FROM Sales.SalesOrderHeader WHERE CustomerID between @Start and @End OPTION(OPTIMIZE FOR (@Start UNKNOWN,@End UNKNOWN));',N'@Start INT,@End INT',@Start=1,@End=10

Now, each of the above methods will help alleviate some of the problems associated with parameter sniffing, but that does not mean it will give you an optimal query plan.  You should test each of the methods to see which makes the most sense for your environment.  If none of these options perform well, another option is to use control flow logic to execute different variations of the TSQL or stored procedure, allowing for more control over which execution plan gets used; a sketch follows below.  The thing to remember here is you have to cater to your customers and their usage patterns to ultimately decide which solution is best for your environment.
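
A minimal sketch of the control flow idea (the procedure names and the 100-row cutoff are hypothetical; in practice you would pick the boundary from your own data distribution):

DECLARE @Start INT, @End INT;
SELECT @Start = 1, @End = 10;

IF @End - @Start < 100 --typical narrow range: route to the seek-friendly variant
    EXEC dbo.usp_GetOrders_Narrow @Start, @End; --hypothetical procedure
ELSE --atypical wide range: route to the scan-friendly variant
    EXEC dbo.usp_GetOrders_Wide @Start, @End; --hypothetical procedure

Because each branch is its own statement, each gets its own cached plan, so narrow and wide executions no longer share estimates.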

Until next time happy coding.

Saturday, August 7, 2010

TSQL Challenge 35 Available

TSQLChallenges.com recently released challenge 35, “Find the total number of 'Full Attendees' in each 24 HOP Session”.  For those not familiar, TSQL Challenges is a website that creates and evaluates SQL Server puzzles each and every week.  The goal of TSQL Challenges is to increase TSQL best practice awareness and to showcase solutions to common, and sometimes uncommon, TSQL problems using set-based programming logic.  Not only do you compete in challenges, but more importantly, TSQL Challenges gives you the opportunity to interact with your peers.  Essentially it is a mechanism to give back to and learn from the SQL Server community.  If you haven’t had a chance to stop by and check out TSQL Challenges, I highly recommend you do so: TSQLChallenges.com.

So… what is the challenge?  The challenge, should you choose to accept it, is to count the number of attendees that fully watched each session at 24 Hours of PASS.  Note: this data is artificial and does not reflect real 24 Hours of PASS metrics.  If you love puzzles, TSQL, and PASS, this challenge is for you.

Good luck and happy coding.

Saturday, July 17, 2010

Order By Does Not Always Guarantee Sort Order

A week or so ago, I saw an interesting question on the MSDN SQL Server forums and I thought it would make a great blog post.  The forum question asked about an ORDER BY clause that does not guarantee sort order.  The query was really two queries merged together via a UNION.  The OP noticed that when UNION ALL was used, the sort order was different than the same query using UNION, even though an ORDER BY clause was used.  If you are familiar with UNION and UNION ALL, you know that UNION has to perform a distinct sort and remove duplicates, while UNION ALL does not.  The query plans for the two queries are identical, other than a sort vs. a distinct sort.

Here is a small scale repro of the problem.

SET NOCOUNT ON;
GO

IF OBJECT_ID('tempdb.dbo.#t') IS NOT NULL
DROP TABLE #t;
GO

CREATE TABLE #t(id INT, col CHAR(1),col2 BIT);

INSERT INTO #t VALUES (1,'a',1)
INSERT INTO #t VALUES (1,'a',0)
GO

SELECT id, col, col2
FROM #t 

UNION ALL

SELECT id,col,col2
FROM #t 
ORDER BY id, col
GO

SELECT id,col,col2
FROM #t 

UNION

SELECT id,col,col2
FROM #t 
ORDER BY id, col
GO

/*
id          col  col2
----------- ---- -----
1           a    1
1           a    0
1           a    1
1           a    0


id          col  col2
----------- ---- -----
1           a    0
1           a    1
*/

As you can see, the order of col2 is not the same between the two queries.  The root of this problem is that I am using columns that contain duplicates in the ORDER BY clause, and col2 is not included in the ORDER BY clause.  I can never guarantee the order of the “duplicate” rows because I cannot guarantee how the optimizer will build and execute the query plan.  In this example, the UNION query sorts by all columns in the select list, which includes col2, while the UNION ALL query does not.  You can guarantee that the order will be id, col, but the col2 value order may vary between executions.  You will need to add col2 to the ORDER BY clause to guarantee the sort.

SELECT id, col, col2
FROM #t 

UNION ALL

SELECT id,col,col2
FROM #t 
ORDER BY id, col, col2
GO

SELECT id,col,col2
FROM #t 

UNION

SELECT id,col,col2
FROM #t 
ORDER BY id, col, col2
GO

/*
id          col  col2
----------- ---- -----
1           a    0
1           a    0
1           a    1
1           a    1


id          col  col2
----------- ---- -----
1           a    0
1           a    1
*/

I thought this was a good reminder that even with an ORDER BY clause specified, the order of the rows may not be what you expect.  You have to use an ORDER BY clause and make sure all the columns you want to sort by are listed in it.

Until next time happy coding.

Thursday, July 8, 2010

Breaking the Print character limit

I got some grief regarding my SQL Meme post about PRINT.  I specifically stressed that I believe PRINT needs a makeover because of its inability to handle max data types, http://jahaines.blogspot.com/2010/05/sql-meme-tagged-5-things-sql-server.html.  I know I am not the only person out there who feels this functionality is a bit antiquated.  In this post, I will provide a great alternative to PRINT.  I have been using this method for the past year or so to print really long dynamic SQL.  The concept is very simple: instead of printing the dynamic SQL to the messages tab, I will be converting the dynamic SQL to XML.  XML is a great alternative because it keeps the formatting and can hold up to 2 GB of data.  The key component here is naming the column [processing-instruction(x)].  This column name emits an XML processing instruction, which allows the text to be rendered along with any special characters.  It should be noted that whatever value you put in the parentheses will be incorporated into the XML tags, in my case “x”.

Let’s have a look at how this works.

DECLARE @sql VARCHAR(MAX)
SET @sql =
    CAST(REPLICATE('a',5000) + CHAR(13) AS VARCHAR(MAX)) +
    CAST(REPLICATE('b',5000) + CHAR(13) AS VARCHAR(MAX)) +
    CAST(REPLICATE('c',5000) + CHAR(13) AS VARCHAR(MAX)) + 
    'd'
    
SELECT [processing-instruction(x)]=@sql FOR XML PATH(''),TYPE

[image: query result containing a clickable XML value with the full text]

Pretty simple, right?  There really is not much to this technique.  It is very simplistic and gets the job done.  If you find yourself getting aggravated with makeshift PRINT solutions, come on over to the dark side and get your XML on.
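
If you use this trick often, you can wrap it in a small helper; a minimal sketch (the procedure name usp_PrintMax is made up):

CREATE PROCEDURE dbo.usp_PrintMax (@text NVARCHAR(MAX))
AS
BEGIN
    --"Print" the full text by returning it as an XML processing instruction
    SELECT [processing-instruction(x)] = @text FOR XML PATH(''), TYPE;
END
GO

EXEC dbo.usp_PrintMax @text = N'really long dynamic SQL goes here...';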

Until next time, happy coding.

Friday, June 11, 2010

Why Are Transactions Blocked All Of A Sudden?

Have you ever had a query that runs perfectly fine one day and then all of a sudden starts getting bombarded with blocking transactions?  Believe it or not, this is not that uncommon an occurrence and, more interestingly, it can occur when no schema changes occur at all!  Unbeknownst to most, you are susceptible to an invisible data distribution tipping point that can go south at any point in time, if your application creates a specific type of workload.  Let’s dig deeper to find out what causes this problem.

I will start off by creating some sample data.

USE [tempdb]
GO
IF EXISTS(SELECT 1 FROM sys.tables WHERE NAME = 'TestData')
BEGIN
    DROP TABLE dbo.[TestData];
END
GO

CREATE TABLE dbo.TestData(
RowNum INT PRIMARY KEY CLUSTERED,
SomeBit INT,
SomeCode CHAR(2)
);
GO

INSERT INTO dbo.TestData
SELECT TOP 5000 
    ROW_NUMBER() OVER (ORDER BY t1.NAME) AS RowNumber,
    CASE WHEN ROW_NUMBER() OVER (ORDER BY t1.NAME) %5 = 0 THEN 1 ELSE 0 END AS SomeBit, 
    'A' AS SomeCode
FROM 
    Master.dbo.SysColumns t1,
    Master.dbo.SysColumns t2
GO

Nothing new here, just a table with some data.  Now I will begin a transaction and run a simple UPDATE statement.

BEGIN TRANSACTION

UPDATE dbo.TestData
SET SomeCode = 'B'
WHERE somebit = 0

Now run a simple select statement against the table in a new query window.

SELECT RowNum FROM dbo.TestData WHERE RowNum = 1000

/*
RowNum
-----------
1000
*/

The query returned a resultset, just as we thought it would.  What I wanted to show here is that we currently do not have a blocking problem, and users can still access the rows that do not have a SomeBit value of 0.  SQL Server will try to take the lowest or most granular lock possible when satisfying a query, such that other queries can still do what they need to.  Obviously there are limitations to this, and SQL Server reacts differently based on system responsiveness and pressures.  You can verify that you cannot access a row with a SomeBit value of 0 by changing the predicate to a number that is not divisible by 5, as shown below.
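
For example, RowNum 1001 is not divisible by 5, so its SomeBit value is 0 and the open transaction holds an exclusive lock on that row; this query should block until the update commits or rolls back:

SELECT RowNum FROM dbo.TestData WHERE RowNum = 1001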

Okay… big deal… you probably already know this.  But let’s suppose that your manager tells you to migrate data from an existing system into this table.  The flat file has a measly 3000 rows in it, so its overall impact should really have no implications on our system, right?  Let’s find out.  Please note that this problem can also manifest itself by gradually hitting a tipping point of data; it does not take a huge influx of data to cause this problem, which is why it can appear seemingly out of nowhere.

I will load the data with an insert statement similar to the first (note the key offset, which avoids primary key collisions) to mimic our data migration.

INSERT INTO dbo.TestData
SELECT TOP 3000 
    5000 + ROW_NUMBER() OVER (ORDER BY t1.NAME) AS RowNumber, --offset the keys past the existing 5000 rows
    CASE WHEN ROW_NUMBER() OVER (ORDER BY t1.NAME) %5 = 0 THEN 1 ELSE 0 END AS SomeBit, 
    'A' AS SomeCode
FROM 
    Master.dbo.SysColumns t1,
    Master.dbo.SysColumns t2
GO

Now let’s roll back the original transaction and run the same update transaction again, once more leaving it open.

ROLLBACK TRANSACTION --run this in the window holding the open transaction
GO

BEGIN TRANSACTION

UPDATE dbo.TestData
SET SomeCode = 'B'
WHERE somebit = 0

Now run the query below in a new window.

SELECT RowNum FROM dbo.TestData WHERE RowNum = 1000

What you should see is that the query is now blocked.  Keep in mind that nothing changed on this server, except new data was inserted into the table.  Any ideas why this problem is now occurring?  If you haven’t figured it out yet, this problem is caused by lock escalation, http://msdn.microsoft.com/en-us/library/ms184286.aspx.  When SQL Server meets certain thresholds or memory pressure exists, SQL Server will escalate locks.  Lock escalation unfortunately goes from very granular locks to not-so-granular locks: it goes straight to a table lock from a rid/key or page lock.  What does this mean?  It means that SQL Server can save memory by acquiring one less granular lock, as opposed to a lot of granular locks.  You can look at the transaction locks for each of the UPDATE statements to verify lock escalation is occurring.

Note: I filtered the resultset down to exclusive (X) locks, which excludes the intent locks.

SELECT * 
FROM sys.[dm_tran_locks] 
WHERE [request_session_id] = xx --replace xx with the session id of the UPDATE
AND [request_mode] = 'X'

Initial Update Query

[image: individual KEY (row-level) exclusive locks held by the initial update]

Lock Escalated Update Query

[image: a single OBJECT (table-level) exclusive lock after escalation]

If you run a lot of big DML transactions in your environment and still require concurrency, you may want to pay careful attention to lock escalation; otherwise, you may experience an abnormally large number of blocks.  While lock escalation is great in most cases, in others it is less than ideal.

Here are the thresholds as described by BOL, http://msdn.microsoft.com/en-us/library/ms184286.aspx

Lock Escalation Thresholds


Lock escalation is triggered when lock escalation is not disabled on the table by using the ALTER TABLE SET LOCK_ESCALATION option, and when any of the following conditions exists:

  • A single Transact-SQL statement acquires at least 5,000 locks on a single nonpartitioned table or index.

  • A single Transact-SQL statement acquires at least 5,000 locks on a single partition of a partitioned table and the ALTER TABLE SET LOCK_ESCALATION option is set to AUTO.

  • The number of locks in an instance of the Database Engine exceeds memory or configuration thresholds.

If locks cannot be escalated because of lock conflicts, the Database Engine periodically triggers lock escalation at every 1,250 new locks acquired.
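
Given these thresholds, a quick way to gauge how close an open transaction is to escalation is to count the locks its session currently holds; a sketch, following the same xx convention as above:

SELECT [request_session_id], COUNT(*) AS lock_count
FROM sys.[dm_tran_locks]
WHERE [request_session_id] = xx --replace xx with the session id to watch
GROUP BY [request_session_id];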

Now that we have identified the problem, what can we do to fix it?  There are a number of options.  One solution is to use the NOLOCK hint in your query or the read uncommitted isolation level.  This particular solution is not recommended for all OLTP environments and should only be implemented with careful consideration: the NOLOCK hint and the read uncommitted isolation level can return inconsistent data.  If, and only if, this is okay should you consider it as a solution.  Another solution is to use the read committed snapshot isolation level or the snapshot isolation level.  Both of these solutions require tempdb overhead, but they do return transactionally consistent data.  You can read more about these isolation levels here, http://msdn.microsoft.com/en-us/library/ms189122.aspx.  The other approach is to remove lock escalation.  You can remove lock escalation at the instance level (trace flags 1211 and 1224) or at the table level in SQL Server 2008, using the ALTER TABLE statement, as sketched below.  Obviously, removing lock escalation should be carefully thought out and tested.  For more information on these trace flags, please visit the storage engine’s blog http://blogs.msdn.com/b/sqlserverstorageengine/archive/2006/05/17/lock-escalation.aspx.
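
Here is a minimal sketch of the table-level option (SQL Server 2008+), with the instance-wide trace flag alternative commented out since it affects every table on the instance:

--Disable lock escalation for this table only (SQL Server 2008+)
ALTER TABLE dbo.TestData SET (LOCK_ESCALATION = DISABLE);

--Instance-wide alternatives: 1211 disables escalation outright, while 1224
--disables lock-count escalation but still allows it under memory pressure.
--DBCC TRACEON (1211, -1);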

There you have it.  I have shown you how simply adding data to your table can literally make a fast application a blocking nightmare overnight.  Before you go off and start adding NOLOCK hints or changing your isolation level, please understand that you should only take these steps if you are experiencing this problem.  The reality is an OLTP system should not be holding onto more than 5000 locks (a lock escalation tipping point) because transactions should be short and efficient.  If you are experiencing this problem, your database is probably OLAP, you are missing indexes, or your queries are not typical OLTP transactions.  For example, you probably have large DML transactions and users trying to query the table concurrently.

Until next time happy coding.

Tuesday, May 11, 2010

SQL Meme: Tagged: 5 things SQL Server should drop

I have been tagged by Denis Gobo (Blog/Twitter) in a SQL meme regarding the top 5 things I would drop from SQL Server, Denis’s post.  I am sure some of you could spit out a list a mile long, but I am going to focus on my 5 biggest pet peeves, well, ones that have not been listed yet :^).

sp_msforeachdb and sp_msforeachtable

First off, these two stored procedures are undocumented, so they can be deprecated or their functionality may change at any time.  In my opinion, these two stored procedures are useless.  If you look underneath the hood, both use basic cursors.  Sure, they make fancy work of the “?” placeholder, but you can do the same with your own cursor.  Remove these from your code and roll your own cursors, as sketched below.
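
A minimal sketch of the roll-your-own approach, iterating every online database (assumes you have rights to see them all; replace the dynamic SQL body with your per-database work):

DECLARE @db SYSNAME,
        @sql NVARCHAR(MAX);

DECLARE db_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT name FROM sys.databases WHERE state_desc = 'ONLINE';

OPEN db_cur;
FETCH NEXT FROM db_cur INTO @db;

WHILE @@FETCH_STATUS = 0
BEGIN
    --This stands in for the "?" placeholder logic of sp_msforeachdb
    SET @sql = N'USE ' + QUOTENAME(@db) + N'; SELECT DB_NAME() AS DatabaseName;';
    EXEC sp_executesql @sql;

    FETCH NEXT FROM db_cur INTO @db;
END

CLOSE db_cur;
DEALLOCATE db_cur;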

PRINT

PRINT is a bit antiquated when it comes to newer versions of SQL Server.  I am not saying drop PRINT altogether, but drop the 8000 varchar/4000 nvarchar print limitation.  I have seen this byte (sorry, couldn’t resist) people over and over.  It does not make sense to allow a developer to store 2 GB worth of data in a variable and then only print 8000 characters.  Sure, we can roll our own print procedure or use XML, but why should we have to work around the issue?  Allow PRINT to “print” up to the maximum variable size.

ORDER BY Constant in Windowing Function

If you try to order by a constant value in a windowing function, such as ROW_NUMBER(), you will get a message stating constants are not allowed; however, there is a workaround.  The workaround is to use a subquery (with a constant) in the ORDER BY clause.  This behavior should be removed because it seemingly gives developers the idea that the data will be ordered in the order of the table.  Before anyone says anything, a table does not have a predefined order.  So what we have here is a number sequence that is not guaranteed each time it is run.  In my book, inconsistent behavior = remove the functionality and make the user order by an actual column.

**** UPDATE ****

I have been asked to provide a sample of what I am talking about here. Essentially create a very simple table with a few columns as such:

DECLARE @t TABLE(
Id INT IDENTITY(1,1) PRIMARY KEY,
SomeChar CHAR(1)
);

INSERT INTO @t (SomeChar) VALUES ('a') ;
INSERT INTO @t (SomeChar) VALUES ('b') ;
INSERT INTO @t (SomeChar) VALUES ('c') ;

Now try each of the following queries against the table and you will see one windowing function says constants cannot be used, but clearly they can with a little ingenuity.  The problem here is the illusion that the data will be sequenced in the order of the table, without actually sorting it; however, what is really occurring is the optimizer is generating a sequence whose order may vary from execution to execution.  A column should be supplied; otherwise, unexpected results may occur.

SELECT Id,SomeChar,ROW_NUMBER() OVER(ORDER BY 'a') AS seq
FROM @t
/*Windowed functions do not support constants as ORDER BY clause expressions.*/

SELECT Id,SomeChar,ROW_NUMBER() OVER(ORDER BY (SELECT 'a')) AS seq
FROM @t
/*
Id          SomeChar seq
----------- -------- --------------------
1           a        1
2           b        2
3           c        3
*/

Edit Top X Rows

This is an SSMS feature that I just do not find useful at all; plus, this feature gives non-database professionals the ability to modify/insert/delete data with no understanding of what is occurring in the background.  In my opinion, those who use this feature are asking for trouble.  I believe all insert/update/delete transactions should be done through CRUD (Create/Read/Update/Delete) stored procedures or TSQL batch operations.  If you do not know how to do CRUD through TSQL, you do not need to be doing it at all.

SELECT *

I may take a little heat for this one, but SELECT * should be removed from SQL Server.  SQL Server IntelliSense should auto-expand “*” into the column list.  SELECT * is a prime candidate for performance problems and wasted network traffic.  SELECT * affects the optimizer’s ability to use indexes, increases network bytes, and breaks code when column ordinal position is changed or columns are added or removed.  Sure, we all use SELECT * for quick ad-hoc queries, but believe me, it also exists in production code.  In my opinion, the benefits of expanding the “*” outweigh the cons because it makes developers/DBAs realize how many columns they are selecting, which may tip them off that they should restrict the number of columns being selected.  Also, expanding the “*” prevents insert statements from breaking when columns are added or removed.

These are the items I would remove from SQL Server given the chance. I am sure I can come up with a lot more, but I will let others take a stab at this.

Until next time, happy coding.

Tuesday, May 4, 2010

Performance tuning Case Expressions With Correlated Subqueries

Today I want to talk about some potential pitfalls that a developer may encounter when using correlated subqueries in a case expression.  As you may recall, I have done a post on the potential performance pitfalls of correlated subqueries before, http://jahaines.blogspot.com/2009/06/correlated-sub-queries-for-good-or-evil.html.  In this post, I will be focusing on case expressions that use correlated subqueries.

I will start by creating a sample table.

USE [tempdb]
GO

IF EXISTS(SELECT 1 FROM sys.tables WHERE NAME = 'TestData')
BEGIN
    DROP TABLE dbo.[TestData];
END
GO

CREATE TABLE dbo.TestData(
RowNum INT PRIMARY KEY ,
SomeChar TINYINT
);
GO

INSERT INTO dbo.TestData
SELECT TOP 1000 
    ROW_NUMBER() OVER (ORDER BY t1.NAME) AS RowNumber,
    ABS(CHECKSUM(NEWID())%3+1)
FROM 
    Master.dbo.SysColumns t1,
    Master.dbo.SysColumns t2
GO

CREATE NONCLUSTERED INDEX ncl_idx_SomeChar ON dbo.TestData(SomeChar);
GO

IF EXISTS(SELECT 1 FROM sys.tables WHERE NAME = 'TestData2')
BEGIN
    DROP TABLE dbo.[TestData2];
END
GO

CREATE TABLE dbo.TestData2(
Id INT IDENTITY(1,1) PRIMARY KEY,
RowNum INT unique,
SomeChar TINYINT
);
GO

INSERT INTO dbo.TestData2
SELECT TOP 500
    ROW_NUMBER() OVER (ORDER BY t1.NAME) AS RowNumber,
    ABS(CHECKSUM(NEWID())%3+1)
FROM 
    Master.dbo.SysColumns t1,
    Master.dbo.SysColumns t2
GO

CREATE NONCLUSTERED INDEX ncl_idx_SomeChar ON dbo.TestData2(SomeChar);
GO

A typical correlated subquery in a case expression may look something like this:

SELECT 
    RowNum,SomeChar,
    CASE (SELECT SomeChar FROM dbo.TestData2 t2 WHERE t2.RowNum = t1.RowNum) 
        WHEN 1 THEN 'Type1' 
        WHEN 2 THEN 'Type2'
        WHEN 3 THEN 'Type3'
    END
FROM dbo.TestData t1
WHERE [RowNum] <= 500

Let’s have a look at the execution plan to see what is going on underneath the hood.

[image: execution plan showing TestData2 being spooled twice]

IO:

Table 'Worktable'. Scan count 442, logical reads 2886, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData2'. Scan count 3, logical reads 12, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

Whoa!!! This query is extremely inefficient.  As you can see, the TestData2 table was scanned 3 times, and a worktable was created and scanned 442 times.  The problem here is the optimizer chooses to spool the data from dbo.TestData2 twice.  The even bigger problem with this method is scalability: this code does not scale well at all.  In the case of this query, the optimizer creates a number of index spools relative to the number of elements in the case expression.  The relationship can be defined as Number of Spools = Number of Case Elements - 1.  What does this mean?  It means that if your case expression has 4 elements, you get 3 spools; if your case expression has 5 elements, you get 4 spools; and so on.  Simply put, query performance decreases as the number of elements in the case expression increases.  Take a look at the example below.
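
For reference, here is a sketch of the four-element version that produced the statistics below (the WHEN 4 branch is hypothetical; the sample data only generates values 1 through 3, so the extra branch exists purely to force another spool):

SELECT 
    RowNum,SomeChar,
    CASE (SELECT SomeChar FROM dbo.TestData2 t2 WHERE t2.RowNum = t1.RowNum) 
        WHEN 1 THEN 'Type1' 
        WHEN 2 THEN 'Type2'
        WHEN 3 THEN 'Type3'
        WHEN 4 THEN 'Type4' --hypothetical extra element
    END
FROM dbo.TestData t1
WHERE [RowNum] <= 500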

[image: execution plan for the four-element case expression, showing an additional index spool]

IO:

Table 'Worktable'. Scan count 539, logical reads 4081, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData2'. Scan count 4, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

So how should we change the query to help the optimizer make a better decision? 

The best solution is to let the optimizer compute the scalar value while it is joining the TestData2 table, as shown below.

SELECT 
    RowNum,SomeChar,
    (SELECT CASE SomeChar WHEN 1 THEN 'Type1' WHEN 2 THEN 'Type2' WHEN 3 THEN 'Type3' END FROM dbo.TestData2 t2 WHERE t2.RowNum = t1.RowNum) 
FROM dbo.TestData t1
WHERE [RowNum] <= 500

[image: execution plan with a single seek of TestData2 and no spools]

IO:

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData2'. Scan count 1, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'TestData'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

As you can see, this is a much better query plan.  The key here is the optimizer is able to use a compute scalar operator upstream, while joining the tables. Because the computed value is joined to the TestData table, we do not have to worry about spooling the data.

Conclusion

For me, correlated subqueries can exhibit inconsistent behavior and often harbor performance problems such as this one.  Do not get me wrong, correlated subqueries are not all bad, but they should be thoroughly tested.  In my opinion, the best way to write this query is to LEFT OUTER JOIN dbo.TestData2.  An outer join will provide more consistent performance.

SELECT 
    t1.RowNum,t1.SomeChar,
    CASE t2.SomeChar 
        WHEN 1 THEN 'Type1' 
        WHEN 2 THEN 'Type2'
        WHEN 3 THEN 'Type3'
    END
FROM dbo.TestData t1
LEFT JOIN dbo.TestData2 t2  ON t1.RowNum = t2.RowNum
WHERE t1.[RowNum] <= 500

Until next time, happy coding.

Tuesday, April 13, 2010

T-SQL Tuesday #005 – Creating & Emailing HTML Reports

This post is my contribution to the popular TSQL Tuesday event, http://sqlvariant.com/wordpress/index.php/2010/04/t-sql-tuesday-005-reporting/.  The creator of this amazing event is Adam Machanic.  What I love most about this event is how it brings the SQL Server community together.  The “theme” for this TSQL Tuesday is reporting.  As you are aware, reporting is a very broad topic.  I will be focusing on creating and emailing HTML reports.  Now, this process is no substitute for an SSRS report or a cube report.  What I am about to show you is a very sleek way of presenting data to managers at a very high level.  You do not want to send an entire report as an HTML report, so this process should be limited to dashboards or reports that are small in nature.  If the user needs more detail, or is simply requesting too much data, you may want to add a detail link in the HTML body, which gives the user the ability to drill through for more detail.

Let’s get started by creating a sample table and a couple of views.  It should be noted that this process primarily utilizes views to expose data.  This code can be further expanded to filter for specific columns, but as it stands now, this process returns all columns in the view or table.

USE [tempdb]
GO

SET NOCOUNT ON;
GO

IF OBJECT_ID('tempdb.dbo.Sales') IS NOT NULL
BEGIN
    DROP TABLE dbo.Sales;
END
GO

CREATE TABLE dbo.Sales(
SalesId INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
EmployeeId INT,
Amt NUMERIC(9,2),
LocationCd INT
);
GO

INSERT INTO dbo.Sales VALUES (1,12.50,1);
INSERT INTO dbo.Sales VALUES (1,99.99,4);
INSERT INTO dbo.Sales VALUES (2,45.64,1);
INSERT INTO dbo.Sales VALUES (3,44.65,2);
INSERT INTO dbo.Sales VALUES (3,52.89,4);
INSERT INTO dbo.Sales VALUES (4,250.54,3);
INSERT INTO dbo.Sales VALUES (5,150.00,5);
GO

IF OBJECT_ID('tempdb.dbo.vw_SalesVolumnByLocation') IS NOT NULL
BEGIN
    DROP VIEW dbo.vw_SalesVolumnByLocation;
END
GO

CREATE VIEW dbo.vw_SalesVolumnByLocation
AS
SELECT LocationCd, SUM(Amt) AS SalesVolume
FROM dbo.Sales
GROUP BY LocationCd
GO

CREATE VIEW dbo.vw_SalesBySalesCounselor
AS
SELECT [EmployeeId],[LocationCd],[Amt]
FROM dbo.Sales
GO

Next, the stored procedure.  First and foremost, this code looks a lot worse than it really is.  I had to use dynamic SQL because I did not want to have to create this stored procedure in every database.

The parameter list is pretty massive, but a lot of the parameters have default values, which means you do not have to specify everything.  The parameters are pretty self-explanatory.

USE [master]
GO

CREATE PROCEDURE usp_Email_HTML_Rpt
    @DB VARCHAR(255) = NULL,
    @Object VARCHAR(255),
    @Schema VARCHAR(255),
    @Rec NVARCHAR(255),
    @CC NVARCHAR(255) = NULL,
    @rpt_Header VARCHAR(50),
    @rpt_Header_BGColor VARCHAR(10) = '#FFFFFF',
    @TblHdr_BGColor VARCHAR(10) = '#FFFFFF',
    @Condition1_Col VARCHAR(255) = NULL,
    @Condition1_Expression VARCHAR(500) = NULL,
    @Condition1_BGColor VARCHAR(10) = NULL,
    @Condition2_Col VARCHAR(255) = NULL,
    @Condition2_Expression VARCHAR(500) = NULL,
    @Condition2_BGColor VARCHAR(10) = NULL,
    @AltRowBGColor VARCHAR(10) = NULL,
    @Pred_Filter1_Col VARCHAR(255) = NULL,
    @Pred_Filter1_Expression VARCHAR(500) = NULL,
    @Pred_Filter2_Col VARCHAR(255) = NULL,
    @Pred_Filter2_Expression VARCHAR(500) = NULL,
    @OrderBy VARCHAR(500) = NULL
AS
BEGIN

SET NOCOUNT ON;

DECLARE @sql NVARCHAR(MAX),
        @StyleSheet VARCHAR(MAX),
        @RtnSQL NVARCHAR(MAX),
        @html_email NVARCHAR(MAX)
                
DECLARE @HTML TABLE(seq TINYINT, Tag VARCHAR(MAX));

--Create a new style sheet if none was passed in
IF @StyleSheet IS NULL
BEGIN
--Set the Procedure Stylesheet.  You can also supply this as a variable
SET @StyleSheet = 
'<head>
 <style type="text/css">
     th {width:150px;color:"#FFFFFF";font-weight:bold;background-color: ' + QUOTENAME(COALESCE(@TblHdr_BGColor,'#FFFFFF'),'"') +';border:1;border-width:thin; border-style:solid; align:center}
    td {width:150px;background-color: "#FFFFFF"; border: 1; border-style:solid;border-width:thin; text-align: "left"}
    td.Cond1Met {width:150px;background-color: ' + QUOTENAME(COALESCE(@Condition1_BGColor,'#FFFFFF'),'"') +'; border-style:solid;border-width:thin; text-align: "left"}
    td.Cond2Met {width:150px;background-color: ' + QUOTENAME(COALESCE(@Condition2_BGColor,'#FFFFFF'),'"') +'; border-style:solid;border-width:thin; text-align: "left"}
     td.AltRowColor {width:150px;background-color: ' + QUOTENAME(COALESCE(@AltRowBGColor,'#FFFFFF'),'"') +'; border: 1; border-style:solid;border-width:thin; text-align: "left"}
    td.LegendCond1Met {width:200px;background-color: ' + QUOTENAME(COALESCE(@Condition1_BGColor,'#FFFFFF'),'"') +'; border-style:solid;border-width:thin; text-align: "center"}
    td.LegendCond2Met {width:200px;background-color: ' + QUOTENAME(COALESCE(@Condition2_BGColor,'#FFFFFF'),'"') +'; border-style:solid;border-width:thin; text-align: "center"}
     th.LegendHdr {width:200px;color:"#FFFFFF"; font-weight:bold; background-color: ' + QUOTENAME(COALESCE(@rpt_Header_BGColor,'#FFFFFF'),'"') + ';border: 1;border-width:thin; border-style:solid;text-align: "center"}
    td.Legend {width:200px;background-color: "#FFFFFF"; border: 1; border-width:thin; border-style:solid; text-align: "center"} 
    th.LegendTitle {width:200px;color:black;background-color: "#C0C0C0"; border: 1; border-width:thin; border-style:solid; text-align: "center"} 
 </style>
<title>' + COALESCE(@rpt_Header,'Report Header') + '</title>
</head>
'
END 

--Build basic html structure
INSERT INTO @HTML (seq,Tag) 
VALUES (1,'<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">' + CHAR(13) + '<html>' + COALESCE(@StyleSheet,'') + '<body>');

--If optional conditions exist, build a legend
IF @Condition1_Col IS NOT NULL OR @Condition2_Col IS NOT NULL
BEGIN

INSERT INTO @HTML (seq,Tag) 
SELECT 2, '<table border="1" align="LEFT">' UNION ALL
SELECT 3, '<tr><th class="LegendTitle" COLSPAN=3>Legend</th></tr>' UNION ALL
SELECT 4, '<tr><th class="LegendHdr">Variable</th><th class="LegendHdr">Condition Column</th><th class="LegendHdr">Condition Expression</th></tr>' UNION ALL
SELECT 5, '<tr><td class="Legend">@Condition1</td><td class="Legend">' + COALESCE(@Condition1_Col,'n/a') + '</td><td class="LegendCond1Met"> ' + COALESCE(@Condition1_Expression,'n/a') + '</td></tr>' UNION ALL
SELECT 6, '<tr><td class="Legend">@Condition2</td><td class="Legend">' + COALESCE(@Condition2_Col,'n/a') + '</td><td class="LegendCond2Met"> ' + COALESCE(@Condition2_Expression,'n/a') + '</td></tr>' UNION ALL
SELECT 7, '</table><br><br><br><br><br><br><br>' + '<h1>' + COALESCE(@rpt_Header,'Report Header') + '</h1>' + '<table border="1" align="left" width="25%">'

END
ELSE
BEGIN --No legend is needed, start building the table
    INSERT INTO @HTML (seq,Tag) 
    SELECT 8, '<br>' + '<h1>' + COALESCE(@rpt_Header,'Report Header') + '</h1>' + '<table border="1" align="left" width="25%">'
END

--Create Table Header
SET @sql = N'
SELECT 9,CAST(
    (
    SELECT CAST(''<th>'' + COALESCE(c.COLUMN_NAME,'''') + ''</th>'' AS XML)
    FROM ' + COALESCE(QUOTENAME(@DB) + '.','') + '[INFORMATION_SCHEMA].[COLUMNS] c
    WHERE c.[TABLE_NAME] = @dynObject AND c.[TABLE_SCHEMA] = @dynSchema
    FOR XML PATH(''''),ELEMENTS,ROOT(''tr''),TYPE
    ) AS VARCHAR(MAX))';
    
INSERT INTO @HTML (seq,Tag) 
EXEC sp_executesql @sql, N'@dynObject VARCHAR(255),@dynSchema VARCHAR(128)',@dynObject = @Object, @dynSchema=@Schema

    
--Create SQL Statement to return actual values
SET @sql = N'
SELECT 
    @dynRtnSQL = ''SELECT 10,''''<tr>''''+'' + STUFF(
    (
        SELECT 
            ''+  CASE '' +
                 COALESCE(''WHEN '' + QUOTENAME(@dynCondition1_Col) +  @dynCondition1_Expression
            + '' THEN  ''''<td class="Cond1Met">'''' + CAST('' + QUOTENAME(c.COLUMN_NAME) + '' AS VARCHAR(MAX))'','''')
            +     COALESCE('' WHEN '' + QUOTENAME(@dynCondition2_Col) +  @dynCondition2_Expression
            + '' THEN  ''''<td class="Cond2Met">'''' + CAST('' + QUOTENAME(c.COLUMN_NAME) + '' AS VARCHAR(MAX))'','''')
            + '' WHEN ''''1''''= CASE WHEN ROW_NUMBER() OVER(ORDER BY ' + COALESCE(@OrderBy,'(SELECT NULL)') + ') % 2 = 0 THEN 1 ELSE 0 END''
            + '' THEN  ''''<td class="AltRowColor">'''' + CAST('' + QUOTENAME(c.COLUMN_NAME) + '' AS VARCHAR(MAX))''
            + '' ELSE ''''<td>'''' + CAST('' + QUOTENAME(c.COLUMN_NAME) + '' AS VARCHAR(MAX))''
            + '' END''
            + '' + ''''</td>''''''
        FROM ' + COALESCE(QUOTENAME(@DB) + '.','') + '[INFORMATION_SCHEMA].[Columns] c
        WHERE c.[TABLE_NAME] = @dynObject AND c.[TABLE_SCHEMA] = @dynSchema
        FOR XML PATH(''''),TYPE
    ).value(''.'',''VARCHAR(MAX)'')
    ,1,1,'''') + ''+''''</tr>'''' FROM '  + COALESCE(QUOTENAME(@DB) + '.','') +  ''' + QUOTENAME(@dynSchema) + ''.'' + QUOTENAME(@dynObject) +
    ''WHERE 1=1 ' + COALESCE(' AND' + QUOTENAME(@Pred_Filter1_Col) + SPACE(1) + @Pred_Filter1_Expression,'') + ''
    + COALESCE(' AND' + QUOTENAME(@Pred_Filter2_Col) + SPACE(1) + @Pred_Filter2_Expression,'') 
    + COALESCE(' ORDER BY ' + @OrderBy,'') + ''''

--Create a variable to hold the newly created dynamic sql statement
--PRINT @sql
EXEC sp_executesql 
    @sql, 
    N'@dynCondition1_Col VARCHAR(255), @dynCondition1_Expression VARCHAR(500), @dynCondition2_Col VARCHAR(255), @dynCondition2_Expression VARCHAR(500), @dynSchema VARCHAR(255), @dynObject VARCHAR(255), @dynRtnSQL NVARCHAR(MAX) OUTPUT',
    @dynCondition1_Col = @Condition1_Col,
    @dynCondition1_Expression = @Condition1_Expression,
    @dynCondition2_Col = @Condition2_Col,
    @dynCondition2_Expression = @Condition2_Expression,
    @dynSchema = @Schema,
    @dynObject = @Object,
    @dynRtnSQL = @RtnSQL OUTPUT

--PRINT @RtnSQL

--Execute the newly created dynamic TSQL statment.
INSERT INTO @HTML (seq,Tag)
EXEC sp_executesql @RtnSQL

--Close all report HTML tags
INSERT INTO @HTML (seq,Tag)
SELECT 11, '</table></body></html>'

--SELECT Tag FROM @HTML ORDER BY seq -- return HTML in the correct order

SELECT @HTML_Email = COALESCE(@HTML_Email,'') + Tag FROM @HTML ORDER BY seq -- return HTML in the correct order

--PRINT @HTML_Email
EXEC msdb.dbo.sp_send_dbmail
    @recipients = @rec,
    @copy_recipients = @CC,
    @subject = @rpt_Header,
    @body = @HTML_Email,
    @body_format = 'HTML',
    @importance = 'Normal'    

END 
GO

Now let’s see this stored procedure in action.  The code is very flexible and gives you a variety of methods to slice and dice data.  I have provided two conditional filters that will highlight data meeting the criteria in a specified color.  I have also included sort and filtering parameters to help reduce the amount of data being returned.  As I stated before, not all of the parameters are required.  One of my favorite parameters is @AltRowBGColor, which accepts an HTML color and alternates the row color of the HTML table.

Execute the following code: (AltRowBGColor is commented out for this demo)

EXECUTE [dbo].[usp_Email_HTML_Rpt] 
   @DB ='tempdb'
  ,@Rec = 'ahaines@stei.com' --Change to your email address
  ,@Object = 'vw_SalesVolumnByLocation'
  ,@Schema = 'dbo'
  ,@rpt_Header = 'Sales Volumn By Location'
  ,@rpt_Header_BGColor = '#87AFC7'
  ,@TblHdr_BGColor = '#87AFC7'
  ,@Condition1_Col = 'SalesVolume'
  ,@Condition1_Expression = '<100'
  ,@Condition1_BGColor = '#E55451'
  ,@Condition2_Col = 'SalesVolume'
  ,@Condition2_Expression = '>200'
  ,@Condition2_BGColor = '#00FF00'
  --,@AltRowBGColor = '#A0CFEC'
  ,@OrderBy = '[SalesVolume] DESC'

You will get an email similar to the one below.  Note that you have to have Database Mail enabled for this code to work.  You will note that because a conditional filter was supplied, a legend was generated.  The legend contains the details of the supplied parameters.  In the case below, locations with a sales volume < 100 are considered sub par, hence the red color, and locations with a sales volume > 200 are green.  As you can see, this is a great way to visually inspect your data.  I use these types of reports in my environment to monitor backups, jobs, and their corresponding metrics.

[image: HTML report email with a legend and conditional row highlighting]

Now I will execute the stored procedure with fewer parameters and use the @AltRowBGColor parameter.  You will note that no legend is generated because no conditional formatting was supplied.

EXECUTE [master].[dbo].[usp_Email_HTML_Rpt] 
   @DB ='tempdb'
  ,@Rec = 'ahaines@stei.com'
  ,@Object = 'vw_SalesBySalesCounselor'
  ,@Schema = 'dbo'
  ,@rpt_Header = 'Sales Volume By Sales Counselor'
  ,@rpt_Header_BGColor = '#87AFC7'
  ,@TblHdr_BGColor = '#87AFC7'
  ,@AltRowBGColor = '#A0CFEC'
  ,@OrderBy = '[Amt] DESC'

[image: HTML report email with alternating row colors and no legend]

This type of reporting is very good for quick and dirty analysis, like dashboarding.  It is also very easy to implement and gives developers/DBAs quick turnaround for reporting.  The alternative would be to open BIDS (or another reporting tool) and generate a report, which takes a lot more time than executing a stored procedure.  If you need automation, you can schedule this procedure to execute via a SQL job, as sketched after this post’s code.  There are a lot of modifications that this stored procedure can undergo.  This stored procedure is by no means perfect, but it does get the job done.  I am planning on enhancing a lot of the features provided here, but for the time being I am satisfied.  I personally believe that variable checks need to be put into place and a show/hide legend bit should be introduced.  Someone more versed in HTML might find it better to import a style sheet.  When I get a little more time, I will formally update this post with more complete code.  The idea here was to present a concept and show you the power of TSQL and Database Mail.
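
A rough sketch of the scheduling piece (assumes SQL Server Agent; the job name is made up, and you would still attach a schedule with sp_add_jobschedule and a server with sp_add_jobserver):

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Daily Sales Volume Report'; --hypothetical job name

EXEC dbo.sp_add_jobstep
    @job_name = N'Daily Sales Volume Report',
    @step_name = N'Email HTML report',
    @subsystem = N'TSQL',
    @command = N'EXECUTE master.dbo.usp_Email_HTML_Rpt @DB = ''tempdb'', @Rec = ''you@example.com'', @Object = ''vw_SalesVolumnByLocation'', @Schema = ''dbo'', @rpt_Header = ''Sales Volumn By Location'';';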

I hope that you find this stored procedure useful and I invite you to modify the code to work for your environment.  If you have ideas on how to optimize the code or make a cool add-on, please keep me informed, so I can update this post.

Until next time happy coding.

Wednesday, March 24, 2010

SQL Server Management Studio Tips And Tricks

This week I decided to take a step back from my performance tuning series and present material that DBAs and developers may not know.  I will be focusing on SQL Server Management Studio (SSMS) tips and tricks.

Keyboard Shortcuts:

SQL Server Management Studio supports keyboard shortcuts that can help you save time and increase efficiency.  Keyboard shortcuts allow you to execute TSQL commands at the push of a button.  The best part about keyboard shortcuts is that a shortcut can pass highlighted text as parameters.  I will start with some of the built-in keyboard shortcuts.

To open the keyboard shortcuts menu, click Tools –> Options –> expand Environment –> click Keyboard.  You will note that some keyboard shortcuts are loaded by default, such as sp_help (Alt+F1) and sp_who (Ctrl+2).  You can implement just about any code you want in a keyboard shortcut.  For example, you can put the query “select * from sys.dm_exec_requests” directly into the text box.  The beauty of this is you can open a new query window and just hit the shortcut keys to execute the query.  This saves time and makes life a bit easier.  Below is a screenshot of my shortcut list, including the query I posted above.

[image: keyboard shortcut list in the SSMS options dialog]

Well, that is all good and nice, but what else can shortcuts do?  The best part about shortcuts, in my opinion, is that they can actually be used to execute stored procedures and supply highlighted values as parameters.

Let’s run sp_help, but we will never type the text sp_help.  Open a new query window and create a new table.

CREATE TABLE dbo.MyTable(ID INT);

Now type MyTable in the query window and highlight the text.  Once the text is highlighted, hold Alt and press F1.  You should see the results of sp_help displayed for the highlighted object, “MyTable.”

[image: sp_help output for the highlighted object MyTable]

Note: If you need to specify an object in a different schema, you have to use the two part name “schema.Object” and put it in single quotes.  For example, ‘dbo.MyTable’.

Now that is pretty cool, right?  What I will show you next is even cooler!  I will execute a user-defined stored procedure with parameters.  Open the keyboard shortcut menu and assign the below stored procedure to a shortcut key.

[image: the keyboard shortcut assignment for dbo.KeyBoardShortcut]

Open a new query window and create the below stored procedure.

CREATE PROCEDURE dbo.KeyBoardShortcut(@Id INT,@Col CHAR(1))
AS
BEGIN
    SELECT @Id,@Col
END
GO

Now type the text below and highlight it.  Once highlighted, hit Ctrl+F1. (Note: If you assigned the procedure to a different shortcut, you will need to use that shortcut)

1,'a'

[image: the stored procedure result for the highlighted parameters]

As you can see, the stored procedure was executed with our highlighted parameters!  This is an awesome feature that can save a lot of time, especially when you have custom code that you need to access frequently.  It increases efficiency and puts more information at your fingertips.  One of my favorite shortcuts is sp_helptext: you can create a shortcut for sp_helptext, highlight any procedure, function, view, or trigger, and get the create script of that object.

Listing All Table Columns

This is a request that I see very often.  The request usually revolves around developers who have a lot of columns in their table and have to select from most of them.  As you can imagine, typing each column name can be a very tedious process.  There are two methods that I use to accomplish this task when I am feeling a little lazy… BUT the key is to work smarter, not harder, right :^)!

The first method is to right-click the table –> Script Table as –> SELECT To –> New Query Window.  Voila, we now have a select query that lists all the columns in the table.  You can perform the same steps to get a list of all columns in a view.

The next method is to expand the table –> drag the Columns folder into the query window.  This will generate a comma-delimited list of the column names.

Scripting Multiple Objects

I am sure many of you have tried to script multiple objects from Management Studio, but have had little luck.  Now, we could choose to right-click the database –> Tasks –> Generate Scripts, but where is the fun in that?  There is an easier way that does not make you jump through hoops or follow a wizard.  The trick is to click on the folder containing the objects you want to script and then look at the Object Explorer Details pane.  Inside the Object Explorer Details pane you can Ctrl- or Shift-click multiple objects.  Once they are selected, you can right-click –> Script <Object Type> As –> Create –> To New Window.

[image: scripting multiple selected objects from the Object Explorer Details pane]

Creating a TSQL Toolkit

One of the most underrated or unknown features of SQL Server Management Studio is Template Explorer.  You may be thinking that Template Explorer is used specifically for TSQL templates, but I will show you a couple of ways to make it function as a TSQL toolkit.  Press CTRL+ALT+T or go to View –> Template Explorer to open the window.

[image: the Template Explorer window]

Now, this does not look that impressive from the get-go, but trust me, it can and does get better.  Right-click SQL Server Templates and choose New –> Folder.  Give the folder a name; I will be using the name Test_Scripts.  Next, create a new template in the Test_Scripts folder.  I named the new template sys.dm_exec_requests.  Your Template Explorer should now look like this.

[image: Template Explorer showing the Test_Scripts folder and the sys.dm_exec_requests template]

Okay, well that is great… how does this help me?  Well, let’s add some TSQL code to that new template.  Once we add code to the template, we can drag the template into any query window and SSMS will automatically paste the code into it.  Basically, we can use Template Explorer as our personal toolkit.  We can create any folder structure we want, and SSMS does a great job of keeping all our scripts organized.  Let’s see this in action.

Right-click the sys.dm_exec_requests template and choose edit.  Paste the code below and click save.

SELECT * FROM sys.dm_exec_requests

Open a new query window and drag the sys.dm_exec_requests template into the query window.  Voila!!! The TSQL is automatically scripted into our new query window.  As you can see, we can use Template Explorer to save our scripts and make them easily accessible.  This alleviates our need to open Windows Explorer or browse the file system for our scripts, plus all the scripts are saved and managed in one place.  If you need to copy your scripts out of Template Explorer, they actually already exist on the file system, within your documents directory.  On my SQL Server 2008 instance, my templates are stored here: C:\Documents and Settings\ahaines\Application Data\Microsoft\Microsoft SQL Server\100\Tools\Shell\Templates\Sql.  I don’t know about you, but I for one love this feature.  It gives me the ability to quickly get data without having to leave SSMS, and I do not have to waste time searching the file system.  Give it a try and see what you think.

That is all the tips I have for now.  I hope that you have learned something new and that these tips help you.  Until next time, happy coding.