March 20, 2011 at 4:50 pm
So I am trying to work out a way to ensure that SQL development is performance-checked at least at a rudimentary level.
My idea is that if the test environment had a sufficiently realistic data set, I could review the stored procedures being generated and check each query's cost (or at least the estimated cost).
It was then suggested to me that this check could be automated, since there might be a way to interrogate metadata about the cost of each query. That data is all available through SHOWPLAN.
Has anyone come across this data or a tool to review it?
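In case it helps frame the question, this is the sort of check I was imagining. It's only a sketch: it assumes SQL Server 2005 or later, VIEW SERVER STATE permission, and that the estimated subtree cost buried in the cached showplan XML is the number worth flagging.
--===== Sketch only: pull the optimizer's estimated subtree cost out of the cached
--      showplan XML so the most expensive statements can be reviewed (or the check automated).
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       st.text            AS QueryText,
       qs.execution_count,
       qs.total_worker_time,
       qp.query_plan.value('(//StmtSimple/@StatementSubTreeCost)[1]', 'float') AS EstSubtreeCost
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   st
 CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
 ORDER BY EstSubtreeCost DESC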
March 20, 2011 at 8:04 pm
The supposed cost of a query frequently has nothing to do with its actual performance. Your best bet is to set up SQL Profiler to monitor the real use of resources.
Here is proof of what I say. Run the following with the Actual Execution Plan enabled. Notice that the first plan claims 0% of the batch, while the measured run times that are displayed tell a totally different story.
/****************************************************************************************
Purpose:
This code demonstrates that the estimated and actual execution plans in SQL Server can
be 100% INCORRECT and that the execution plan should only be relied on to provide hints
as to what may be wrong with a query rather than an absolute indication. This code runs
in SQL Server 2005 only.
The code creates a temp table for 10 years worth of dates starting with 2000-01-01 using
two different methods. The first method uses a recursive CTE and the second method uses
a "Tally" table. The output of each method is directed to a "throw-away" variable to
take display delays out of the picture.
Please check both the actual and estimated execution plans and compare the % of batch.
Please see the following article on how to build a Tally table and how they can be used
to replace certain While Loops.
http://www.sqlservercentral.com/articles/T-SQL/62867/
****************************************************************************************/
SET NOCOUNT ON
--=======================================================================================
-- Recursive method shown by (Name withheld)
--=======================================================================================
PRINT '========== Recursive method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @DateVal DATETIME
SET @DateVal = '2000-01-01'
;with mycte as
(
select @DateVal AS DateVal
union all
select DateVal + 1
from mycte
where DateVal + 1 < DATEADD(yy, 10, @DateVal)
)
select @Bitbucket = d.dateval
from mycte d
OPTION (MAXRECURSION 0)
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
GO
--=======================================================================================
-- Tally table method by Jeff Moden
--=======================================================================================
PRINT '========== Tally table method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @StartDate AS DATETIME
SET @StartDate = '2000-01-01'
SELECT TOP (DATEDIFF(dd,@StartDate,DATEADD(yy,10,@StartDate)))
@Bitbucket = @StartDate-1+t.N
FROM dbo.Tally t
ORDER BY N
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
GO
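In case anyone wants to run the above without reading the article first: the demo assumes a dbo.Tally table already exists. A minimal sketch of one common way to build one follows (see the article linked above for the full treatment); it assumes you have CREATE TABLE rights in the current database and that 11,000 sequential integers are enough (10 years of dates only needs about 3,653).
--===== Sketch only: build a Tally table of sequential integers 1 through 11000
SELECT TOP 11000
       IDENTITY(INT,1,1) AS N
  INTO dbo.Tally
  FROM master.sys.all_columns ac1
 CROSS JOIN master.sys.all_columns ac2
--===== Add a clustered primary key so lookups on N are as fast as possible
ALTER TABLE dbo.Tally
  ADD CONSTRAINT PK_Tally_N PRIMARY KEY CLUSTERED (N)
GO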
--Jeff Moden
Change is inevitable... Change for the better is not.
March 20, 2011 at 8:44 pm
Fair enough! I wasn't convinced this was a "good" approach.
So I'll turn this question around a little bit.
How do people quality-check code proactively? What tools should we be using to ensure that SQL statements aren't doing table scans on multi-million-row tables?
March 20, 2011 at 9:31 pm
I certainly do use execution plans to get an idea of where the potential trouble spots are, especially when it comes to index usage and possible many-to-many joins (I call them "accidental Cross Joins"), by looking for fat arrows with rowcounts larger than any of the tables involved. During test runs, though, I have SQL Profiler turned on to monitor just the SPID I'm working from. Of course, you also need enough test data to ensure scalability.
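If you want a proactive check as well, one option is to sniff the plan cache for scan operators on your big tables. This is just a rough sketch (it assumes SQL Server 2005 or later and VIEW SERVER STATE permission, and remember a scan isn't automatically bad; it's just a place to start looking):
--===== Sketch only: find cached plans that contain a table or clustered index scan
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT st.text            AS QueryText,
       qs.execution_count,
       qs.total_logical_reads,
       qp.query_plan
  FROM sys.dm_exec_query_stats qs
 CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle)   st
 CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) qp
 WHERE qp.query_plan.exist('//RelOp[@PhysicalOp="Table Scan" or @PhysicalOp="Clustered Index Scan"]') = 1
 ORDER BY qs.total_logical_reads DESC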
--Jeff Moden
Change is inevitable... Change for the better is not.