November 2, 2003 at 9:02 am
I was tuning up a moderately complex query that was taking 15 minutes or so. I got it down to about 6 minutes, but noticed something and investigated.
When I would run the Execution Plan, producing it took about 2-3 minutes. This is on a quad processor fast server. Minutes, not seconds. The query is dynamically generated (though I may change that now!).
Can I assume that is effectively statement compilation time, and that the same portion of the time will be going into regular execution as well, so if it takes 6 minutes (and is not cached) the same 2-3 minutes of compilation are part of it?
But what the heck can cause that? I'm expecting compilation in milliseconds, or a few tenths of a second. But minutes?
By the way, the server was completely idle when I was doing the testing, and before each run I did:
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
I just can't figure out why any select statement would take so long to compile.
I realize without context it will not mean much, but am including the statement below. The ORPostDetail table is very large (150 million rows), the others are much smaller. The statement is produced by something which replaces the SET statements just after the DECLARE's.
DBCC DROPCLEANBUFFERS
DBCC FREEPROCCACHE
-- This query will report those returns which were processed in the statement period(s) indicated
-- It will then break these down by the Fiscal month/year (calendar numbers) in which those titles were put on sale (by on-sale date)
-- SBT customers are ignored according to the SBT Cutoff (if cutoff is before they became SBT they are included)
declare @FromWk SmallDateTime
declare @ToWk SmallDateTime
declare @NatChainCd int
declare @DivChainCd int
declare @BipadCd int
declare @IssueDateCd int
declare @SBTCutoffDt smalldatetime
declare @sNatChainCd varchar(20)
declare @sDivChainCd varchar(20)
declare @sBipadCd varchar(20)
declare @sIssueDateCd varchar(20)
declare @sSBTCutoffDt smalldatetime
set @FromWk = '10/4/2003'
set @ToWk = '12/31/2003'
set @sBipadCd = ''
set @sIssueDateCd = ''
set @sNatChainCd= ''
set @sDivChainCd= ''
set @sSBTCutoffDt = '9/27/2003'
set @BipadCd = case when @sBipadCd='' then null else @sBipadCd end
set @NatChainCd = case when @sNatChainCd='' then null else @sNatChainCd end
set @DivChainCd = case when @sDivChainCd='' then null else @sDivChainCd end
set @IssueDateCd = case when @sIssueDateCd='' then null else @sIssueDateCd end
set @SBTCutoffDt = case when @sSBTCutoffDt='' then null else @sSBTCutoffDt end
select Period,
sum(case when UnitCd='BRN' then Qty else 0 end) as BRNQty,
sum(case when UnitCd='CLA' then Qty else 0 end) as CLAQty,
sum(case when UnitCd='CLC' then Qty else 0 end) as CLCQty,
sum(case when UnitCd='LUN' then Qty else 0 end) as LUNQty,
sum(case when UnitCd='BRN' then ARAmt else 0 end) as BRNARAmt,
sum(case when UnitCd='CLA' then ARAmt else 0 end) as CLAARAmt,
sum(case when UnitCd='CLC' then ARAmt else 0 end) as CLCARAmt,
sum(case when UnitCd='LUN' then ARAmt else 0 end) as LUNARAmt,
sum(case when UnitCd='BRN' then APAmt else 0 end) as BRNAPAmt,
sum(case when UnitCd='CLA' then APAmt else 0 end) as CLAAPAmt,
sum(case when UnitCd='CLC' then APAmt else 0 end) as CLCAPAmt,
sum(case when UnitCd='LUN' then APAmt else 0 end) as LUNAPAmt
from (
select UnitCd, Period, sum(Qty) as Qty, sum(Qty * ARCst) as ARAmt, sum(Qty * APCst) as APAmt
from (
select od.UnitCd,
( select top 1 Convert(char(4),CalendarYearNbr) + ' - ' + Substring(Convert(Char(3),CalendarMthNbr + 100),2,2)
from Calendar cl
where cl.StmtDt >= i.OnSaleDt
order by StmtDt asc
) as Period,
Qty, od.ARCst, od.APCst
from ORPost o
inner join ORPostDetail od on o.UnitCd=od.UnitCd and o.CustCd=od.CustCd and o.ORPostTK=od.ORPostTK
inner join ARType a on a.UnitCd=od.UnitCd and a.ARCd=od.ARCd
inner join ORPostAR oa on oa.UnitCd = od.UnitCd and oa.CustCd = od.CustCd and oa.ORPostTK = od.ORPostTK and oa.ARCd = od.ARCd
inner join Issue i on i.UnitCd = od.UnitCd and i.TitleTK = od.TitleTK and i.IssueDateCd = od.IssueDateCd
inner join title t on t.UnitCd=i.UnitCd and t.TItleTK=i.TitleTK
inner join Customer c on c.UnitCd=od.UnitCd and c.CustCd=od.CustCd
where od.StmtDt between @FromWk and @ToWk and O.PicAdjustFlg = 'N' and a.ORPostColCd=3 and oa.ActualARAmt <> 0 and
(c.SBTStartDt> @SBTCutoffDt)
) as x
group by UnitCd, Period
) as z
group by Period
order by 1 desc
November 2, 2003 at 3:50 pm
Compilation can be a substantial portion of the execution time, but 2-3 minutes seems excessive, even for something as complicated as you have here.
Given that, you might consider reworking the SQL a bit. Maybe define a denormalized table that "precomputes" some of this data, like the sum of the Qtys. I might also precompute the dates if possible. Another suggestion would be to run some of this in steps: use a temp table to load some values and reduce the number of joins.
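A sketch of the stepwise idea, using the table and column names from the query above (the temp-table name and the choice of which join to stage first are my own assumptions):

```sql
-- Hypothetical sketch: stage the big filtered join into a temp table first,
-- then aggregate from the (much smaller) temp table in a second statement.
select od.UnitCd, od.CustCd, od.TitleTK, od.IssueDateCd, od.ARCd,
       od.Qty, od.ARCst, od.APCst
into #Detail
from ORPost o
inner join ORPostDetail od
    on o.UnitCd = od.UnitCd and o.CustCd = od.CustCd and o.ORPostTK = od.ORPostTK
where od.StmtDt between @FromWk and @ToWk and o.PicAdjustFlg = 'N'

-- Now join #Detail to Issue, Calendar, ARType, etc. and aggregate;
-- each individual statement is simpler for the optimizer to cost.
```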
Steve Jones
http://www.sqlservercentral.com/columnists/sjones
The Best of SQL Server Central.com 2002 - http://www.sqlservercentral.com/bestof/
November 2, 2003 at 3:53 pm
I think it's a bug in SQL. Though I'm not sure what kind.
I redid it as a stored procedure, and it creates the procedure (and presumably compiles it) in a fraction of a second.
When I run the stored procedure, it is about twice as fast as it was when I ran the SQL directly.
I'm not sure what was causing it, but something weird seemed to get the compiler a bit stuck. Strange.
November 4, 2003 at 1:32 am
One of my customers had a similar problem, though they were already using a stored procedure. During analysis of their system and stored procedures, I noticed that one sp was taking more than an hour to complete, but for most of that time the server was doing nothing but compiling the sp. It was a rather complex insert query with a join involving something like 18 tables (it was a data warehouse loading process), but once an execution plan was found, the insert statement would take only a few minutes to execute. And since the server had to be stopped at the end of the loading process to detach the files, there was no hope the execution plan would be found in the cache the next day.
Now I don't know if this was a bug, but sometimes the query optimizer can do nothing but use a brute-force algorithm and hope it finds a suitable plan fast (OK, I'm simplifying things a bit, but not that much). My solution (the one I almost always use in these and other situations, and may I say quite successfully) was to use join hints to wire the execution plan into the query itself: this way the query engine only works as a syntax parser. Of course this means you must find a good execution plan to use, choosing which type of join to use and the best join order, but that is another story. Anyway, if you're satisfied with the plan SQL Server comes up with, you can use hints and brackets to wire that plan into the query.
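For example (a hypothetical fragment, not the poster's actual query): a join hint names the physical join type inline, and OPTION (FORCE ORDER) preserves the written join order, which leaves the optimizer far fewer alternatives to cost:

```sql
-- Hypothetical sketch: wire the join type and the written join order
-- into the query so the optimizer has almost nothing left to decide.
select o.CustCd, sum(od.Qty) as Qty
from ORPost o
inner hash join ORPostDetail od
    on o.UnitCd = od.UnitCd and o.CustCd = od.CustCd and o.ORPostTK = od.ORPostTK
group by o.CustCd
option (force order)
```

Note that specifying a join hint in the FROM clause already implies the written join order for that join; OPTION (FORCE ORDER) extends it to the whole query.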
Salvor
November 4, 2003 at 2:34 am
How do you break the query down into compile time and execution time?
November 4, 2003 at 4:51 am
There are a few possibilities:
1) use SET STATISTICS TIME ON;
2) watch how long the COMPILE lock is held on the sp;
3) using the Profiler, trace the time between the event SP:StmtStarting (if you're executing sp's) or SQL:StmtStarting (for single statements) and the subsequent Showplan All event.
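Option 1 looks like this; the parse-and-compile line is reported separately from the execution line (the procedure name here is a stand-in):

```sql
set statistics time on
go
-- run the statement or procedure under test
exec dbo.MyReportProc   -- hypothetical procedure name
go
set statistics time off
-- The Messages tab then shows, per statement:
--   SQL Server parse and compile time: CPU time = ... ms, elapsed time = ... ms.
--   SQL Server Execution Times:        CPU time = ... ms, elapsed time = ... ms.
```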
Salvor
November 6, 2003 at 7:15 am
Fergusson,
have you had a look at the query plan when the "compile" time was so long? It must have been a little complex, but try to check whether it used temporary tables for execution. I suspect the optimizer chose to put the results of some of your sub-queries into internal temp tables (sub-queries of the type "select xxx from (select yyy... from zzz)").
In a stored proc, this may not happen because it would not immediately collect the data and therefore would not put any data into temp tables. I wonder what the query plan looks like in either case.
November 6, 2003 at 9:04 am
No, it was quite normal looking. I experimented a bit with some hints forcing it to table scans or changing index usage as part of tuning it up, without really changing the compile times.
I did a statistics run, and frankly do not understand what I see. This is a statistics run for a showplan only, no actual execution, and only for the code above, NOT including the DBCC statements. I really didn't expect to see all these pieces, and I'm not at all sure whether it is showing me anything useful. (More conversation below.)
=====================================================
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 160047 ms, elapsed time = 167569 ms.
SQL Server Execution Times:
CPU time = 160047 ms, elapsed time = 167569 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
=================================================
Afterwards, I actually executed the statements. This time I got the below. Look at that parse time, almost entirely CPU time.
SQL Server parse and compile time:
CPU time = 163765 ms, elapsed time = 186009 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(data omitted)
SQL Server Execution Times:
CPU time = 261839 ms, elapsed time = 125663 ms.
=====================================================
Then I put a CREATE PROC around it, and ONLY a CREATE PROC -- no parameters, nothing, exactly the same statements. Here's the execution time for creating the proc (I cleared the proc cache and buffers first):
SQL Server Execution Times:
CPU time = 15 ms, elapsed time = 126 ms.
OK, now I executed the stored procedure (zeros removed for the assignment statements)
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 167250 ms, elapsed time = 172702 ms.
SQL Server Execution Times:
CPU time = 198217 ms, elapsed time = 87704 ms.
So it's still spending huge amounts of time compiling. Here's what I really do not understand -- I ran the same procedure again, unchanged. Just executed it again, and did not clear any cache or buffers this time. I expected it NOT to recompile. I got the below (again omitting zeros). It looks to me like saving it as a stored procedure is not storing the query plan.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 165906 ms, elapsed time = 173471 ms.
SQL Server Execution Times:
CPU time = 203151 ms, elapsed time = 97523 ms.
I just think it's broken, though not sure how it is broken.
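One way to check whether the procedure's plan is actually being cached is to look in master..syscacheobjects (this is a sketch against SQL Server 2000; the procedure name is a stand-in):

```sql
-- Look for a cached plan for the procedure; usecounts should climb
-- on repeated executions if the plan is really being reused.
select cacheobjtype, objtype, usecounts, sql
from master.dbo.syscacheobjects
where sql like '%MyReportProc%'   -- hypothetical procedure name
```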
November 6, 2003 at 10:06 am
OK, it may sound a little weird, but how about the following experiment: shrink the ORPostDetail table so it doesn't contain 150 Million rows but maybe just a few hundred and try the query again. If you don't want to touch the production table, copy it and adjust the query. I just wonder if the query compilation gets delayed by the amount of data in the table. Maybe during compilation data is evaluated or aggregated in internal temp tables and this takes all the time.
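The experiment could be set up like this (the copy-table name and row count are mine):

```sql
-- Hypothetical sketch: copy a small slice of the big table, then point
-- the query at the copy and see whether the compile time drops.
select top 1000 *
into ORPostDetail_Small
from ORPostDetail

-- Optionally build fresh statistics on the copy before testing:
-- create statistics or update statistics on ORPostDetail_Small
```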
November 6, 2003 at 12:27 pm
If I get some time, I will. I'm not quite sure whether that would tell me a lot, since not only is the data a different size but all the statistics are different as well.
If the optimizer is actually looking at the data (beyond any statistics it wants to build, which should happen only once!) then there's something really different between this optimizer and what I expected it to do.