October 27, 2008 at 6:54 am
Hello all!
I have written this code that runs like a dog and I can't figure out why!!
ALTER PROCEDURE [dbo].[getDiaryEntries] (
@StartDate AS DATETIME
,@EndDate AS DATETIME
,@BrokerId AS INT
,@Filter AS BIT
) AS
/*
--DECLARE @StartDate AS DATETIME
SET @StartDate = DATEADD(dd, -5000, GETDATE())
--DECLARE @EndDate AS DATETIME
SET @EndDate = DATEADD(dd, -11, GETDATE())
--DECLARE @BrokerId AS INT
--SET @BrokerId = 850
SET @BrokerId = 2
--DECLARE @Filter AS BIT
SET @Filter = 0 -- 1 = complete
*/
DECLARE @AccessLevel AS INT
SET @AccessLevel = (SELECT ACCESS_LEVEL_ID FROM BROKERS WHERE BROKER_ID = @BrokerId)
-- Get the broker's group if the access level is 3
DECLARE @BrokerGroupName AS VARCHAR(30)
SET @BrokerGroupName =
(SELECT
BG.BROKER_GROUP_NAME
FROM BROKERS AS BR
INNER JOIN BROKER_OFFICES AS BO
ON BR.BROKER_OFFICE_ID = BO.BROKER_OFFICE_ID
INNER JOIN BROKER_GROUPS AS BG
ON BO.BROKER_COMPANY_ID = BG.BROKER_COMPANY_ID
WHERE
BR.BROKER_ID = @BrokerId
AND @AccessLevel = 3)
SELECT DE.DAIRY_EVENT_ID AS DIARY_EVENT_ID
, AG.AGREEMENT_ID
, DE.AGREEMENT_NUMBER
, BR.BROKER_ID
, BR.BROKER_NAME + ' (' + UPPER(BR.USERNAME) + ')' AS DisplayName
, DE.DAIRY_SUBJECT AS DIARY_SUBJECT
, DE.DAIRY_DESCRIPTION AS DIARY_DESCRIPTION
, CONVERT(CHAR(5), CONVERT(VARCHAR(19), DE.DAIRY_DATE, 108)) AS EntryTime
, DE.DAIRY_DATE AS DiaryDate
, CASE WHEN (DE.[STATUS] = 1) THEN 'Complete'
ELSE 'Active'
END AS [Status]
FROM DAIRY_EVENTS AS DE
INNER JOIN BROKERS AS BR
ON BR.BROKER_ID = DE.BROKER_ID
LEFT JOIN BROKER_OFFICES AS BO
ON BR.BROKER_OFFICE_ID = BO.BROKER_OFFICE_ID
LEFT JOIN BROKER_GROUPS AS BG
ON BO.BROKER_COMPANY_ID = BG.BROKER_COMPANY_ID
LEFT JOIN AGREEMENTS AS AG
ON DE.AGREEMENT_NUMBER = AG.AGREEMENT_NUMBER
WHERE
DE.DAIRY_DATE BETWEEN DATEADD(dd, 0, DATEDIFF(dd, 0, @StartDate))
AND DATEADD(ss, -1, DATEADD(dd, 1, DATEDIFF(dd, 0, @EndDate)))
AND ((@AccessLevel <> 3) OR (@AccessLevel = 3 AND BG.BROKER_GROUP_NAME = @BrokerGroupName))
AND DE.[STATUS] = 0
ORDER BY DE.DAIRY_DATE
When running it like this: getDiaryEntries '2003-01-01 00:00:00.000', '2008-10-27 00:00:00.000', 2, 0
over a relatively small DB / table, it takes approx 1:30 - 1:40 (mm:ss).
Now when I remove the /* */ at the top and run the same code but with constants it runs in about 3 seconds - the speed that I would expect given the size of the table.
I have had a search on the web and seen a few people who have found the same thing; however, how can I stop this behaviour? The whole point of SPs is to give a performance boost!
Things that I won't be trying include any form of hard-coding / constant usage or any form of dynamic sql for obvious reasons.
Please help!
October 27, 2008 at 7:32 am
There are a couple of things you should do.
1. View your execution plan to verify that you have appropriate indexes.
2. Look up "parameter sniffing". Here's a blog entry about it.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
October 27, 2008 at 7:38 am
Sure sounds like parameter sniffing. Since it's 2005, you might try using the OPTIMIZE FOR method to get good execution plans.
You might try setting these values into two other parameters instead of calculating them:
DATEADD(dd, 0, DATEDIFF(dd, 0, @StartDate))
AND DATEADD(ss, -1, DATEADD(dd, 1, DATEDIFF(dd, 0, @EndDate)))
No guarantees there, but I'd try it.
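Something like this, as a rough sketch only (@RangeStart and @RangeEnd are just illustrative names; the column name is taken from the posted proc):
DECLARE @RangeStart AS DATETIME
DECLARE @RangeEnd AS DATETIME
-- Midnight on @StartDate
SET @RangeStart = DATEADD(dd, 0, DATEDIFF(dd, 0, @StartDate))
-- One second before midnight on the day after @EndDate
SET @RangeEnd = DATEADD(ss, -1, DATEADD(dd, 1, DATEDIFF(dd, 0, @EndDate)))
-- ...then in the WHERE clause:
-- WHERE DE.DAIRY_DATE BETWEEN @RangeStart AND @RangeEnd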
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 27, 2008 at 7:41 am
I'd run this with recompile as well and see if it's sniffing something.
The execution plan, as Jack mentioned, is where I'd start looking for answers.
October 27, 2008 at 8:20 am
Thanks for your helpful comments - parameter sniffing was exactly what it was!! The fix (so to speak) was to copy the DATETIME parameters into variables held within the SPROC and run my BETWEEN from them...
Only thing is, I am now left wondering why the hell I have never seen this behaviour before, seeing as I have been working with SQL in one form or another for over 5 years and have written literally hundreds of SPs using two DATETIME parameters!!
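In case it helps anyone else, the change was essentially this (just a sketch - the @Local... names are made up):
ALTER PROCEDURE [dbo].[getDiaryEntries] (
@StartDate AS DATETIME
,@EndDate AS DATETIME
,@BrokerId AS INT
,@Filter AS BIT
) AS
-- Copy the parameters into local variables so the plan is not
-- compiled ("sniffed") against the first caller's actual values
DECLARE @LocalStartDate AS DATETIME
DECLARE @LocalEndDate AS DATETIME
SET @LocalStartDate = @StartDate
SET @LocalEndDate = @EndDate
-- ...rest of the proc unchanged, except the BETWEEN now uses
-- @LocalStartDate / @LocalEndDate instead of @StartDate / @EndDate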
October 27, 2008 at 8:45 pm
chunkym0nk3y (10/27/2008)
Thanks for your helpful comments - parameter sniffing was exactly what it was!! The fix (so to speak) was to copy the DATETIME parameters into variables held within the SPROC and run my BETWEEN from them... Only thing is, I am now left wondering why the hell I have never seen this behaviour before, seeing as I have been working with SQL in one form or another for over 5 years and have written literally hundreds of SPs using two DATETIME parameters!!
It is mostly a function of data volumes and/or data distributions. Often neither is present in a dataset, and the query plan picked for one set of inputs is acceptable for essentially all others. Or the data is so small that spinning nested-loop joins for a 'large-spread' set of input parameters is still sufficiently fast that it doesn't show up on the performance-monitoring radar.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 29, 2008 at 2:34 pm
The best medicine for parameter sniffing (in my experience) is an "UPDATE STATISTICS tblName WITH FULLSCAN" on all (large) tables involved.
__________________________________________________________________________________
SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
Persisting SQL Server Index-Usage Statistics with MERGE
Turbocharge Your Database Maintenance With Service Broker: Part 2
October 29, 2008 at 4:34 pm
Marios Philippopoulos (10/29/2008)
The best medicine for parameter sniffing (in my experience) is a "UPDATE STATISTICS tblName WITH FULLSCAN" on all (large) tables involved.
Actually, update stats does nothing to address parameter sniffing other than clear out the procedure cache for those objects. That does allow a new plan to be instantiated on the next call of a particular sproc, but you are then stuck with that plan - and if it was one that generated a nested-loop series of joins because the call only hit 3 rows, and the next 1000 calls of that sproc hit 100K rows, POOF goes performance.
Typical ways to deal with PS include (but are probably not limited to) the following (a rough sketch of the first two is after the list):
1) call sproc (or execute statement) WITH RECOMPILE
2) compile sproc WITH RECOMPILE
3) use variables inside the sproc that get set to the input parameters
4) dynamic SQL
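Rough sketches of the first two against the proc in this thread (the parameter values just reuse the ones from the original post):
-- 1) Recompile this execution only; a fresh plan is compiled for the call and not cached
EXEC [dbo].[getDiaryEntries] '2003-01-01 00:00:00.000', '2008-10-27 00:00:00.000', 2, 0 WITH RECOMPILE
-- 2) Or mark the proc itself so every execution recompiles
--    (add WITH RECOMPILE between the parameter list and AS in the CREATE/ALTER PROCEDURE):
-- ALTER PROCEDURE [dbo].[getDiaryEntries] ( ...parameters... ) WITH RECOMPILE AS ...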
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 30, 2008 at 5:28 am
I'd add one more:
5) Use the OPTIMIZE FOR query hint to get a plan based on a good parameter.
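For example, inside the proc (a sketch only - the hinted dates are made-up "typical" values just to show the syntax, and the query is abbreviated):
SELECT DE.DAIRY_EVENT_ID AS DIARY_EVENT_ID
-- ...rest of the columns and joins as in the original proc...
FROM DAIRY_EVENTS AS DE
INNER JOIN BROKERS AS BR
ON BR.BROKER_ID = DE.BROKER_ID
WHERE DE.DAIRY_DATE BETWEEN @StartDate AND @EndDate
AND DE.[STATUS] = 0
ORDER BY DE.DAIRY_DATE
-- Compile the plan as if these representative values had been passed in,
-- regardless of what the first caller actually supplied
OPTION (OPTIMIZE FOR (@StartDate = '20080101', @EndDate = '20081027'))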
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 30, 2008 at 7:32 am
Grant Fritchey (10/30/2008)
I'd add one more: 5) Use the OPTIMIZE FOR query hint to get a plan based on a good parameter.
That still gets you a bad plan for "non-good" parameters. Perhaps that would be acceptable in some situations, but it has been my experience that clients don't want ANY of their queries to crush the server for an extended period and/or time out to the front end. They are willing to accept slightly slower performance on each individual call (e.g. waiting for a compile every time) to avoid the bad scenario.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 30, 2008 at 7:50 am
TheSQLGuru (10/30/2008)
Grant Fritchey (10/30/2008)
I'd add one more: 5) Use the OPTIMIZE FOR query hint to get a plan based on a good parameter.
That still gets you a bad plan for "non-good" parameters. Perhaps that would be acceptable in some situations, but it has been my experience that clients don't want ANY of their queries to crush the server for an extended period and/or time out to the front end. They are willing to accept slightly slower performance on each individual call (e.g. waiting for a compile every time) to avoid the bad scenario.
True, but I've found that, by and large (and there surely are exceptions), the parameter that gets a "good" plan works well enough on the parameters that would cause the "bad" plan, and you save all the recompile time rather than swapping plans out all the time.
But it really is a situational fix. In some situations, this could cause as much of a problem as the original issue.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning