April 26, 2010 at 2:54 am
We are using a query, and in its execution plan the sort operation shows a CPU cost of 182%. Please advise on how to reduce it.
query structure:
SELECT DISTINCT
'Duplicity' as Recovery_Class,
'LINE' Recovery_Type,
b.claim_type,
b.Check_Ref_ID,
min(a.claim_id) original_claim_id,
convert(varchar,getdate(),101) Load_Date,
c.year_month_num load_month,
0 as include_flag,
0 as isreviewed
from dbo.PR_PaieDups a with(nolock, index(PR_PaidCls1))
join dbo.PR_Paps b with(nolock, index(PR_Paups1))
on a.subscriber_id = b.subscriber_id --Subscriber
join dw.dbo.date c with(nolock) on convert(varchar,getdate(),101) = convert(varchar,c.date_id,101)
left outer join CIes z on b.claim_id = z.claim_nbr and z.recovery_class = 'Duplicity'
left outer join tbl_ble1 y on b.claim_id = y.claim_nbr and y.recovery_class = 'Duplicity'
where
a.from_date = b.from_date --From Date (DOS)
and a.servicer_id = b.servicer_id --Provider ID
and a.billed_amount = b.billed_amount --Billed Amount
and a.proc_code = b.proc_code --Proc Code
and a.tos = b.tos --TOS
and a.rev_code = b.rev_code --Rev Code
and a.pos = b.pos --POS
and a.mod_1 = '' and b.mod_1 = '' --null modifiers
and a.mod_2 = '' and b.mod_2 = ''
and a.mod_3 = '' and b.mod_3 = ''
and a.mod_4 = '' and b.mod_4 = ''
and a.root_claim_id < b.root_claim_id --original claim ID is less than the duplicate claim ID
and b.claim_cd = 'H' --Facility Claims
and b.paid_amount > 0
and a.claim_date <= b.claim_date --original received before duplicate received
and z.claim_nbr is null
and y.claim_nbr is null
group by
b.claim_type,
b.subscriber_id,
April 26, 2010 at 3:49 am
Some readings for you:
How to post performance problems
Consistency problems with NOLOCK
TIMEBOMB - THE CONSISTENCY PROBLEM WITH NOLOCK / READ UNCOMMITTED
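The short version: every WITH (NOLOCK) hint in that query is a dirty read, so the duplicate list can include rows that later get rolled back, or miss rows that are in the middle of an update. A minimal sketch of one alternative, assuming you can change the database option (the database name below is just a placeholder; the table names are taken from your post):

ALTER DATABASE YourClaimsDb SET READ_COMMITTED_SNAPSHOT ON;  -- one-off setting; waits for other connections to clear

-- After that, simply drop the hints. Readers use row versions instead of
-- shared locks, so they don't block writers and never return uncommitted data.
SELECT a.claim_id
FROM dbo.PR_PaieDups AS a            -- no WITH (NOLOCK)
JOIN dbo.PR_Paps     AS b
  ON a.subscriber_id = b.subscriber_id;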
-- Gianluca Sartori
April 26, 2010 at 6:51 am
The following two items in your code above...
and a.root_claim_id < b.root_claim_id --original claim ID is less than the duplicate claim ID
and a.claim_date <= b.claim_date --original received before duplicate received
... probably constitute a "Triangular Join". Please see the following article for why that's such a bad thing...
http://www.sqlservercentral.com/articles/T-SQL/61539/
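To get a feel for the scale, here's a quick throw-away illustration (a temp table of made-up claim IDs, nothing from your real schema): with N rows on each side, an inequality join has roughly N*(N-1)/2 qualifying pairs, so the work grows with the square of the row count.

-- Build 1,000 dummy claim IDs (purely illustrative).
CREATE TABLE #Claims (claim_id INT PRIMARY KEY);

INSERT INTO #Claims (claim_id)
SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_objects;

-- 1,000 rows in, 499,500 pairs out: the "less than" join alone touches
-- nearly half a million row combinations before any other filter applies.
SELECT COUNT(*) AS pairs_touched
FROM #Claims AS a
JOIN #Claims AS b
  ON a.claim_id < b.claim_id;

DROP TABLE #Claims;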
Also, anytime you see a "SELECT DISTINCT" in code, you may simply not have the proper criteria in the ON and WHERE clauses, which can cause a huge number of rows to be "touched" behind the scenes.
My recommendation is to reevaluate the code and divide it up: resolve the smallest number of rows necessary first, using either a CTE or (sometimes better yet) a Temp Table, and then join the other tables to that result. I've changed many a 45-minute query into one that resolves in just a couple of seconds using the "Divide'n'Conquer" method I've just described.
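Here's a rough sketch of the pattern, reusing the names from your post (so treat every table and column name as an assumption about your schema rather than working code):

-- Step 1: resolve the smallest set of candidate duplicates first and
-- materialise it in a temp table. Keep every filter that throws rows away.
SELECT  b.claim_type,
        b.Check_Ref_ID,
        b.claim_id,                       -- kept so step 2 can check the recovery tables
        MIN(a.claim_id) AS original_claim_id
INTO    #Candidates
FROM    dbo.PR_PaieDups AS a
JOIN    dbo.PR_Paps     AS b
        ON  a.subscriber_id = b.subscriber_id
        AND a.from_date     = b.from_date
        AND a.billed_amount = b.billed_amount
WHERE   b.claim_cd = 'H'
  AND   b.paid_amount > 0
  AND   a.root_claim_id < b.root_claim_id
GROUP BY b.claim_type, b.Check_Ref_ID, b.claim_id;

-- Step 2: run the "not already recovered" checks against the small set only.
SELECT  c.*
FROM    #Candidates AS c
LEFT JOIN CIes     AS z ON c.claim_id = z.claim_nbr AND z.recovery_class = 'Duplicity'
LEFT JOIN tbl_ble1 AS y ON c.claim_id = y.claim_nbr AND y.recovery_class = 'Duplicity'
WHERE   z.claim_nbr IS NULL
  AND   y.claim_nbr IS NULL;

Once step 1 has been grouped down to a small candidate set, the SELECT DISTINCT usually turns out to be unnecessary as well.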
As a sidebar, trying to do everything in a single query doesn't necessarily mean the code is "Set Based". In fact, it can mean quite the opposite and can cause huge amounts of "Hidden RBAR".
--Jeff Moden
Change is inevitable... Change for the better is not.
April 26, 2010 at 7:25 am
And if you really want someone to evaluate the effects of your code choices on the structure, then at a bare minimum you need to include the actual execution plan. Even better, provide scripts for the structure and some sample data.
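If you'd rather not use the "Include Actual Execution Plan" button in SSMS, one way to capture the actual plan as XML for attaching is the following (the statement in the middle is just a placeholder for your own query):

SET STATISTICS XML ON;   -- each statement now returns its actual plan as an extra XML result

-- <run the problem query here>

SET STATISTICS XML OFF;

Save the returned XML as a .sqlplan file and attach it, together with the CREATE TABLE / CREATE INDEX scripts and a handful of representative rows.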
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning