May 25, 2012 at 12:07 pm
I have a 200GB DB running on SQL Server 2012 Ent, and an SP that accesses a number of tables.
When I access data up to 350k records, it takes less than one second.
When I access data from 475k records, it takes 21 seconds.
What could cause the same SP to run that much slower, with only 125k more records?
Actual Execution plan does not suggest anything to improve the query 🙁
BTW total number of records in the DB is 4.5 Million.
Thanks
May 25, 2012 at 12:21 pm
Post both actual execution plans; otherwise nothing can be said from this description alone. One possible reason is that when you ran the proc with 350K rows it generated one plan, and that plan was cached. When the row count increases to 475K, the same plan might no longer be effective. Try the following on the DEV server.
Run the query with 475K rows with a recompile and see if there is any improvement in the plan cost and the overall elapsed time.
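A minimal sketch of that recompile test (sp_recompile and OPTION (RECOMPILE) are standard T-SQL; the procedure name below is a placeholder, and the column names are taken from the procedure posted later in this thread):

```sql
-- Option 1: invalidate the cached plan so the next execution compiles fresh
EXEC sp_recompile N'dbo.YourSearchProc';  -- placeholder procedure name

-- Option 2: recompile only the problem statement on every execution
SELECT p.id, p.regionid, p.cityid
FROM dbo.profiles AS p
WHERE p.allowrecruiters = 1
OPTION (RECOMPILE);
```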
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 12:42 pm
Test One 343k records - one second
May 25, 2012 at 12:45 pm
Test two, 474k records in 29 seconds!
May 25, 2012 at 12:56 pm
Try executing the query with 475K rows with a recompile and see what kind of plan is generated.
Look at the first table, profiles, which returns around 373K rows while the estimate is around 25K, so check whether the statistics are up to date.
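As a sketch, the age of the statistics on the profiles table can be checked and refreshed like this (sys.stats, STATS_DATE, and UPDATE STATISTICS are standard T-SQL; the table name is taken from the posted plan):

```sql
-- When was each statistic on dbo.profiles last updated?
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.profiles');

-- Refresh with a full scan if the dates look stale
UPDATE STATISTICS dbo.profiles WITH FULLSCAN;
```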
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 1:02 pm
isuckatsql (5/25/2012)
Actual Execution plan does not suggest anything to improve the query 🙁
Several parallelism operators in the parallel plan have a warning symbol to show you that exchange buffers were spilled to tempdb, probably to resolve an intra-query parallelism deadlock. The plan also shows that your query used 260 worker threads at DOP 64, despite only processing a relatively small number of rows. There is also a warning for a conversion that will affect cardinality estimation (and therefore, plan choice). Estimated row counts are very different from actual row counts in a number of places. Example shown below:
You can improve this query by applying normal tuning techniques. Provide useful indexes (you have many scans) and ensure row count estimates are ballpark-correct compared with actual counts. You might find it easiest to split this complex query into smaller parts, using one or more temporary tables to hold small intermediate result sets. This will give the query optimizer better information and reduce the chance of incorrect estimates resulting in a poor plan choice. It will also make the query more maintainable. You should also consider significantly reducing the degree of parallelism for this query (if not for the server as a whole).
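One hedged sketch of the split-and-cap approach described above, using table and column names from the posted query (the intermediate column list and the MAXDOP value are illustrative only):

```sql
-- Materialize the small filtered set first so later joins see accurate counts
SELECT p.id, p.regionid, p.cityid
INTO #FilteredProfiles
FROM dbo.profiles AS p
WHERE p.allowrecruiters = 1;

CREATE CLUSTERED INDEX cx_fp ON #FilteredProfiles (id);

-- Join the small intermediate set, capping parallelism for this statement
SELECT f.id, w.Company1
FROM #FilteredProfiles AS f
LEFT JOIN dbo.AllCompanies AS w ON w.UniqueId = f.id
OPTION (MAXDOP 4);
```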
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 25, 2012 at 1:28 pm
I think the real issue is the table AllRecentWages, where the actual row count is 95 (estimated 16,763) in the 373K-row plan but 4.4 million (estimated 8,000+) in the 473K-row plan. Check the statistics on this table.
Best would be to check when the statistics were last updated; if they are stale, update them, rerun the query, and verify that the generated plan estimates the rows properly and is good enough.
If the estimates are still wrong and the plan is not good, then follow what Paul mentioned and break the query into smaller queries to make sure the estimates are correct.
It looks from your query like you apply a ranking function (a PARTITION BY clause plus some ranking function) to the profiles table first, return just a few rows (9 rows), and then join this with the other tables.
The ranking function returns only 1 row, so I would suggest you first put its result into a temp table, then join that with the rest of the query, breaking it down further if needed.
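A sketch of that suggestion, reusing names from the posted procedure (@fromRec and @toRec stand in for the paging parameters; the join columns follow the posted query):

```sql
DECLARE @fromRec int = 1, @toRec int = 20;

-- Step 1: evaluate the ranking over profiles alone and materialize it
SELECT p.id,
       ROW_NUMBER() OVER (ORDER BY p.id DESC) AS RowNumber
INTO #Ranked
FROM dbo.profiles AS p
WHERE p.allowrecruiters = 1;

-- Step 2: join only the requested page to the remaining tables
SELECT r.id, w.Company1, arjt.RecentJobTitle
FROM #Ranked AS r
LEFT JOIN dbo.AllCompanies AS w ON w.UniqueId = r.id
LEFT JOIN dbo.AllRecentJobTitles AS arjt ON arjt.UniqueId = r.id
WHERE r.RowNumber BETWEEN @fromRec AND @toRec
ORDER BY r.RowNumber;
```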
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:03 pm
Guys,
Thanks for the assistance!
I removed the reference to "AllRecentWages" and got 850k records in 10 seconds, so that was certainly a big part of the issue!
Now I need to figure out how to add the data back in without causing the same issue!
May 25, 2012 at 2:10 pm
Paul.
In the attached screenshot you will see that the estimated IO cost is 185.2 and the estimated CPU cost is around 4.8. The total subtree cost should then be around 190.15, but it shows 0.38708, which is roughly 1/500th of the combined estimated CPU and IO cost.
Why is this happening? It is happening for other operators as well.
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:14 pm
Gullimeel (5/25/2012)
Paul. In the attached screenshot you will see that the estimated IO cost is 185.2 and the estimated CPU cost is around 4.8. The total subtree cost should then be around 190.15, but it shows 0.38708, which is roughly 1/500th of the combined estimated CPU and IO cost.
Why is this happening? It is happening for other operators as well.
How do you come up with 190.15? Is this the same algorithm that the Query Optimizer uses to calculate costs?
May 25, 2012 at 2:15 pm
Guys,
Thanks for the assistance!
I removed the reference to "AllRecentWages" and got 850k records in 10 seconds, so that was certainly a big part of the issue!
Now I need to figure out how to add the data back in without causing the same issue!
Can you post the query if possible?
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:17 pm
Gullimeel (5/25/2012)
--------------------------------------------------------------------------------
Paul.
In the attached screenshot you will see that the estimated IO cost is 185.2 and the estimated CPU cost is around 4.8. The total subtree cost should then be around 190.15, but it shows 0.38708, which is roughly 1/500th of the combined estimated CPU and IO cost.
Why is this happening? It is happening for other operators as well.
How do you come up with 190.15? Is this the same algorithm that the Query Optimizer uses to calculate costs?
Below is the article by Joe Chang where costs are explained very nicely. It is a must-read for understanding how costing works.
Usually the operator cost is the sum of the IO cost and the CPU cost.
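To compare the IO, CPU, and subtree costs side by side, the showplan XML of cached plans can be queried directly; this is a sketch using real DMVs and standard showplan attributes:

```sql
-- Pull per-operator estimated costs out of cached plan XML
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT TOP (20)
       n.value('@PhysicalOp', 'varchar(60)')          AS physical_op,
       n.value('@EstimateIO', 'float')                AS est_io,
       n.value('@EstimateCPU', 'float')               AS est_cpu,
       n.value('@EstimatedTotalSubtreeCost', 'float') AS est_subtree_cost
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY qp.query_plan.nodes('//RelOp') AS t(n)
ORDER BY est_subtree_cost DESC;
```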
GulliMeel
Finding top n Worst Performing queries
Improve the performance of Merge Join (special case)
How to Post Performance Problem - Gail Shaw
May 25, 2012 at 2:21 pm
USE [JobPortalIAN]
GO
/****** Object: StoredProcedure [dbo].[GetAllResumesSearchedDynamicQueryBUP5-25-12Test] Script Date: 5/25/2012 4:14:49 PM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- =============================================
-- Author: <Author,Ian>
-- Create date: <Create Date,21-Jan-2009,>
-- Description: <Description,[GetResumesSearchedListFullTextSearch],>
-- 1/14/11 - ian changes
--
-- =============================================
ALTER PROCEDURE [dbo].[GetAllResumesSearchedDynamicQueryBUP5-25-12Test]
-- Add the parameters for the stored procedure here
@Name varchar(250),
@Name1 varchar(250),
@Date int,
@Stateid int,
@CityId int,
@Industry1id int,
@Industry2id int,
@Industry3id int,
@Industry4id int,
@Industry5id int,
@Jobtype1id int,
@Jobtype2id int,
@Jobtype3id int,
@Jobtype4id int,
@Jobtype5id int,
@Language1 int,
@Language2 int,
@Language3 int,
@Language4 int,
@Language5 int,
@Salarytype varchar(10),
@Company1 varchar(200),
@Degree int,
@Skillsid int,
@Experience int,
@MilitaryExp int,
@Security int,
@WorkStatus int,
@ExpectedSalary int,
@ExpectedMaxSalary int,
@Position varchar(100),
@Phrase int,
@fromRec int,
@toRec int
AS
declare @Query as varchar(8000)
set @Query = 'Begin
WITH Res AS
(
select
p.id as Id,
p.regionid,
p.cityid,
p.countryid,
TotalYrsExp as experience,
TotalYears,
p.Industry1id,
p.Industry2id,
p.Industry3id,
p.Industry4id,
p.Industry5id,
p.Desiredempid1,
p.Desiredempid2,
p.Desiredempid3,
p.Desiredempid4,
p.Desiredempid5,
p.Language1,
p.Language2,
p.Language3,
p.Language4,
p.Language5,
--rw.RWageFreq,
p.AcademicExperienceTypeID,
p.Militaryid,
p.Securityid,
p.NewJobStatus,
degree3id, degree2id, degree1id,
case
when degree3id > degree2id then isnull((select top 1 d.Degree3 from AllDegrees d where p.id=d.uniqueid),'''')
when degree3id < degree2id then isnull((select top 1 d.Degree2 from AllDegrees d where p.id=d.uniqueid),'''')
when degree2id > degree1id then isnull((select top 1 d.Degree2 from AllDegrees d where p.id=d.uniqueid),'''')
else isnull((select top 1 d.Degree1 from AllDegrees d where p.id=d.uniqueid),'''')
end
as degree,
(select i.Name from Industries i where i.Id = p.Industry1id) as Industry1,
(select i.Name from Industries i where i.Id = p.Industry2id) as Industry2,
(select i.Name from Industries i where i.Id = p.Industry3id) as Industry3,
(select i.Name from Industries i where i.Id = p.Industry4id) as Industry4,
(select i.Name from Industries i where i.Id = p.Industry5id) as Industry5,
(select n.Name from NewJobTypes n where n.Id = p.Desiredempid1) as Jobtype1,
(select n.Name from NewJobTypes n where n.Id = p.Desiredempid2) as Jobtype2,
(select n.Name from NewJobTypes n where n.Id = p.Desiredempid3) as Jobtype3,
(select n.Name from NewJobTypes n where n.Id = p.Desiredempid4) as Jobtype4,
(select n.Name from NewJobTypes n where n.Id = p.Desiredempid5) as Jobtype5,
(select l.Name from Languages l where l.Id = p.Language1) as Languagename1,
(select l.Name from Languages l where l.Id = p.Language2) as Languagename2,
(select l.Name from Languages l where l.Id = p.Language3) as Languagename3,
(select l.Name from Languages l where l.Id = p.Language4) as Languagename4,
(select l.Name from Languages l where l.Id = p.Language5) as Languagename5,
(select m.Name from MilitaryExperienceTypes m where m.Id = p.Militaryid) as military,
(select s.Name from SecurityClearances s where s.Id = p.Securityid) as security,
(select njs.Name from NewJobStatus njs where njs.Id = p.NewJobStatus) as workstatus,
--rw.RecentWage as Salary,
(select a.RecentJobTitle from AllRecentJobTitles a where a.Uniqueid = p.id) as position,
p.InputDate,
p.SalaryMinID,
p.SalaryMaxID,
p.HomePhone,
p.CellPhone,
p.WorkPhone,
p.PrimaryEmailAddress,
p.SecondaryEmailAddress,
w.Company1,
isnull(p.FirstName,'''') + '' '' + isnull(p.MiddleName + '' '','''') + isnull(p.LastName,'''') as UserName,
ROW_NUMBER() OVER (ORDER BY p.id DESC) AS RowNumber
from profiles p
left outer join AllCompanies w on w.UniqueId=p.id
--left outer join AllRecentWages rw on rw.UniqueId=p.id
left outer join AllRecentJobTitles arjt on arjt.UniqueId=p.id
--left outer join AllDocuments dc on dc.UniqueId=p.id
--left outer join workexperiences w on w.ProfileId=p.id
--from profiles p inner join AllNewDesiredEmploymentType w on w.UniqueId=p.id
where p.allowrecruiters=1 '
if (@Date <> 0)
Begin
--set @Query = @Query + ' and LastActivityDate <''' + cast(@Date as varchar(20)) + ''''
set @Query = @Query + ' and LastActivityDate >''' + cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20)) + ''''
End
if (@Position <> '')
Begin
if (@Phrase = 1)
Begin
set @Query = @Query + ' and contains(arjt.RecentJobTitle,''' + @Position + ''') '
end
else if (@Phrase = 2)
Begin
set @Query = @Query + ' and contains(arjt.RecentJobTitle,''' + @Position + ''') '
end
else if (@Phrase = 3)
Begin
set @Query = @Query + ' and contains(arjt.RecentJobTitle,''' + @Position + ''') '
end
else if (@Phrase = 4)
Begin
set @Query = @Query + ' and arjt.RecentJobTitle = ''' + cast(@Position as varchar(100)) + ''''
end
End
--if (@Salarytype <> '')
--Begin
--set @Query = @Query + ' and rw.RWageFreq = ''' + cast(@Salarytype as varchar(10)) + ''''
--end
if (@Company1 <> '')
Begin
set @Query = @Query + ' and contains(w.Company1,''' + @Company1 + ''') '
end
set @Query = @Query + '),
Resumes AS
(
SELECT p.id,
p.UserName,
P.InputDate,
substring(cast(InputDate as varchar(20)),1,12) as newdate,
isnull((select c.[Name] + '', '' from cities c where c.id=p.cityid),'''') +
isnull((select r.[abbreviatedName] from regions r where r.id=p.regionid),'''') as Location,
experience,
TotalYears,
case when len(p.Position) < 30 then p.Position
else
substring(p.Position,1,30) + ''...'' end
as Position,
Position as Title,
degree,
military,
security,
workstatus,
p.Militaryid,
p.Securityid,
p.NewJobStatus,
Company1,
--RWageFreq,
cast((select s.value from salaries s where p.SalaryMaxID = s.id) as int) as MaxSalary,
--salary,
Industry1,
Industry2,
Industry3,
Industry4,
Industry5,
Jobtype1,
Jobtype2,
Jobtype3,
Jobtype4,
Jobtype5,
Language1,
Language2,
Language3,
Language4,
Language5,
Languagename1,
Languagename2,
p.HomePhone,
p.CellPhone,
p.WorkPhone,
p.PrimaryEmailAddress,
p.SecondaryEmailAddress,
ROW_NUMBER() OVER (ORDER BY p.id DESC) AS RowNumber
FROM Res p
WHERE 1=1 '
if (@Stateid <>0)
Begin
set @Query = @Query + ' and p.regionid =' + cast(@Stateid as varchar(10))
End
if (@CityId <>0)
Begin
set @Query = @Query + ' and p.cityid =' + cast(@CityId as varchar(10))
End
if (@Industry1id <> 0)
Begin
set @Query = @Query + ' and p.Industry1id =' + cast(@Industry1id as varchar(10))
End
if (@Industry2id <> 0)
Begin
set @Query = @Query + ' and p.Industry2id =' + cast(@Industry2id as varchar(10))
End
if (@Industry3id <> 0)
Begin
set @Query = @Query + ' and p.Industry3id =' + cast(@Industry3id as varchar(10))
End
if (@Industry4id <> 0)
Begin
set @Query = @Query + ' and p.Industry4id =' + cast(@Industry4id as varchar(10))
End
if (@Industry5id <> 0)
Begin
set @Query = @Query + ' and p.Industry5id =' + cast(@Industry5id as varchar(10))
End
if (@Jobtype1id <> 0)
Begin
set @Query = @Query + ' and p.Desiredempid1 =' + cast(@Jobtype1id as varchar(10))
End
if (@Jobtype2id <> 0)
Begin
set @Query = @Query + ' and p.Desiredempid2 =' + cast(@Jobtype2id as varchar(10))
End
if (@Jobtype3id <> 0)
Begin
set @Query = @Query + ' and p.Desiredempid3 =' + cast(@Jobtype3id as varchar(10))
End
if (@Jobtype4id <> 0)
Begin
set @Query = @Query + ' and p.Desiredempid4 =' + cast(@Jobtype4id as varchar(10))
End
if (@Jobtype5id <> 0)
Begin
set @Query = @Query + ' and p.Desiredempid5 =' + cast(@Jobtype5id as varchar(10))
End
if (@Language1 <> 0)
Begin
set @Query = @Query + ' and p.Language1 =' + cast(@Language1 as varchar(10))
End
if (@Language2 <> 0)
Begin
set @Query = @Query + ' and p.Language2 =' + cast(@Language2 as varchar(10))
End
if (@Language3 <> 0)
Begin
set @Query = @Query + ' and p.Language3 =' + cast(@Language3 as varchar(10))
End
if (@Language4 <> 0)
Begin
set @Query = @Query + ' and p.Language4 =' + cast(@Language4 as varchar(10))
End
if (@Language5 <> 0)
Begin
set @Query = @Query + ' and p.Language5 =' + cast(@Language5 as varchar(10))
End
if (@Experience <> 0)
Begin
set @Query = @Query + ' and TotalYears =' + cast(@Experience as varchar(10))
End
if (@MilitaryExp <> 0)
Begin
set @Query = @Query + ' and p.Militaryid =' + cast(@MilitaryExp as varchar(10))
End
if (@Security <> 0)
Begin
set @Query = @Query + ' and p.Securityid =' + cast(@Security as varchar(10))
End
if (@WorkStatus <> 0)
Begin
set @Query = @Query + ' and p.NewJobStatus =' + cast(@WorkStatus as varchar(10))
End
if (@Degree <> 0)
Begin
set @Query = @Query + ' and p.Degree1id =' + cast(@Degree as varchar(10))
End
if (@Degree <> 0)
Begin
set @Query = @Query + ' and p.Degree2id =' + cast(@Degree as varchar(10))
End
if (@Degree <> 0)
Begin
set @Query = @Query + ' and p.Degree3id =' + cast(@Degree as varchar(10))
End
if (@Date <> 0)
Begin
--set @Query = @Query + ' and LastActivityDate <''' + cast(@Date as varchar(20)) + ''''
set @Query = @Query + ' and p.InputDate >''' + cast(((select dateadd(d,@Date,GETDATE()))) as varchar(20)) + ''''
--set @Query = @Query + ' and p.InputDate >= p.IanDate '
End
if (@ExpectedSalary <> 0)
Begin
set @Query = @Query + ' and p.SalaryMinID >= ' + cast(@ExpectedSalary as varchar(10))
End
if (@ExpectedMaxSalary <> 0)
Begin
set @Query = @Query + ' and p.SalaryMaxID <= ' + cast(@ExpectedMaxSalary as varchar(10))
End
set @Query = @Query + ')
select
id,
--UserName,
--case
--when len(salary) < 1 then ''n/a''
--else
--salary
--end as salary,
case
when len(degree) < 1 then ''n/a''
else
degree
end as degree,
case
when username like ''%email%'' then ''n/a''
else
username
end as username,
case
when len(primaryemailaddress) < 1 then ''n/a''
else
primaryemailaddress
end as primaryemailaddress,
InputDate,
newdate,
Location,
Experience,
--Position,
case
when len(position) between 1 and 60 then position
when len(position) < 1 then ''n/a''
else
substring(position,1,60) + ''...'' end
as position,
case
when len(title) between 1 and 60 then title
when len(title) < 1 then ''n/a''
else
substring(title,1,60) + ''...'' end
as title,
MaxSalary,
Languagename1,
Languagename2,
HomePhone,
CellPhone,
WorkPhone,
military,
security,
workstatus,
Company1,
--PrimaryEmailAddress,
SecondaryEmailAddress,
RowNumber,
(select max(rownumber) from Resumes) as TotalCount
from Resumes
WHERE RowNumber BETWEEN ' + cast(@fromRec as varchar(10)) + ' AND ' + cast(@toRec as varchar(10)) +
'
ORDER BY RowNumber ASC
--order by LastActivityDate DESC
End
'
exec (@Query)
May 25, 2012 at 2:21 pm
Gullimeel (5/25/2012)
Gullimeel (5/25/2012)
--------------------------------------------------------------------------------
Paul.
In the attached screenshot you will see that the estimated IO cost is 185.2 and the estimated CPU cost is around 4.8. The total subtree cost should then be around 190.15, but it shows 0.38708, which is roughly 1/500th of the combined estimated CPU and IO cost.
Why is this happening? It is happening for other operators as well.
How do you come up with 190.15? Is this the same algorithm that the Query Optimizer uses to calculate costs?
Usually it is sum of IO cost + cpu cost
So it isn't necessarily the same algorithm used by the Query Optimizer, correct?
May 25, 2012 at 2:23 pm
SQL Kiwi (5/25/2012)
isuckatsql (5/25/2012)
Actual Execution plan does not suggest anything to improve the query 🙁
Several parallelism operators in the parallel plan have a warning symbol to show you that exchange buffers were spilled to tempdb, probably to resolve an intra-query parallelism deadlock. The plan also shows that your query used 260 worker threads at DOP 64, despite only processing a relatively small number of rows. There is also a warning for a conversion that will affect cardinality estimation (and therefore, plan choice). Estimated row counts are very different from actual row counts in a number of places. Example shown below:
You can improve this query by applying normal tuning techniques. Provide useful indexes (you have many scans) and ensure row count estimates are ballpark-correct compared with actual counts. You might find it easiest to split this complex query into smaller parts, using one or more temporary tables to hold small intermediate result sets. This will give the query optimizer better information and reduce the chance of incorrect estimates resulting in a poor plan choice. It will also make the query more maintainable. You should also consider significantly reducing the degree of parallelism for this query (if not for the server as a whole).
Dang Paul! :w00t:
I *really* wish that you would write a book with all of this stuff in it. Every time you post one of these I am just dumbstruck with how much you can coherently read from a query plan, and how much more you can infer from it.:Wow:
-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
Viewing 15 posts - 1 through 15 (of 29 total)