March 24, 2011 at 5:57 am
I am trying to optimize a stored procedure. It contains one SELECT statement that takes 82% of the overall time. Attached are the SELECT query and the execution plan for it. I can see clustered index seeks and index seeks in the plan, so I'm a little confused about how to tune it. It takes around 2 minutes to return 10,000 rows.
March 24, 2011 at 6:13 am
Things that may help include updating statistics with FULLSCAN (depending on how large the table is, of course), and perhaps breaking the stored procedure up into smaller steps. Those are some of the things I would look into for the time being. Hopefully, someone more knowledgeable will chime in.
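For the statistics update, something along these lines, run against each of the large tables in the query (the table name here is just one of the ones from your plan, as an example):

```sql
-- Rebuild all statistics on the table by scanning every row,
-- rather than the default sample:
UPDATE STATISTICS dbo.T_VM_GLACIER_BASE_FEEDS_FB WITH FULLSCAN;
```

On a very large table a FULLSCAN update can take a while, so you may want to run it off-hours.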
Hope that helps a little.
March 24, 2011 at 7:15 am
You do have some disparity between estimated and actual rows, so you might have an issue with the age of your statistics. That clustered index seek on 300,000 rows against this table, [OGREDEV1].[dbo].[T_VM_GLACIER_BASE_FEEDS_FB].[CI_VM_GLACIER_BASE_FEEDS_FB], is getting executed twice because of parallelism.
In short, you're moving a heck of a lot of data. What's this for?
With an estimated overall cost >70, I don't think you can mess with the cost threshold for parallelism to see if a serial execution would be better; you'll need to use MAXDOP. I'd try it. That said, considering how much data you're moving, parallelism makes sense.
Best things I can suggest at the moment, update your stats, test it with MAXDOP set to 1, try reducing the amount of data you're moving.
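To test the serial plan, you only need to append a query hint to the slow SELECT; the shape below is a sketch, with the column list, joins, and WHERE clause standing in for the ones in your procedure:

```sql
SELECT <your column list>      -- existing columns
FROM <your joined tables>      -- existing joins
WHERE TaskId = @TaskId         -- existing filter
OPTION (MAXDOP 1);             -- force a serial plan for this query only
```

This only affects the one statement, so you can compare timings side by side without touching the server-wide parallelism settings.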
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 24, 2011 at 8:03 am
Before posting this thread, I did check the table and index health, and all looks good. The fragmentation level is around 10% for every participating table.
March 24, 2011 at 8:32 am
Yes, but index fragmentation and statistics being up to date are two very different things. You can have very low fragmentation and statistics that are quite old. You're dealing with largish data volumes and it takes more than 20% data changes before stats get updated automatically.
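You can check the age of the statistics directly rather than inferring it from fragmentation. A quick way (the table name is just an example from your plan):

```sql
-- Last update time for every statistic on the table:
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.T_VM_GLACIER_BASE_FEEDS_FB');
```

If those dates are old relative to how fast the table changes, a manual update is worth doing regardless of fragmentation.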
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 25, 2011 at 12:45 am
Is there a way to write this query more efficiently?
March 25, 2011 at 6:29 am
Grant Fritchey (3/24/2011)
Best things I can suggest at the moment, update your stats, test it with MAXDOP set to 1, try reducing the amount of data you're moving.
Have you tried Grant's suggestions?
In response to your question, I am still learning the art of query performance tuning (i.e., reading Grant's book and researching online), so if I had to rewrite the query I would need a test database (and data) to play with. Sorry I can't be much help in that regard.
March 28, 2011 at 12:46 am
I already checked the date the statistics were last updated, and it shows as the current date, so there are no issues with old statistics at all. Let me know if anything can be done on the query side.
March 28, 2011 at 10:33 am
Did you try the good old Divide'n'Conquer approach?
As far as I can see, each subquery is limited by at least the TaskId = @TaskId condition, so you'd end up with smaller intermediate tables. This should also help you get rid of the NOLOCK hint (I strongly recommend avoiding it because of the risk of dirty reads).
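A minimal sketch of that staging approach (the table name comes from the plan posted above; the column list is a placeholder for whatever the subquery actually selects):

```sql
-- Stage the TaskId-filtered rows from each large table first:
SELECT <needed columns>
INTO #BaseRows
FROM dbo.T_VM_GLACIER_BASE_FEEDS_FB
WHERE TaskId = @TaskId;

-- Repeat for the other big tables, then join the small #temp tables
-- in the final SELECT instead of the full base tables.
```

The optimizer then works with the small, already-filtered sets, and the temp tables get their own statistics, which often produces better estimates than deeply nested subqueries against the base tables.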
March 28, 2011 at 7:34 pm
LutzM (3/28/2011)
Did you try the good old Divide'n'Conquer approach?
+1 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.
March 28, 2011 at 8:45 pm
sql_butterfly (3/25/2011)
Is there a way to write this query more efficiently ?
I had some success reducing time at a previous job using the HASH join hint. Unfortunately I can't remember exactly what I did back then, but the reduction was stupendous.
From BOL:
USE AdventureWorks;
GO
SELECT p.Name, pr.ProductReviewID
FROM Production.Product p
LEFT OUTER HASH JOIN Production.ProductReview pr
ON p.ProductID = pr.ProductID
ORDER BY ProductReviewID DESC;
Steve.
March 29, 2011 at 2:14 am
Not working. As you can see in the execution plan, the tables within the subquery with alias "e" take around 60-70% of the time, as these are the huge tables. I tried the HASH hint as well, but in vain. I already did divide and conquer by adding the TaskId condition to get more selectivity [more than that is not possible in my case]. Should it take that much time anyway? It's taking around 2 minutes to return 10,000 records.
March 29, 2011 at 8:30 am
Here is the result of SET STATISTICS IO ON:
Table '#AAMaxRunVersion____________________________________________________________________________________________________000000026E3A'. Scan count 3005, logical reads 6033, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_TERMOPTION'. Scan count 10000, logical reads 42110, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_SWAPSIDE_FEEDS_FB'. Scan count 1, logical reads 21832, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_BASE_FEEDS_FB'. Scan count 390, logical reads 15350, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#BaseMaxRunVersion__________________________________________________________________________________________________000000026E39'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_PROVISION'. Scan count 1, logical reads 38, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
March 29, 2011 at 9:19 am
sql_butterfly (3/29/2011)
Here is the result of SET STATISTICS IO ON:
Table '#AAMaxRunVersion____________________________________________________________________________________________________000000026E3A'. Scan count 3005, logical reads 6033, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_TERMOPTION'. Scan count 10000, logical reads 42110, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_SWAPSIDE_FEEDS_FB'. Scan count 1, logical reads 21832, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_BASE_FEEDS_FB'. Scan count 390, logical reads 15350, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table '#BaseMaxRunVersion__________________________________________________________________________________________________000000026E39'. Scan count 1, logical reads 5, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'T_VM_GLACIER_PROVISION'. Scan count 1, logical reads 38, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Your query may or may not be optimal, but that really isn't much IO in total. Therefore I will hazard a guess that either a) your IO subsystem totally sucks (as most of my clients' IO systems do), b) you are getting blocking that is keeping your query from running full steam to completion, or c) both.
Do an IO stall analysis while this is running and also use sp_whoisactive to watch for blocking.
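For the IO stall analysis, sample the virtual file stats DMV before and after the run and compare the numbers; something like this:

```sql
-- Cumulative per-file IO stalls since the last restart.
-- Snapshot before and after the query runs and diff the results:
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY vfs.io_stall_read_ms DESC;
```

High stall-per-read averages on the data or tempdb files during the run would point at option a); if the stalls barely move while the query crawls, blocking (option b) becomes the more likely suspect, which sp_whoisactive will show you directly.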
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service