May 13, 2014 at 7:11 am
Grant Fritchey (5/13/2014)
I'm being blunt and honest here, I see very little opportunity to tune this query. You should reassess the approach.
Agreed.
I've had several major tuning wins by removing temp tables, but if the temp table has to stay, hardware can't be added, and the process can't be changed, I suspect the performance can't easily be improved.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 13, 2014 at 8:00 am
I don't know if it will be correct in this case or not,
but try taking the filtered records of the
first query [select from table1 where PartitionID between 1200 and 1000000]
into a temp table, and then join that temp table to the second query's table [select * from table2].
This approach worked in one case where our view had 1 million rows and the join was not performing well against that view:
we took the view's data into a temp table, filtered by the parameter, and then used the join, which gave us some performance benefit.
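If it helps to picture it, here is a minimal sketch of that suggestion. table1, table2 and PartitionID come from the post above; the join column (ID) and the index on it are assumptions for illustration only.

-- Materialize only the filtered rows into a temp table
SELECT *
INTO   #Filtered
FROM   table1
WHERE  PartitionID BETWEEN 1200 AND 1000000;

-- Indexing the temp table on the assumed join column can help the subsequent join
CREATE CLUSTERED INDEX IX_Filtered_ID ON #Filtered (ID);

-- Join the smaller temp table to the second table instead of the original view
SELECT t2.*
FROM   #Filtered AS f
INNER JOIN table2 AS t2
        ON t2.ID = f.ID;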
May 13, 2014 at 8:07 am
Megha P (5/13/2014)
I don't know if it will be correct in this case or not, but try taking the filtered records of the
first query [select from table1 where PartitionID between 1200 and 1000000]
into a temp table, and then join that temp table to the second query's table [select * from table2].
This approach worked in one case where our view had 1 million rows and the join was not performing well against that view:
we took the view's data into a temp table, filtered by the parameter, and then used the join, which gave us some performance benefit.
That's probably because eliminating the JOIN to the view (joining to views is generally not considered a good practice) resulted in a better execution plan, speeding things up.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 13, 2014 at 8:15 am
Grant Fritchey (5/13/2014)
That's probably because eliminating the JOIN to the view, not considered a good practice, resulted in a better execution plan, speeding things up.
OK, so that means views should be avoided in joins for performance. Thanks 🙂
May 13, 2014 at 8:27 am
Hi Megha,
I have tried that as well, but the result is the same. I am going to share the execution plan; modifying the table and field names will take some time, and I cannot apply an indexed view as of now. I will share my findings tomorrow.
May 13, 2014 at 9:00 am
Please find the attached query plan. :cool:
May 13, 2014 at 9:30 am
Is the temp table created on the fly (SELECT INTO) or is it predefined using CREATE TABLE and then populated with an INSERT INTO statement? If the latter, what is the definition of this table, and does it have predefined indexes?
Also, not sure why you can post the DDL for the tables and the SQL (sort of) for the query but can't post the actual execution plan?
Seeing that would greatly improve our ability to help you tune the query.
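To clarify the first question, the two patterns look roughly like this (table and column names here are placeholders, not the actual objects):

-- Pattern 1: created on the fly; columns are inferred and there are no indexes
SELECT col1, col2
INTO   #Work
FROM   dbo.SourceTable
WHERE  PartitionID BETWEEN 1200 AND 1000000;

-- Pattern 2: predefined with CREATE TABLE (optionally indexed), then populated
CREATE TABLE #Work2
(
    col1 INT NOT NULL,
    col2 INT NULL
);
CREATE CLUSTERED INDEX IX_Work2_col1 ON #Work2 (col1);

INSERT INTO #Work2 (col1, col2)
SELECT col1, col2
FROM   dbo.SourceTable
WHERE  PartitionID BETWEEN 1200 AND 1000000;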
May 13, 2014 at 10:23 am
Lynn Pettis (5/13/2014)
Is the temp table created on the fly (SELECT INTO) or is it predefined using CREATE TABLE and then populated with an INSERT INTO statement? If the latter, what is the definition of this table, and does it have predefined indexes? Also, not sure why you can post the DDL for the tables and the SQL (sort of) for the query but can't post the actual execution plan?
Seeing that would greatly improve our ability to help you tune the query.
Two odd things you will see on examination of the plan:
It's serial
There are loads of implicit conversions to/from float.
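Purely as an illustration (the column names and types here are hypothetical, not taken from the attached plan), an implicit float conversion typically looks like this, and can be made explicit:

-- Implicit: DecimalCol is decimal, FloatCol is float, so SQL Server adds a
-- CONVERT_IMPLICIT(float, DecimalCol) on every row behind the scenes
SELECT a.FloatCol + a.DecimalCol AS Total
FROM   table1 AS a;

-- Explicit: state the conversion once, so the intent (and the cost) is visible in the plan
SELECT a.FloatCol + CAST(a.DecimalCol AS float) AS Total
FROM   table1 AS a;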
For better assistance in answering your questions, please read this.
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop / Jeff Moden
May 13, 2014 at 10:24 am
And based on that, I still don't see any way to speed this up. Oh, and I know you obfuscated the query, but we can see the objects and the row outputs, so I don't think you're helping yourself much.
You're moving a million rows that have to be joined together. It's expensive. You need to reassess your approach.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 13, 2014 at 10:30 am
ChrisM@home (5/13/2014)
Lynn Pettis (5/13/2014)
Is the temp table created on the fly (SELECT INTO) or is it predefined using CREATE TABLE and then populated with an INSERT INTO statement? If the latter, what is the definition of this table, and does it have predefined indexes? Also, not sure why you can post the DDL for the tables and the SQL (sort of) for the query but can't post the actual execution plan?
Seeing that would greatly improve our ability to help you tune the query.
Two odd things you will see on examination of the plan:
It's serial
There are loads of implicit conversions to/from float.
The cost estimate is fairly high at 93.9, so parallelism might be a help here. Good point.
The conversions aren't on columns used in joins or anything else, so it's just some processing overhead. The main cost really is putting 900k rows together with 868k rows. Heck, the estimates & actuals are pretty close. I think we're just talking about a lot of data overpowering the system.
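If it's useful, here is a quick way to check the instance settings involved; nothing from the thread's objects is assumed, only standard sp_configure options.

-- 'cost threshold for parallelism' is an advanced option, so expose it first
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'cost threshold for parallelism';
-- If run_value is at or above the plan's estimated cost (~93.9 here),
-- the optimizer won't even consider a parallel plan for this query.

EXEC sp_configure 'max degree of parallelism';
-- A run_value of 1 forces serial plans regardless of cost.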
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 14, 2014 at 2:24 am
Today I created an indexed view, but it also did not help; the query still takes around 30 seconds. So I have abandoned this approach and am still waiting for any revolutionary idea to bring the query time down. However, I have started refactoring the code as suggested by Gail Shaw. Thanks to all of you for your valuable comments.
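For reference, the general indexed-view pattern I tried looks roughly like this (view, table, and column names are placeholders rather than the real objects, and ID is assumed to be unique):

CREATE VIEW dbo.vFiltered
WITH SCHEMABINDING
AS
SELECT t1.ID, t1.PartitionID, t1.SomeCol
FROM   dbo.table1 AS t1
WHERE  t1.PartitionID BETWEEN 1200 AND 1000000;
GO
-- Materializing the view requires a unique clustered index
CREATE UNIQUE CLUSTERED INDEX IX_vFiltered ON dbo.vFiltered (ID);
GO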
Thanks
Ashok