October 15, 2010 at 3:42 am
Gianluca, looks like you have it covered. You may need to experiment to get the best fill factor for your busy tables.
Ninja, perhaps I should have come up with a better title for my topic 🙂
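To illustrate the fill-factor experimentation George suggests, here is a minimal sketch. The table and index names are hypothetical, purely for illustration; a lower fill factor leaves free space on each page, which can reduce page splits on write-heavy tables at the cost of more pages to read.

```sql
-- Hypothetical table/index names, for illustration only.
-- Rebuild with an 80% fill factor to leave room for inserts
-- and reduce page splits on a busy table.
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD WITH (FILLFACTOR = 80);

-- Check the fill factor that is now in effect:
SELECT name, fill_factor
FROM sys.indexes
WHERE object_id = OBJECT_ID('dbo.Orders');
```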
---------------------------------------------------------------------
October 15, 2010 at 3:54 am
Thanks George.
I think I'm on the right track, but sometimes, since I set up this maintenance plan, the optimizer decides to pick a very odd plan.
Odd plan + 75 mln rows = pain.
I'll experiment with fill factors. Thanks for the hint!
-- Gianluca Sartori
October 15, 2010 at 7:31 am
Amazingly good maintenance stuff here: ola.hallengren.com. You can have it update statistics on indexes that are not defragged at all or are only reorg'd.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
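A hedged sketch of the behaviour Kevin describes, using the `IndexOptimize` procedure from ola.hallengren.com. Parameter names can differ between releases, so check the documentation for the version you install before relying on this:

```sql
-- Sketch based on Ola Hallengren's IndexOptimize procedure
-- (ola.hallengren.com); verify parameters against your installed version.
EXECUTE dbo.IndexOptimize
    @Databases = 'USER_DATABASES',
    @FragmentationLow = NULL,              -- leave low-fragmentation indexes alone
    @FragmentationMedium = 'INDEX_REORGANIZE',
    @FragmentationHigh = 'INDEX_REBUILD_ONLINE,INDEX_REBUILD_OFFLINE',
    @UpdateStatistics = 'ALL',             -- update stats even on indexes that were
    @OnlyModifiedStatistics = 'Y';         -- only reorganized, or not touched at all
```

A rebuild updates that index's statistics as a side effect, but a reorganize does not, which is why the statistics options above matter.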
October 18, 2010 at 1:33 am
TheSQLGuru (10/15/2010)
amazingly good maintenance stuff here: ola.hallengren.com. You can have it update statistics on indexes that are not defragged at all or are only reorg'd.
Thanks for the tip, Kevin, this is exactly what I'm using and I find it very useful.
-- Gianluca Sartori
October 18, 2010 at 1:42 am
Have you considered writing a view and exposing that to Business Objects instead of letting it use the base tables?
How many tables are used in the joins for the query?
October 18, 2010 at 5:41 am
David, thanks, but don't go there. The part of the query that performs badly only accesses one table, but it is UNIONed to another query which accesses 19 tables (then there is another union).
How would the view help performance unless it was indexed? It still has to access the same base tables.
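For the record, an indexed view is the only kind that would change the performance picture, because it materializes its result set on disk. A minimal sketch with hypothetical names follows; note that SQL Server does not allow UNION inside an indexed view, so at best it could cover one branch of a query like this one:

```sql
-- Hypothetical names. Indexed views require SCHEMABINDING, two-part
-- object names, and COUNT_BIG(*) when aggregating; UNION is not allowed,
-- so this could only materialize one branch of the problem query.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*)    AS OrderCount,
       SUM(OrderTotal) AS TotalAmount
FROM dbo.Orders
GROUP BY CustomerID;
GO
-- The unique clustered index is what actually materializes the view:
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
ON dbo.vOrderTotals (CustomerID);
```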
---------------------------------------------------------------------
October 18, 2010 at 1:05 pm
If you are getting locking/blocking activity on a table and are not bothered by dirty reads then using the READUNCOMMITTED hint in the join may alleviate this.
If you are doing a UNION, is either side naturally distinct from the other? If so, think of UNION ALL instead.
19 joins and multiple unions probably causes the query engine to give up optimising the query.
Can any of the data be pre-calculated?
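A sketch of the two suggestions above, with hypothetical table names. READUNCOMMITTED permits dirty reads, so it is only appropriate when approximate results are acceptable; UNION ALL skips the implicit duplicate-elimination step that plain UNION adds:

```sql
-- Hypothetical tables, for illustration only.
-- READUNCOMMITTED avoids taking shared locks but allows dirty reads.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o WITH (READUNCOMMITTED)
JOIN dbo.Customers AS c WITH (READUNCOMMITTED)
    ON c.CustomerID = o.CustomerID
UNION ALL   -- no DISTINCT step, unlike plain UNION
SELECT a.OrderID, a.CustomerName
FROM dbo.ArchivedOrders AS a WITH (READUNCOMMITTED);
```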
October 18, 2010 at 1:26 pm
David.Poole (10/18/2010)
If you are getting locking/blocking activity on a table and are not bothered by dirty reads then using the READUNCOMMITTED hint in the join may alleviate this. If you are doing a union are either sides of the union naturally distinct? If so think of UNION ALL instead.
There was no blocking going on. I tried UNION ALL and the query returned exactly the same results, so yes, naturally distinct; it only knocked two seconds off the run time though. 🙁
19 joins and multiple unions probably causes the query engine to give up optimising the query.
Can any of the data be pre-calculated?
If the optimiser didn't give up, it certainly may as well have done. Good point.
Pre-calculation, I don't know. I was wondering if they could use divide and conquer with the query: store partial results in a temp table and simplify the joins? How much influence can you have over a query produced by Business Objects?
I'll pass these ideas on to the devs, David.
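The divide-and-conquer idea above can be sketched as follows. All table and column names are hypothetical, and the 19-table join is elided; the point is simply to materialize the expensive branch once, then union the cheap branch against it:

```sql
-- Hypothetical names; the real 19-table join is elided.
-- Step 1: materialize the expensive branch into a temp table.
SELECT o.OrderID, o.Amount
INTO #PartialResult
FROM dbo.Orders AS o
JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
-- ...remaining joins would go here...

-- Step 2: union the simple branch against the pre-computed results,
-- giving the optimizer two small problems instead of one huge one.
SELECT OrderID, Amount FROM #PartialResult
UNION ALL
SELECT OrderID, Amount FROM dbo.ArchivedOrders;

DROP TABLE #PartialResult;
```

Whether this is feasible depends on how much of the generated SQL Business Objects lets you override.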
---------------------------------------------------------------------