October 20, 2010 at 9:18 am
Here's my thought: if they are having unacceptable performance accessing a 13M row table then there is something seriously wrong here - and it isn't likely to be index related. Have you done a wait stats and file I/O stall analysis? Are they on a 32-bit box that isn't using all available memory (just had a client with that!)? Lots of other questions.
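For anyone following along, here is a sketch of the kind of wait stats and file I/O stall analysis being suggested, using the standard DMVs (the list of benign wait types to exclude is illustrative, not exhaustive):

```sql
-- Top waits since last restart, excluding some common benign wait types
SELECT TOP (10)
       wait_type,
       wait_time_ms,
       signal_wait_time_ms,
       waiting_tasks_count
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'BROKER_TASK_STOP', 'SQLTRACE_BUFFER_FLUSH')
ORDER BY wait_time_ms DESC;

-- Cumulative I/O stalls per database file, worst first
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.num_of_reads,
       vfs.io_stall_write_ms,
       vfs.num_of_writes
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```

Both views are cumulative since the last service restart, so for a meaningful picture you would normally sample them twice and diff the results.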
Another thought is that improperly chosen and set up partitioning can be HORRIBLE for performance. For example, if they aren't using the partitioning key in queries they are hitting ALL partitions ALL the time, each of which has its own index b-tree, and thus you are getting many more reads than may otherwise be required if all data were in a single table. I would like to see how things go with a non-partitioned table...
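To illustrate the point about the partitioning key (the table, column, and partition function names below are hypothetical, just for the sketch):

```sql
-- Assume dbo.Orders is partitioned on OrderDate via function pf_OrderDate.

-- Filtering on the partitioning key allows partition elimination:
-- the optimizer touches only the partition(s) covering October 2010.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE OrderDate >= '20101001' AND OrderDate < '20101101';

-- A predicate that ignores the partitioning key must probe the
-- b-tree of EVERY partition - many more reads than a single table.
SELECT OrderID, Amount
FROM dbo.Orders
WHERE CustomerID = 42;

-- $PARTITION shows which partition a given key value maps to.
SELECT $PARTITION.pf_OrderDate('20101015') AS partition_number;
```

Checking the actual execution plan's seek/scan operators for the `Partitioned` attribute and actual partition count is the quickest way to confirm whether elimination is happening.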
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
October 20, 2010 at 9:58 am
TheSQLGuru (10/20/2010)
Here's my thought: if they are having unacceptable performance accessing a 13M row table then there is something seriously wrong here - and it isn't likely to be index related.
Heh, Kevin, "unacceptable" is not an absolute measure. Something perfectly acceptable for me could be totally unacceptable for the end users... 🙂
Seriously, there's much more behind it. This is not the biggest table in the database: some other tables hold more than 100M rows, but they are not accessed as frequently as this one.
The main problem with this table is its design, which was (very badly) modified over time to accommodate some changes in the business. They ended up with a denormalized table and LOTS of dirty tricks in the code to "hide" the monster from the app side.
Obviously, since working on this side would give the best results, I can't do it. 😛
Have you done a wait stats and file I/O stall analysis? Are they on a 32-bit box that isn't using all available memory (just had a client with that!)? Lots of other questions.
No, it's a 64-bit box. Memory is indeed an issue with this DB. There's low PLE and high pagefile usage.
The SAN is working decently, even if I would have expected better performance from this class of storage (it's IBM's top-class storage: 13 ms Avg. Disk Sec/Read is a bit too high). With this kind of performance, high free space scans and page splits/sec tend to be an issue.
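The counters mentioned here (PLE, page splits, free list stalls) can be pulled straight from the DMVs rather than PerfMon; a minimal sketch:

```sql
-- Buffer Manager / Access Methods counters relevant to memory
-- pressure and page-split activity.
SELECT RTRIM(object_name)  AS object_name,
       RTRIM(counter_name) AS counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Page life expectancy',
                       'Page Splits/sec',
                       'Free list stalls/sec');
```

Note that the "/sec" counters are exposed as cumulative values here, so you need to sample twice and divide the delta by the elapsed seconds to get an actual rate.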
However, before upgrading hardware, I'll check the SQL and find the worst queries. When the DB engine is not busy grinding through offensive queries, everything else can respond much quicker.
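A common starting point for finding those worst queries is `sys.dm_exec_query_stats`; a sketch ranking cached statements by average logical reads:

```sql
-- Top cached statements by average logical reads since plan compilation.
SELECT TOP (10)
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       qs.total_worker_time  / qs.execution_count AS avg_cpu_microsec,
       SUBSTRING(st.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE WHEN qs.statement_end_offset = -1
                       THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY avg_logical_reads DESC;
```

Sorting by `total_worker_time` or `total_elapsed_time` instead gives different "worst" lists; it's worth checking all three.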
Another thought is that improperly chosen and set up partitioning can be HORRIBLE for performance. For example, if they aren't using the partitioning key in queries they are hitting ALL partitions ALL the time, each of which has its own index b-tree, and thus you are getting many more reads than may otherwise be required if all data were in a single table. I would like to see how things go with a non-partitioned table...
Agreed. I'll check that.
Thanks for your help.
-- Gianluca Sartori
October 20, 2010 at 10:25 am
Gianluca Sartori (10/20/2010)
If you know how to extract a single query's plan from a procedure plan I'll be glad to post it.
There's a way. Two, indeed.
Obtaining Statement-Level Query Plans
SQL Server – Find the most expensive operations in Execution plans
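Beyond the two linked articles, the statement-level plans inside a procedure's cached plan can also be extracted directly with the statement offsets from `sys.dm_exec_query_stats` (the procedure name below is a hypothetical placeholder):

```sql
-- One row per cached statement of the procedure, each with its own plan.
SELECT qs.statement_start_offset,
       qs.statement_end_offset,
       qs.execution_count,
       qp.query_plan          -- XML showplan for just this statement
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_text_query_plan(qs.plan_handle,
                                        qs.statement_start_offset,
                                        qs.statement_end_offset) AS qp
WHERE st.objectid = OBJECT_ID('dbo.MyProcedure');  -- hypothetical name
```

Passing the start/end offsets to `sys.dm_exec_text_query_plan` is what narrows the output from the whole procedure's plan down to a single statement's plan.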
-- Gianluca Sartori