May 8, 2006 at 4:15 am
One of our applications hangs for 8 seconds on a particularly business-sensitive page. I've been able to correlate that page to a few similarly named tables containing 700,000 rows. There's been no DBA here since 2001 - oh yeah - so I immediately thought, "fragmentation".
I've run DBCC SHOWCONTIG on the 10 largest tables - which included those above - and then DBCC INDEXDEFRAG against all the tables in the database. To be honest, I hadn't kept the baselines, but I have reason to believe that INDEXDEFRAG wasn't effective on the table below, which is by far the worst of the 10.
Here is the post-INDEXDEFRAG result for it - the table name has been changed to protect the thieving swine who sold us this system:
DBCC SHOWCONTIG scanning 'BookPayLink' table...
Table: 'BookPayLink' (178099675); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 1624
- Extents Scanned..............................: 381
- Extent Switches..............................: 380
- Avg. Pages per Extent........................: 4.3
- Scan Density [Best Count:Actual Count].......: 53.28% [203:381]
- Extent Scan Fragmentation ...................: 99.48%
- Avg. Bytes Free per Page.....................: 1385.4
- Avg. Page Density (full).....................: 82.88%
This table contains nearly 700,000 rows, and clearly the most suspicious reading is the scan density, with the Avg Page density coming a distant second.
The table has 2 columns and 3 non-clustered indexes, of which one is a composite. I don't know whether this table is the sole source or even a source of the performance woes, so all suggestions welcome.
The way I see it, there are a few ways forward:
1) DBCC DBREINDEX - I don't know if this is more effective than INDEXDEFRAG?
2) Performance Monitor analysis;
3) Index tuning;
4) Add a clustered index to the table.
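For what it's worth, here's a rough sketch of options 1 and 4 in T-SQL. The column names are assumptions - I don't know the real schema. One thing worth noting from the SHOWCONTIG output above: "index ID: 0" means the table is a heap, and INDEXDEFRAG only reorganizes indexes, so it can't do anything about fragmentation in the heap pages themselves.

```sql
-- Option 4: adding a clustered index physically rebuilds the table,
-- which removes the heap fragmentation SHOWCONTIG is reporting.
-- (Column names below are placeholders - substitute the real ones.)
CREATE CLUSTERED INDEX IX_BookPayLink_Clustered
    ON dbo.BookPayLink (BookID, PayID);

-- Option 1: DBREINDEX drops and rebuilds the indexes from scratch.
-- It takes the table offline while it runs, but it is more thorough
-- than INDEXDEFRAG, which only shuffles existing pages in place.
-- '' = all indexes on the table; 90 = fillfactor.
DBCC DBREINDEX ('dbo.BookPayLink', '', 90);
```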
Your thoughts, good people?
Jaybee.
May 8, 2006 at 5:04 am
Hi,
Sometimes I've found that INDEXDEFRAG and the like didn't have the desired result.
As the problem is only with a few tables, why not consider doing a SELECT INTO a new table, dropping the old one, renaming the new table, and rebuilding the indexes?
That will get rid of the fragmentation. After that, set up a few maintenance jobs (best is to put them in a maintenance plan).
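The steps above might be scripted roughly like this - all object and column names here are placeholders, and you'd want to do this during a maintenance window since the table is unavailable mid-way through:

```sql
-- Copy the rows into a fresh, unfragmented table.
SELECT *
INTO dbo.BookPayLink_New
FROM dbo.BookPayLink;

-- Drop the fragmented original.
DROP TABLE dbo.BookPayLink;

-- Rename the copy to take its place.
EXEC sp_rename 'dbo.BookPayLink_New', 'BookPayLink';

-- Recreate the three non-clustered indexes (definitions assumed -
-- script out the real ones before dropping the old table).
CREATE INDEX IX_BookPayLink_Book ON dbo.BookPayLink (BookID);
CREATE INDEX IX_BookPayLink_Pay  ON dbo.BookPayLink (PayID);
CREATE INDEX IX_BookPayLink_Comp ON dbo.BookPayLink (BookID, PayID);
```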
Hope that helps.
Regards,
Holger
May 8, 2006 at 6:23 am
I'd first have a look at the query execution plans.
Compare the estimated plan against the actual plan to see whether the row estimates differ much; use DBCC SHOW_STATISTICS to check, because the statistics might just need updating.
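For example (the query and index name below are placeholders for whatever the slow page actually runs):

```sql
-- Show the estimated plan without executing the query:
SET SHOWPLAN_TEXT ON;
GO
SELECT * FROM dbo.BookPayLink;  -- substitute the real query
GO
SET SHOWPLAN_TEXT OFF;
GO

-- Inspect the statistics behind one of the non-clustered indexes.
-- Check the "Updated" date and "Rows Sampled" in the output:
DBCC SHOW_STATISTICS ('dbo.BookPayLink', IX_BookPayLink_Book);

-- If they're stale, refresh with a full scan:
UPDATE STATISTICS dbo.BookPayLink WITH FULLSCAN;
```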
May 9, 2006 at 11:16 am
Here's my 'short list' for tuning:
Round I
DBCC UPDATEUSAGE
UPDATE STATISTICS (with FULL scan) for all tables
exec sp_recompile for all tables
exec sp_refreshview for all views
Round II
DBCC DBREINDEX
UPDATE STATISTICS (with FULL scan) for all tables
exec sp_recompile for all tables
exec sp_refreshview for all views
Round III
Profiler
Query Hints
You're in Round II right now ... maybe you need to take a step back
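Rounds I and II above can be run as a short script. The sketch below leans on sp_MSforeachtable, which is undocumented but widely used; the sp_refreshview pass would need a cursor over the view names, which I've left as a comment:

```sql
-- Round I: fix allocation and row-count inaccuracies in the current DB,
-- refresh statistics, and force plan recompiles.
DBCC UPDATEUSAGE (0);  -- 0 = current database
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';
EXEC sp_MSforeachtable 'EXEC sp_recompile ''?''';
-- ...then EXEC sp_refreshview for each view (cursor omitted here).

-- Round II: full index rebuilds, then the same stats/recompile pass.
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')';
EXEC sp_MSforeachtable 'UPDATE STATISTICS ? WITH FULLSCAN';
EXEC sp_MSforeachtable 'EXEC sp_recompile ''?''';
```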
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."