November 13, 2008 at 5:08 am
I am confused. I have some huge tables on a 24/7 system, so we can't take anything offline to reindex; instead we defrag over the course of a week
using DBCC INDEXDEFRAG (@db_name, @table_name, @index_name).
To spread the load across the week, we defrag index by index, as in the sketch below.
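For context, a rough sketch of the nightly step of that job (the database and index names here are placeholders for our real ones; the table name is the one in the SHOWCONTIG output further down):

-- One index per night, so the work is spread across the week.
-- @db_name and @index_name values below are placeholders, not our real object names.
DECLARE @db_name sysname, @table_name sysname, @index_name sysname;
SELECT  @db_name    = 'OurDatabase',
        @table_name = 'CASENOTE_VOLUMES_REQUESTED',
        @index_name = 'PK_CASENOTE_VOLUMES_REQUESTED';
-- Online, in-place defrag of the leaf level of that one index.
DBCC INDEXDEFRAG (@db_name, @table_name, @index_name);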
This is how the table looks now. Why is my Scan Density so awful, with so many extent switches?
Many thanks
DBCC SHOWCONTIG scanning 'CASENOTE_VOLUMES_REQUESTED' table...
Table: 'CASENOTE_VOLUMES_REQUESTED' (1275151588); index ID: 1, database ID: 8
TABLE level scan performed.
- Pages Scanned................................: 685935
- Extents Scanned..............................: 86156
- Extent Switches..............................: 390460
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 21.96% [85742:390461]
- Logical Scan Fragmentation ..................: 0.75%
- Extent Scan Fragmentation ...................: 35.14%
- Avg. Bytes Free per Page.....................: 1162.4
- Avg. Page Density (full).....................: 85.64%
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
November 13, 2008 at 6:34 am
Your logical scan fragmentation is low, which is good. The average page density (how full each page is) is also high, which is good too.
How about updating the statistics with a full scan? That should get you what you want.
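Something like this, for example (it refreshes every statistic on the table from a full scan rather than a sample; the table name is taken from your SHOWCONTIG output):

-- Rebuild all statistics on the table using a full scan instead of sampling.
UPDATE STATISTICS CASENOTE_VOLUMES_REQUESTED WITH FULLSCAN;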
Thanks!!
The_SQL_DBA
MCTS
"Quality is never an accident; it is always the result of high intention, sincere effort, intelligent direction and skillful execution; it represents the wise choice of many alternatives."
November 13, 2008 at 7:54 am
Updating stats won't change the scan density. Scan density measures extent fragmentation: it's the ideal number of extents (Best Count, roughly pages scanned / 8) divided by the number of extent changes actually made during the scan (Actual Count), so here 85742 / 390461 is about 22%. Statistics just contain the data distribution in the columns.
Try a rebuild instead of a defrag. I don't think a defrag can fix extent fragmentation, which is what drives the scan density down.
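If you're on SQL 2005 or later (the post doesn't say which version), something along these lines; ONLINE = ON needs Enterprise Edition but keeps the table available during the rebuild, which matters on a 24/7 system. On SQL 2000 the equivalent is DBCC DBREINDEX, which is offline.

-- SQL 2005+: rebuild every index on the table, rewriting the pages into new,
-- contiguous extents (the extent fragmentation that a defrag leaves behind).
ALTER INDEX ALL ON CASENOTE_VOLUMES_REQUESTED
REBUILD WITH (ONLINE = ON);

-- SQL 2000 equivalent (offline); 0 keeps the existing fill factor.
-- DBCC DBREINDEX ('CASENOTE_VOLUMES_REQUESTED', '', 0);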
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 14, 2008 at 7:06 am
I have to agree with Gail (GilaMonster) here. All too often, companies operating a 24x7 website buy into the idea that you can just keep everything running indefinitely because it "appears to work". That thinking is flawed: even the operating system eventually needs to be rebooted, and while clustering can get you past that kind of trouble, managing your data is an entirely different bailiwick. At some point there is simply no alternative but to take some downtime and clean things up. Just as regular disk defragmentation is a good thing for a PC, so are updating stats and rebuilding indexes for SQL Server databases. Sitting back and claiming a need for 24x7 operation is the functional equivalent of intentionally ignoring the symptoms of a health problem until it becomes so acute that a hospital ER visit is required.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)