March 12, 2003 at 8:18 am
Please could someone explain how I should interpret the results of the above.
Here are the results I get for a particular table in my DB, is this good or bad?
- Pages Scanned................................: 63782
- Extents Scanned..............................: 7973
- Extent Switches..............................: 7972
- Avg. Pages per Extent........................: 8.0
- Scan Density [Best Count:Actual Count].......: 100.00% [7973:7973]
- Logical Scan Fragmentation ..................: 0.00%
- Extent Scan Fragmentation ...................: 1.14%
- Avg. Bytes Free per Page.....................: 523.9
- Avg. Page Density (full).....................: 93.53%
Also, how often should I be creating/updating statistics?
TIA,
Carl.
March 12, 2003 at 8:29 am
It would depend on what that table is used for. If it's a pure lookup table, it looks great. If it's a heavy transactional table where you sequentially add transactions from one or two sources, you're still okay. If your primary key and clustered index is something like an SSN and you have heavy inserts, you might want to change your fill factor to avoid page splits.
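For example (just a sketch - SQL 2000 syntax, and the table/index names are placeholders), you could rebuild the clustered index with a lower fill factor to leave some free space on each page:
-- Sketch: rebuild a hypothetical clustered index with a 90% fill factor
-- so pages have room for new rows and fewer page splits occur under heavy inserts.
-- 'Customers' and 'PK_Customers' are placeholder names.
DBCC DBREINDEX ('Customers', 'PK_Customers', 90)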
John Zacharkan
March 12, 2003 at 9:29 am
I think the problem is in here:
Logical Scan Fragmentation ..................: 0.00%
Never saw a % so low. I guess it can't be 100% fragmented.
March 12, 2003 at 10:04 am
That basically means all the pages are contiguous in the system.
Steve Jones
March 12, 2003 at 10:57 am
So is that good or bad?
March 12, 2003 at 1:15 pm
Good
quote:
So is that good or bad?
John Zacharkan
March 12, 2003 at 1:41 pm
Try
DBCC SHOWCONTIG (30)
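If you're not sure what id to pass, something like this will get it (the table name is a placeholder):
-- Sketch: look up the table's object id, then pass it to DBCC SHOWCONTIG.
-- 'MyTable' is a placeholder name.
SELECT OBJECT_ID('MyTable')
DBCC SHOWCONTIG (123456789)  -- substitute the id returned above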
quote:
I think the problem is in here:
Logical Scan Fragmentation ..................: 0.00%
Never saw a % so low. I guess it can't be 100% fragmented.
John Zacharkan
March 13, 2003 at 3:30 am
Carl,
Overall, that's as good as you can possibly get. You have a very healthy table without any problems, based on that DBCC output.
You've got 7973 extents in your table, which means the minimum number of switches you can have when scanning the extents in a non-extent fragmented table is 7972 - exactly what you have got. For this metric, lower is better, but you can only go as low as (number of extents - 1).
You also have an average of 8.0 pages per extent, which is the best you can get, as there are 8 pages in an extent. So for this metric, higher is better, up to a maximum of 8.0 - lower means more fragmentation.
Your scan density of 100% is superb - 100% means that the table's extents are not fragmented, i.e. they are totally contiguous.
It's the ratio of the ideal number of extent changes if everything were perfectly contiguous (the Best Count, 7973 here) to the actual number of extent switches + 1 (the Actual Count, 7972 + 1 = 7973), expressed as a percentage.
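As a quick check against your own numbers:
-- Scan Density = Best Count / Actual Count, as a percentage.
-- Best Count   = ideal number of extent changes if everything were contiguous = 7973
-- Actual Count = extent switches + 1 = 7972 + 1 = 7973
SELECT CAST(7973 AS float) / (7972 + 1) * 100  -- returns 100.0, your scan density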
Also, your logical scan fragmentation is 0%, which again is as good as you get (lower is better) - this figure tells you how out of order the pages on which your data is held are. Higher means more fragmented - generally I start to defragment a large table's indexes if this creeps above 25-30%.
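When a big index does creep over that threshold, something along these lines will sort it out (SQL 2000 syntax, placeholder names - INDEXDEFRAG works online, DBREINDEX is an offline rebuild):
-- Sketch: two ways to remove logical fragmentation in SQL 2000 ('MyDB', 'MyTable' and 'IX_MyTable' are placeholders).
DBCC INDEXDEFRAG ('MyDB', 'MyTable', 'IX_MyTable')  -- online, defragments the index in place
DBCC DBREINDEX ('MyTable', 'IX_MyTable')            -- offline rebuild of the index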
Finally, the pages on which your data is held are on average 93% full (higher is better, up to 100%, which you'll likely never see, so don't try too hard!), so you're not creating unnecessary amounts of I/O from your disk by bringing in half-empty pages. Anything above 90% is good, and the closer to 100% full the better, unless lots of inserts are being applied to your table, in which case you may see page splitting, which causes excessive I/O.
You can check this by firing up Performance Monitor and monitoring SQL Server : Access Methods - Page Splits/sec. Lower is better. Only if you see a large number of page splits occurring on a near-continuous basis should you consider changing the fill factor for a table. The best advice is to leave it set to the default value UNLESS you see a lot of page splitting going on.
The problem is that if you tinker with decreasing a table's fill factor, you automatically increase the amount of I/O from disk, as more pages will have to be brought into memory to read the same amount of data, because more of each page will be empty. On a big table, this can be a lot of extra I/O going on.
An ideal value for Page Splits/sec is zero - the lower the better.
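If you'd rather watch the counter from Query Analyzer than Performance Monitor, the same figure is exposed in master..sysperfinfo on SQL 2000 - just bear in mind the per-second counters held there are running totals since the server started, so sample it twice and compare the difference:
-- Page Splits/sec as exposed in sysperfinfo (a cumulative total, not a true per-second rate).
SELECT cntr_value
FROM master..sysperfinfo
WHERE object_name LIKE '%Access Methods%'
AND counter_name = 'Page Splits/sec'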
With regard to updating stats, it depends on how many changes are being applied to your tables. Stats only need to be updated if they get out of date, so for a static table (or a very small table), it doesn't matter too much. For a large table such as this, it's worth updating them on a regular basis for best performance, as my guess is that you'll be inserting a lot of new data into it.
We've got a busy helpdesk system with 1,500 people accessing it, with loads of daily updates, so it gets its stats updated each night, though weekly would probably be sufficient. You might find on a smaller system that running a scheduled update once per week or month is more than enough.
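The job itself needn't be anything fancy - for example (the table name is a placeholder; sp_updatestats covers every table in the current database if you'd rather not list them individually):
-- Refresh statistics for one table...
UPDATE STATISTICS MyTable
-- ...or for everything in the current database.
EXEC sp_updatestats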
If you want to find out more, I'd suggest you first get your head into Books Online and read up on DBCC SHOWCONTIG (and note the differences between 7.0 and 2000 - 7.0 takes object ids as parameters, 2000 takes object ids -or- object names, check the syntax for the version you're using).
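For example (placeholder table name):
-- SQL 7.0 style - pass the object id:
DECLARE @id int
SELECT @id = OBJECT_ID('MyTable')
DBCC SHOWCONTIG (@id)
-- SQL 2000 also accepts the name directly:
DBCC SHOWCONTIG ('MyTable')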
If you want to go further still, Kalen Delaney's book 'Inside SQL Server 2000' is a good reference.
Happy reading 🙂
Jon Reade.
Edited by - jonreade on 03/13/2003 07:36:49 AM
Jon