January 8, 2002 at 10:06 am
I have a rather large table (13GB) in an 18GB database that has become fragmented, as the following DBCC SHOWCONTIG output shows:
TABLE level scan performed.
- Pages Scanned................................: 1630793
- Extents Scanned..............................: 205325
- Extent Switches..............................: 1368918
- Avg. Pages per Extent........................: 7.9
- Scan Density [Best Count:Actual Count].......: 14.89% [203850:1368919]
- Logical Scan Fragmentation ..................: 40.79%
- Extent Scan Fragmentation ...................: 0.07%
- Avg. Bytes Free per Page.....................: 2281.5
- Avg. Page Density (full).....................: 71.81%
The best resolution would obviously be to rebuild the index, either by dropping and re-creating it or by using DBCC DBREINDEX. Does SQL Server still need free space of roughly twice the table size within the database to re-build a clustered index? I suppose I could increase the size of the database, then shrink it afterwards... if I had to.
I have done this in the past (on SQL 6.5, where shrinking wasn't an option) by exporting the data in order, clearing the table, then re-loading the data.
I have also noticed that the Logical Scan Fragmentation is the biggest problem - would this be cured if I were to use DBCC INDEXDEFRAG?
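For reference, the two commands I'm weighing look something like this - the database, table, and index names below are made up for illustration:

-- Option 1: full rebuild of the clustered index (needs the free space mentioned above)
DBCC DBREINDEX ('MyDatabase.dbo.MyBigTable', CIX_MyBigTable, 90)
-- Option 2: online defrag of the index leaf level, without the big space requirement
DBCC INDEXDEFRAG (MyDatabase, MyBigTable, CIX_MyBigTable)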
Any suggestions on the best way forward would be much appreciated.
Many thanks,
Nigel.
January 8, 2002 at 11:15 am
Early this morning I added a clustered index to a table with 9 million rows in it. Upon completion I noticed that the database size had grown from about 2.5GB to 4.5GB, so I would say you're going to need a good amount of space. I just checked and the database is down to 3.3GB, so about 1GB of the growth is now gone. We have autoshrink on, and I ran an extra backup on that database.
Robert Marda
Robert W. Marda
Billing and OSS Specialist - SQL Programmer
MCL Systems
January 8, 2002 at 11:27 am
I think you still need the space, but you can do the reindex while the db is online in SQL 2000. The routines have been rewritten to reduce the contention.
Steve Jones
January 8, 2002 at 11:47 am
I keep forgetting that there are major differences between SQL Server 7.0 and SQL Server 2000. The servers I use are SQL Server 7.0 and my previous post describes what happened when I added one clustered index.
Robert Marda
Robert W. Marda
Billing and OSS Specialist - SQL Programmer
MCL Systems
January 8, 2002 at 9:32 pm
I'd bet on the space too. I've got one box where I have to watch the space more than I'd like; doing some of the larger DBs takes some juggling. A lot of it is logging. You might consider doing a full backup, switching to simple recovery to reindex, then switching back and backing up again. Reindexing a table at a time will also let you manage it better if you have to. I've had fair results with INDEXDEFRAG, but you're probably better off to bite the bullet and do the whole thing on this one.
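Something along these lines, on SQL 2000 - the database, table, and path names are just placeholders:

BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_before.bak'
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE   -- keep the reindex from bloating the log
DBCC DBREINDEX ('MyDatabase.dbo.MyBigTable')    -- one table at a time
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_after.bak'  -- restart the log chain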
Andy
January 9, 2002 at 8:22 am
Many thanks for the comments.
I upped the maximum sizes on the database and log files, switched to Simple recovery mode, and kicked off a DBCC DBREINDEX.
All complete in about 30 minutes, with the DB expanded to 30GB.
Shrunk the DB back down to 18GB - took just over an hour.
Finally, I performed a DBCC CHECKDB to sort out the 'sp_spaceused' results - another 20 minutes - then dumped the DB before putting it back into Full recovery mode.
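For anyone following along, the sequence was roughly this (database and table names changed):

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
DBCC DBREINDEX ('MyDatabase.dbo.MyBigTable')   -- about 30 minutes
DBCC SHRINKDATABASE (MyDatabase, 30)           -- just over an hour
DBCC CHECKDB (MyDatabase)                      -- another 20 minutes
ALTER DATABASE MyDatabase SET RECOVERY FULL
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak'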
Painless, compared to SQL 6.5 - marvellous!
January 9, 2002 at 10:55 am
Glad we could help a little!
Andy
January 14, 2002 at 11:56 am
You could further reduce the time by running DBCC on the same or different tables in parallel,
i.e. while DBCC is being performed on one table from one client (like Query Analyzer), you could perform DBCC on another table from a second client. You could also perform DBCC on different indexes of the same table in parallel.
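For example, from two Query Analyzer windows at the same time - table and index names here are invented, and whether the same-table case truly runs in parallel depends on locking:

-- Window 1
DBCC DBREINDEX ('MyDatabase.dbo.TableA')
-- Window 2, running simultaneously
DBCC DBREINDEX ('MyDatabase.dbo.TableB')
-- Or, per the suggestion above, different indexes on the same table, one per window:
DBCC DBREINDEX ('MyDatabase.dbo.TableA', IX_TableA_Col1)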
January 17, 2002 at 7:42 am
Can I re-open the problem?
Further checking has shown that the table is fragmented once more.
I have done a number of little experiments on this and I believe it to be happening during the DBCC SHRINKDATABASE ([DB], 30) command.
Is this likely to re-order the extents in a big way, or have I missed something?
-- TABLE level scan performed.
-- - Pages Scanned................................: 1291230
-- - Extents Scanned..............................: 162189
-- - Extent Switches..............................: 263582
-- - Avg. Pages per Extent........................: 8.0
-- - Scan Density [Best Count:Actual Count].......: 61.23% [161404:263583]
-- - Logical Scan Fragmentation ..................: 99.48%
-- - Extent Scan Fragmentation ...................: 0.05%
-- - Avg. Bytes Free per Page.....................: 84.8
-- - Avg. Page Density (full).....................: 98.95%
From what I can tell, the pages are filled nicely, but scattered across non-contiguous extents.
Compare this to the picture prior to the DBCC SHRINKDATABASE:
-- TABLE level scan performed.
-- - Pages Scanned................................: 1291230
-- - Extents Scanned..............................: 162168
-- - Extent Switches..............................: 162167
-- - Avg. Pages per Extent........................: 8.0
-- - Scan Density [Best Count:Actual Count].......: 99.53% [161404:162168]
-- - Logical Scan Fragmentation ..................: 0.52%
-- - Extent Scan Fragmentation ...................: 0.41%
-- - Avg. Bytes Free per Page.....................: 84.8
-- - Avg. Page Density (full).....................: 98.95%
(nice and tight)
The only other things that I have done since the re-index are a DBCC CHECKDB, an sp_spaceused with @updateusage = 'true', and a DBCC SHRINKDATABASE using the TRUNCATEONLY option. Could any of these be contributors?
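For clarity, the two shrink variants I've tried look like this (database name changed):

-- TRUNCATEONLY: just releases free space from the end of the files, no page movement
DBCC SHRINKDATABASE (MyDatabase, TRUNCATEONLY)
-- Target-percent shrink: moves pages towards the front of the files, which is
-- presumably where my extents are getting re-shuffled
DBCC SHRINKDATABASE (MyDatabase, 30)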
Thanks again,
January 17, 2002 at 9:49 am
I would bet shrinkdb does some rearranging - depends on how you shrink, whether you remove space from the end or do a 'compact' operation.
Andy
January 17, 2002 at 10:23 am
Thanks Andy, I'd kinda worked that out from my findings.
I'm a little frustrated that in order to de-frag the table...
o I'm expanding the database (just for that purpose)
o Then re-indexing the table
o If I try a DBCC SHRINKDATABASE with the TRUNCATEONLY option, it doesn't shrink very much
o But if I do a proper DBCC SHRINKDATABASE, the table is re-fragmented and I'm back to square one!
The data that I am loading comes in roughly in order, but not quite - e.g. job 1, 2, 5, 3, 6, 4, 7.
My current proposal is as follows (a rough sketch follows the list):
o Defragment using either another database or a text file
o Ensure that data is always inserted in order (so that no further fragmentation is created)...
o Split the table into two - one large table that holds data in the exact Clustered Index order
o Load data temporarily into the second table as it is extracted from the source
o Move data over to the large table as and when it is possible to do so without a break in the order
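A rough sketch of the idea - all object names here are invented for illustration:

-- Staging table: rows land here in whatever order they arrive
CREATE TABLE dbo.JobStaging (JobID int NOT NULL PRIMARY KEY, JobData varchar(255) NULL)

-- Main table: the clustered index keeps it in JobID order
CREATE TABLE dbo.JobHistory (JobID int NOT NULL, JobData varchar(255) NULL)
CREATE UNIQUE CLUSTERED INDEX CIX_JobHistory ON dbo.JobHistory (JobID)

-- Periodically move over only the unbroken run of JobIDs, in order, so that
-- inserts always append to the end of the clustered index
DECLARE @last int, @gap int
SELECT @last = ISNULL(MAX(JobID), 0) FROM dbo.JobHistory
-- First staging JobID that sits just after a gap in the sequence
SELECT @gap = MIN(s.JobID)
FROM dbo.JobStaging s
WHERE s.JobID > @last + 1
  AND NOT EXISTS (SELECT * FROM dbo.JobStaging p WHERE p.JobID = s.JobID - 1)

BEGIN TRAN
INSERT dbo.JobHistory (JobID, JobData)
  SELECT JobID, JobData FROM dbo.JobStaging
  WHERE JobID < ISNULL(@gap, 2147483647)
  ORDER BY JobID
DELETE dbo.JobStaging WHERE JobID < ISNULL(@gap, 2147483647)
COMMIT TRAN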
Any further comments appreciated.
Cheers,
Nigel.
January 17, 2002 at 4:49 pm
You might also consider not rebuilding all the indexes in one pass, maybe get SQL to reuse some of that extra space. Probably a good case for not trying to do parallel defrag either.
On the other hand, are you shrinking because you need the space, or just to be neat? If you can afford it, maybe just leave the free space!
Andy
January 18, 2002 at 2:27 am
Andy,
There is only one index on the table (the Clustered one).
I may end up leaving the database at the enlarged size, but the trouble with leaving space, of course, is that someone always feels obliged to fill it!
Thanks again,
January 18, 2002 at 5:15 am
Steve is busy this week - when he gets back we'll have to see if he has any ideas. Does seem awfully bizarre! The problem, not Steve having ideas.
Andy