January 4, 2007 at 8:14 pm
Hello.
Have an 85GB database made up of a bunch of tables. Only 3 of the tables have any real size, each with 20-35GB of allocated space. According to SHOWCONTIG, these tables are only using 2 percent of the space on each page. All the stats I can produce show that all the data in this database could fit in 11GB and that I have way too many pages in the 3 tables mentioned (see stats below).
Some of these tables have clustered indexes and some have only nonclustered indexes. This database has been growing in leaps and bounds (8GB a month) since early last year to get to 85GB. The E: hard drive holding the data and log files (the log uses only 400MB) has only 28GB of free space left, so I have that going for me too. To boot, whoever set up this server with SQL for the application using this database installed Personal Edition of SQL 2000 rather than Standard or Enterprise.
Tried running SHRINKFILE to get the size down to 40GB, but it only took off a few GBs. I used the compress option to move pages to the front of the file and also specified a maximum free-space percentage of 15-20%. But again, no change to the overall size, and the SHOWCONTIG output looked the same.
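For reference, the shrink attempts looked roughly like this (the logical file name AHD_Data is illustrative; SELECT name, filename FROM sysfiles shows the real one):

USE AHD
GO
-- Try to shrink the data file to a 40GB target (target size is given in MB):
DBCC SHRINKFILE (AHD_Data, 40960)
-- Or shrink the whole database, leaving 15 percent free space in the files:
DBCC SHRINKDATABASE (AHD, 15)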
Checked the fill factor on the indexes for these large tables, and they are set at 90%.
I ran SHOWCONTIG on the three indexes in one of these large tables and got the results below, where the numbers for each index looked about the same:
DBCC SHOWCONTIG scanning 'act_log' table...
Table: 'act_log' (52195236); index ID: 3, database ID: 7
LEAF level scan performed.
- Pages Scanned................................: 1730
- Extents Scanned..............................: 266
- Extent Switches..............................: 271
- Avg. Pages per Extent........................: 6.5
- Scan Density [Best Count:Actual Count].......: 79.78% [217:272]
- Logical Scan Fragmentation ..................: 97.86%
- Extent Scan Fragmentation ...................: 97.74%
- Avg. Bytes Free per Page.....................: 863.9
- Avg. Page Density (full).....................: 89.33%
I did SHOWCONTIG on the table itself and got these results:
DBCC SHOWCONTIG scanning 'act_log' table...
Table: 'act_log' (52195236); index ID: 0, database ID: 7
TABLE level scan performed.
- Pages Scanned................................: 281041
- Extents Scanned..............................: 280925
- Extent Switches..............................: 280924
- Avg. Pages per Extent........................: 1.0
- Scan Density [Best Count:Actual Count].......: 12.51% [35131:280925]
- Extent Scan Fragmentation ...................: 99.59%
- Avg. Bytes Free per Page.....................: 7914.3
- Avg. Page Density (full).....................: 2.22%
The values on the table itself look bad: 1 page per extent, scan density of 12%, avg. free space per page of 7914 bytes, and avg. page density of 2% would all account for a table using way too much space for the amount of data actually there.
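For anyone reproducing this, the per-index and table numbers above came from separate runs; a single call can report the heap and all its indexes as one result set:

-- TABLERESULTS returns rows instead of messages; ALL_INDEXES covers every index:
DBCC SHOWCONTIG ('act_log') WITH TABLERESULTS, ALL_INDEXES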
Running sp_spaceused on the database and one of the large tables within it shows:
Database    Database_size    Unallocated_space
AHD         86,766.75 MB     431.20 MB

Reserved         Data             Index_size    Unused
88,273,464 KB    11,440,992 KB    336,624 KB    76,495,848 KB

Table Name    Rows       Reserved         Data            Index_size    Unused
act_log       450,120    36,084,184 KB    4,505,280 KB    68,344 KB     31,510,560 KB
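One caveat: on SQL 2000 the space counts in sysindexes can drift out of date, so it may be worth refreshing them before trusting sp_spaceused:

-- Correct the stored space counts first (can take a while on big tables):
DBCC UPDATEUSAGE ('AHD')
EXEC sp_spaceused @updateusage = 'TRUE'    -- database totals
EXEC sp_spaceused 'act_log', 'TRUE'        -- a single table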
Am I fighting the fact that this is Personal Edition, that I do not have much room left on the drive for SQL to do its thing, or am I just not running the right commands?
Thanks (Sorry for all the detail)
January 4, 2007 at 11:44 pm
Run DBCC DBREINDEX first so that you can remove the fragmentation, and then run the SHRINKFILE command. If any of the tables contains text columns, you may not be able to shrink the file...
http://www.mssqlcity.com/Articles/Adm/index_fragmentation.htm
1. DBCC DBREINDEX()
2. To shrink the data/log file, use the DBCC SHRINKFILE command...
Check BOL for more details...
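A minimal sketch of those two steps (table and logical file names are examples only):

-- '' means rebuild all indexes on the table; 90 is the fill factor:
DBCC DBREINDEX ('act_log', '', 90)
-- Then shrink the data file; the target size is in MB:
DBCC SHRINKFILE (AHD_Data, 40960)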
DBCC SHRINKFILE and SHRINKDATABASE commands may not work because of sparsely populated text, ntext, or image columns
http://support.microsoft.com/kb/324432
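To check whether a table has any text/ntext/image columns (act_log used as an example):

SELECT c.name, t.name AS type_name
FROM syscolumns c
JOIN systypes t ON c.xtype = t.xtype
WHERE c.id = OBJECT_ID('act_log')
AND t.name IN ('text', 'ntext', 'image')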
MohammedU
Microsoft SQL Server MVP
January 5, 2007 at 8:44 pm
Thanks Mohammed. Will do some defragging/re-indexing of the indexes this weekend (a slow time for production). Will also create a temp clustered index on an appropriate field(s) to get the data re-sequenced and hopefully packed tighter, then delete the temp clustered index.
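Something along these lines (index and column names here are placeholders):

-- Build a throwaway clustered index to re-sequence and pack the heap:
CREATE CLUSTERED INDEX ix_tmp_pack ON act_log (id)
-- ...then drop it once the data is rewritten (SQL 2000 syntax is table.index):
DROP INDEX act_log.ix_tmp_pack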
By the way, I created a test database on the same instance and copied a couple of these larger tables over. One table was 5GB in the original production database, and the copied table in the test database only took 22MB!! This table did not have a clustered index. The other table I copied was 28GB with a clustered index, and the copied version only took up 1.2GB.
Hoping not to have to make more drastic moves (copy individual tables to a temp database and then copy them back to the original db) and that the DEFRAG/RE-INDEX and building temporary clustered indexes will pack these tables tight. Then I will shrink the database.
Stay tuned.
Thanks
January 7, 2007 at 1:16 pm
Mohammed,
Thanks for your input !
Yesterday, when usage was low for this application, there were about 9 tables of any size (2-35GB), and all but one had no clustered index. I found that if I created a clustered index on the primary key field (all were an id field) with a 95% fill factor, I got anywhere from a 50% to 90% space gain. Further defrags and reindexes did not seem to help beyond that. Then I deleted the newly created clustered index.
After working with all 9 tables, the database was still sitting at 88GB, but it had about 30GB of free space. Then I did a SHRINKFILE with the compress option. After 2 hours, the database was at 53GB with an additional 20GB free. The shrinking and compressing seemed to create still more free space but had not released that extra free space back to the OS.
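Next time I may try the truncate-only form, which just releases the already-empty space at the end of the file to the OS without moving any pages, so it is fast (logical file name is a placeholder):

DBCC SHRINKFILE (AHD_Data, TRUNCATEONLY)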
Planned on doing some more shrinking next weekend, but did not want to wait another two hours yesterday.
I am thinking I should re-create those clustered indexes and leave them out there. Pretty much all the tables I was working with have an id field as their key, where rows are continually added and the id keeps incrementing. Little data is deleted once it is added at this point, so I do not have to worry much about later page splits from inserting keys in the middle.
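So the permanent version would look something like this, fully packed since new rows always land at the end (names are placeholders):

CREATE CLUSTERED INDEX ix_act_log_id ON act_log (id) WITH FILLFACTOR = 100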
Definitely opportunities for further improvements, but getting this space knocked down will take the pressure off.
January 8, 2007 at 5:29 am
OK, first point: if your table is fragmented at the leaf level and is a heap (no clustered index), then no matter how many times you rebuild indexes, you will not optimise the data structure.
Your table act_log has loads of wasted space and is probably a heap. Either create a clustered index to regain the space and then remove it, or add a clustered index for good. To be honest, it's generally best practice to have clustered indexes on tables unless they're very small.
I hate shrinkfile ops; they can seriously degrade performance, and with hard disk prices so cheap, just increase storage instead. Once you've done your shrinking, you must rebuild all your indexes in an attempt to make your data storage contiguous.
I'd suggest you read up on how indexes work so you understand the significance of what you are attempting. Fill factor will bloat your data storage and degrade performance. If you haven't researched which indexes and structures require a fill factor, then take it off and regain your space; then monitor, and add fill factor to tune.
Ultimately, the way to regain space is to move all the data to a new database. If you have text columns, then there are some other issues you may face.
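A rough sketch of that move, assuming a plain copy is enough (indexes, constraints, and defaults all have to be recreated on the new table afterwards):

SELECT * INTO NewDB.dbo.act_log FROM AHD.dbo.act_log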
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
January 9, 2007 at 6:28 pm
Colin,
I believe you are pretty much right on all your points. Thanks