April 23, 2009 at 10:44 am
I have a huge database (almost 2TB) on a server with about 65GB free on disk. We are replacing the hardware with several servers, but for this question assume one production server (to house only part of the existing data) and one archive server (to host the oldest data).
I'm currently working to free up some space on the existing server. I scripted one extremely large table ("results") to the archive server, then copied a portion of the records (about 200 million) and subsequently deleted those records from the current system.
The results of "findbigtables" show that table with rows = 2147483647, reservedkb = 824197760 KB, datakb = 790618896 KB, indexkb = 4689448 KB, unusedkb = 28889416 KB.
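For reference, sp_spaceused should return the same numbers for a single table (the @updateusage flag corrects stale allocation counts):

-- Cross-check the table's space usage; @updateusage fixes stale counts
EXEC sp_spaceused @objname = 'dbo.results', @updateusage = 'true'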
When I take a look at the data file this table resides on using:
SELECT name AS NameOfFile,
       size/128.0 AS TotalSizeInMB,
       CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS SpaceUsedInMB,
       size/128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int)/128.0 AS AvailableSpaceInMB
FROM dbo.sysfiles
I see TotalSizeInMB = 1109120.000000, SpaceUsedInMB = 1108193.687500, and AvailableSpaceInMB = 926.312500.
My goal is to "release" the unusedkb in the "results" table back to the data file, so I can shrink it, leaving more space on disk for other operations. I'll have to repeat this process for numerous tables.
Would running DBCC DBREINDEX against the clustered index on "results" accomplish this goal? I have about 14GB allocated to the transaction log and am taking tlog backups hourly, so space there isn't really a problem, but space on disk is... will DBCC DBREINDEX grow the data file, or will it use some of the existing "unusedkb" already allocated to the results table?
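For clarity, the command I have in mind is something like this (the index name below is just a placeholder for the real clustered index name):

-- Rebuild the clustered index on results; 0 = keep the original fillfactor
DBCC DBREINDEX ('dbo.results', 'IX_results_clustered', 0)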
Thanks in advance!
Warren
April 23, 2009 at 2:19 pm
The reindex will need space to complete, but it ought to get you a set of contiguous extents. The issue is that you don't know where those will be.
What you can do is shrink the files down to close to the data size, then expand the database by at least the size of your largest CI. You can then reindex a table at a time to remove fragmentation.
Shrinking the data files doesn't require space at the end. It moves pages down. Shrinking the log requires free space at the end of the file.
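In rough T-SQL, that sequence looks something like this (the database name, logical file name, and sizes are placeholders, not recommendations):

-- 1. Shrink the data file down close to the used size (target is in MB)
DBCC SHRINKFILE (MyDB_Data, 1109000)
-- 2. Grow the file back by at least the size of the largest clustered index
ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_Data, SIZE = 1200000MB)
-- 3. Rebuild one table at a time into the reclaimed contiguous space
DBCC DBREINDEX ('dbo.results', '', 0)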
April 23, 2009 at 3:06 pm
Steve Jones - Editor (4/23/2009)
The reindex will need space to complete, but it ought to get you a set of contiguous extents. The issue is that you don't know where those will be. What you can do is shrink the files down to close to the data size, then expand the database by at least the size of your largest CI. You can then reindex a table at a time to remove fragmentation.
Shrinking the data files doesn't require space at the end. It moves pages down. Shrinking the log requires free space at the end of the file.
Just remember that shrinking a data file introduces index fragmentation as it moves the pages down. After shrinking and then growing, you have to rebuild ALL indexes to remove the fragmentation, and on a system this large that is going to take some time.
Unless I am mis-reading these numbers, you have a single table that is using ~790GB of storage? Rebuilding the index on that monster is not only going to take a long time, but you don't have enough free space to rebuild it contiguously. Again, I could be mis-reading this, but it does look like the table has reserved ~800GB of storage?
At the end, the most you could end up recovering from this is ~28GB (the unused space: 28,889,416 KB works out to roughly 27.5GB).
Please, someone tell me I am wrong here and reading the numbers incorrectly?
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 23, 2009 at 5:44 pm
You're reading the numbers right... the entire database is almost 2TB, and this single table is almost 800GB... It currently contains about 8 years' worth of data. My task is to archive off all but the 3 most recent years, then migrate to new hardware...
My approach has been to delete from the largest tables first, but they are still adding 2GB per day... quite a challenge in front of me.
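For what it's worth, the deletes are easiest to manage in batches so each transaction stays small between the hourly log backups. A sketch (the date column and cutoff are placeholders; DELETE TOP requires SQL Server 2005 or later):

-- Delete archived rows in small batches to limit log growth per transaction
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM dbo.results
    WHERE result_date < '20060101' -- keep only the 3 most recent years
    IF @@ROWCOUNT = 0 BREAK
END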
April 24, 2009 at 5:44 am
My thought is that a shrink (i.e. DBCC SHRINKFILE) won't work... but please correct me if I'm wrong.
The data file shows only a few hundred MB "free"... so the "unusedkb" isn't "free" in the data file. If I attempt to shrink the file, will that effectively get the "unusedkb" added back to the "free" MB that SQL Server sees in the data file, or is reindexing the only way to do that?
I'm hesitant to reindex since there is so little space left on the data drive... is dropping and recreating the index a better option?
DBCC INDEXDEFRAG will not release the "unusedkb" in the tables back to the data file, right?
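For reference, the two shrink variants I'm weighing (the logical file name is a placeholder):

-- Release only the free space at the end of the file, without moving pages
DBCC SHRINKFILE (MyDB_Data, TRUNCATEONLY)
-- Or move pages toward the front of the file to reach a target size (in MB)
DBCC SHRINKFILE (MyDB_Data, 1100000)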
April 24, 2009 at 9:17 am
I am not sure - I do know that performing the shrink will force the pages to be re-allocated so there is free space at the end, which is going to cause index fragmentation.
I would not be so concerned about shrinking the file itself. I would be more concerned with making sure we had enough space in the data file to continue operations. Leave the data file the same size; just free up space to be reused. And that will probably need to be done with an index rebuild - either drop and create or DBCC DBREINDEX.
I would be hesitant about performing a drop and recreate - especially on a clustered index, since dropping the clustered index forces all of the nonclustered indexes to be rebuilt as well.
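If it helps, a minimal sketch of the check-then-rebuild approach, one table at a time (on 2005+, sys.dm_db_index_physical_stats is the newer alternative to SHOWCONTIG):

-- Check fragmentation first
DBCC SHOWCONTIG ('dbo.results') WITH TABLERESULTS
-- Rebuild all indexes on the table; '' = all indexes, 0 = original fillfactor
DBCC DBREINDEX ('dbo.results', '', 0)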
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs