September 27, 2004 at 12:03 pm
Hello friends,
I know this issue has been talked about a number of times before, but after reading in a number of places (including BOL) and experimenting with a few things, I am still not able to get the results that I need. Also, I want to know a few things that I am still not aware of. I hope you guys will help me.
I have a database whose size right now is around 4 GB. But the actual size is not this big.
1) I wanted to know how I can exactly determine how much space (data) has been used in my current database, how much is left, and how much I can free.
Can it be done by first checking it through sp_spaceused?
I ran sp_spaceused but I wasn't sure what each field means. I have read about it in BOL but am still not sure. Below are the results when I ran sp_spaceused on my database:
a)
database_name   database_size   unallocated space
=============   =============   =================
SalesDb         4786.00 MB      2370.58 MB
b)
reserved        data          index_size    unused
==========      =========     ==========    =======
2471344 KB      868720 KB     1596552 KB    6072 KB
Now, from the above information, what I am judging is that the size of the database (data and log files combined) is around 4+ GB. The space that is reserved (filled with data and cannot be removed) is around 2.4 GB. The space that can be removed, hence reducing the database size, is around 2.3 GB.
If I am wrong, please correct me.
All I want to do is remove the unused space and reduce the size of the database.
I tried running the DBCC SHRINKFILE(SalesDB_Data) command without a target size, as I wasn't sure how much I should shrink, and I read in BOL that if target_size is not specified the file is resized to the default. But when I ran the command, the query batch ran for about 30 minutes; it seemed as if it had gone into a loop, so I had to cancel it. After that, when I checked the space through sp_spaceused, the "reserved" column had increased to around 3.3 GB. However, after I updated the usage through "sp_spaceused @updateusage = 'TRUE'", it showed the earlier figures again.
So right now I am stuck, as I don't know what I should do to take care of this.
Please advise me on what the columns of sp_spaceused indicate. Do I have the right understanding of the columns displayed by sp_spaceused, and what should I do to shrink the database (data file), which is my ultimate goal?
Any help would be appreciated.
Thanks
September 27, 2004 at 12:38 pm
The unallocated space (~2.3 GB) is the amount of space allocated to the data file but not actually used. The reserved figure (~2.4 GB) is the space actually in use by objects: about 0.9 GB is allocated to data (868,720 KB) and 1.6 GB to indexes (1,596,552 KB), so roughly 2.4 GB is actually required for the database. The 2.4 + 2.3 roughly accounts for the 4.8 GB attributed to the database size; the small difference is the log file and rounding.
Since you are moving around up to 2.5 GB on disk, the shrink operation may very well take a long time. You might want to just let it run; 30 minutes is not forever. I've run read-only queries that took hours (very huge, complex, and non-optimized, but hours nonetheless).
As a side note, the more often you shrink your database and allow it to grow again, the more fragmented it is likely to get on the hard disks. Consider allowing it some growth room and shrinking it down to, say, 3 or 3.5 GB.
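For example, something along these lines (just a sketch, using the SalesDB_Data logical file name from your post; target_size is in MB):
USE SalesDb
GO
DBCC SHRINKFILE (SalesDB_Data, 3500)
GO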
September 27, 2004 at 12:50 pm
It looks like you need to relocate the used pages so that you can shrink. If there is fragmentation in the allocated space, DBCC SHRINKFILE without any option just won't work.
Try the "NOTRUNCATE" option to relocate the pages, and then use "TRUNCATEONLY" to shrink the file.
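Against your data file that might look something like this (a sketch using the SalesDB_Data name from your post; the size argument is ignored with these options):
DBCC SHRINKFILE (SalesDB_Data, 1, NOTRUNCATE)   -- move used pages to the front of the file
DBCC SHRINKFILE (SalesDB_Data, 1, TRUNCATEONLY) -- release the free space at the end of the file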
My 2cents
September 28, 2004 at 6:01 am
Notwithstanding the point raised about file-level fragmentation, shrinking databases can cause all manner of issues internally; it may only lead to extra growth, and more fragmentation, when you next rebuild indexes.
Backups are unaffected by free space, so I don't see the problem. I'd be really worried if such a small amount of free space were an issue on my server < grin >
I've found that shrinkfile often works best when the database is in simple recovery model and/or after a checkpoint.
I'd suggest you analyse the fragment count for your mdf file after you run the shrinkfile.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
September 28, 2004 at 9:36 am
Thank you for the reply. So are you suggesting that I change the recovery model to simple, then shrink the data file, and then change back to full? Can you be more clear on this? Also, what do I do to analyze the fragment count? I am sorry if my question is too simple, but if I don't know the answer now, I will never know it in the future.
It would be great if you could explain to me the exact steps that I need to take.
Thanks once again.
September 28, 2004 at 9:45 am
The only minor problem with that is that you can lose the transaction log, if you need it. If you don't need it, just change the recovery model to simple and leave it as it is.
If you don't want to lose the transaction log, try DBCC SHRINKFILE with the "NOTRUNCATE" option. I've tried it on hundreds of databases and it always works. This method has a bad side too: it takes up a lot of resources, so I'd recommend running it during non-business hours.
And I agree with what the others say about not shrinking too much. Plan ahead for your growth rate and shrink only if it is necessary.
September 28, 2004 at 10:01 am
hello Journeyman,
Thanks for the reply.
I do understand the difference between the simple and full recovery models with regard to the transaction log, and I am planning to shrink only when server activity is low.
But I still have doubts: from the statistics I have given, how do I know when I need to shrink the database? I need someone to explain exactly how much space is still free in my database. Is it the unallocated space (2370.58 MB) or the unused size (6072 KB)? And if I have to run the DBCC SHRINKFILE command, what exactly would it be? DBCC SHRINKFILE (SalesDB_Data, 2000)?
September 28, 2004 at 10:02 am
Yes, I should have made clear that changing the recovery model will lose the transaction log - you need to make a full backup before / after such a change.
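A minimal sketch of the whole sequence (using the SalesDb database and SalesDB_Data file name from the first post; the backup paths are only placeholders):
BACKUP DATABASE SalesDb TO DISK = 'D:\Backup\SalesDb_pre_shrink.bak'   -- full backup before the change
GO
ALTER DATABASE SalesDb SET RECOVERY SIMPLE
GO
USE SalesDb
GO
CHECKPOINT
GO
DBCC SHRINKFILE (SalesDB_Data, 3500)   -- target size in MB; leave some growth room
GO
ALTER DATABASE SalesDb SET RECOVERY FULL
GO
BACKUP DATABASE SalesDb TO DISK = 'D:\Backup\SalesDb_post_shrink.bak'  -- restart the log backup chain
GO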
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
September 28, 2004 at 1:55 pm
It is the unallocated space that you will be able to shrink. I use this script to figure out which databases to shrink/expand. It's a very nice one and I got it from this forum; sorry, I didn't keep the name of the author though.
--exec sp_MSForeachdb 'use ? exec sp_spaceused @updateusage = ''TRUE'''
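-- For each database that is online (status & 512 = 0 skips offline databases),
-- capture file sizes from sysfiles, then data-file usage via DBCC SHOWFILESTATS
-- and log usage via DBCC LOGINFO, and report everything from one temp table.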
DECLARE @DBName nvarchar(60)
DECLARE @SQLString nvarchar (2000)
DECLARE c_db CURSOR FOR
SELECT name
FROM master.dbo.sysdatabases
WHERE status&512 = 0
CREATE TABLE #TempForFileStats([Server Name] nvarchar(40),
[Database Name] nvarchar(60),
[File Name] nvarchar(128),
[Usage Type] varchar (6),
[Size (MB)] real,
[Space Used (MB)] real,
[Space Used (%)] real,
[MaxSize (MB)] real,
[Next Allocation (MB)] real,
[Growth Type] varchar (12),
[File Id] smallint,
[Group Id] smallint,
[Physical File] nvarchar (260),
[Date Checked] datetime)
CREATE TABLE #TempForDataFile ([File Id] smallint,
[Group Id] smallint,
[Total Extents] int,
[Used Extents] int,
[File Name] nvarchar(128),
[Physical File] nvarchar(260))
CREATE TABLE #TempForLogFile ([File Id] int,
[Size (Bytes)] int,
[Start Offset] bigint,
[FSeqNo] int,
[Status] int,
[Parity] smallint,
[CreateTime] varchar(40))
OPEN c_db
FETCH NEXT FROM c_db INTO @DBName
WHILE @@FETCH_STATUS = 0
BEGIN
SET @SQLString = 'SELECT @@SERVERNAME as ''ServerName'', ' +
'''' + @DBName + '''' + ' as ''Database'', ' +
' f.name, ' +
' CASE ' +
' WHEN (64 & f.status) = 64 THEN ''Log'' ' +
' ELSE ''Data'' ' +
' END as ''Usage Type'', ' +
' f.size*8/1024.00 as ''Size (MB)'', ' +
' NULL as ''Space Used (MB)'', ' +
' NULL as ''Space Used (%)'', ' +
' CASE f.maxsize ' +
' WHEN -1 THEN NULL ' +
' WHEN 0 THEN f.size*8/1024.00 ' +
' ELSE f.maxsize*8/1024.00 ' +
' END as ''Max Size (MB)'', ' +
' CASE ' +
' WHEN (1048576&f.status) = 1048576 THEN (growth/100.00)*(f.size*8/1024.00) ' +
' WHEN f.growth =0 THEN NULL ' +
' ELSE f.growth*8/1024.00 ' +
' END as ''Next Allocation (MB)'', ' +
' CASE ' +
' WHEN (1048576&f.status) = 1048576 THEN ''Percentage'' ' +
' ELSE ''Pages'' ' +
' END as ''Growth Type'', ' +
' f.fileid, ' +
' f.groupid, ' +
' filename, ' +
' getdate() ' +
' FROM [' + @DBName + '].dbo.sysfiles f'
INSERT #TempForFileStats
EXECUTE(@SQLString)
------------------------------------------------------------------------
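-- DBCC SHOWFILESTATS reports total and used extents for the data files (1 extent = 64 KB)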
SET @SQLString = 'USE [' + @DBName + '] DBCC SHOWFILESTATS'
INSERT #TempForDataFile
EXECUTE(@SQLString)
--
UPDATE #TempForFileStats
SET [Space Used (MB)] = s.[Used Extents]*64/1024.00,
[Space Used (%)] = (s.[Used Extents]*64/10.24) / f.[Size (MB)]
FROM #TempForFileStats f,
#TempForDataFile s
WHERE f.[File Id] = s.[File Id]
AND f.[Group Id] = s.[Group Id]
AND f.[Database Name] = @DBName
--
TRUNCATE TABLE #TempForDataFile
-------------------------------------------------------------------------
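-- DBCC LOGINFO lists the virtual log files; VLFs with Status <> 0 are active (still in use)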
SET @SQLString = 'USE [' + @DBName + '] DBCC LOGINFO'
INSERT #TempForLogFile
EXECUTE(@SQLString)
--
UPDATE #TempForFileStats
SET [Space Used (MB)] = (SELECT (MIN(l.[Start Offset]) +
SUM(CASE
WHEN l.Status <> 0 THEN l.[Size (Bytes)]
ELSE 0
END))/1048576.00
FROM #TempForLogFile l
WHERE l.[File Id] = f.[File Id]),
[Space Used (%)] = (SELECT (MIN(l.[Start Offset]) +
SUM(CASE
WHEN l.Status <> 0 THEN l.[Size (Bytes)]
ELSE 0
END))/10485.76
FROM #TempForLogFile l
WHERE l.[File Id] = f.[File Id])/ f.[Size (MB)]
FROM #TempForFileStats f
WHERE f.[Database Name] = @DBName
AND f.[Usage Type] = 'Log'
--
TRUNCATE TABLE #TempForLogFile
-------------------------------------------------------------------------
FETCH NEXT FROM c_db INTO @DBName
END
CLOSE c_db
DEALLOCATE c_db
SELECT * FROM #TempForFileStats
--where [space used (%)] > 85
------------
DROP TABLE #TempForFileStats
DROP TABLE #TempForDataFile
DROP TABLE #TempForLogFile
September 29, 2004 at 8:47 am
And this is a sample query:
DBCC SHRINKFILE(Test_Data1,1,NOTRUNCATE)
I put "1" for the size, but with the NOTRUNCATE option it doesn't matter, so I just put 1 every time.
And here is a simple query that will generate a script to shrink all the data files and log files. Make sure to run it from Query Analyzer in "Results in Text" (Ctrl+T) mode, then copy the output back into the query pane and run it.
USE <DBNAME>
SET NOCOUNT ON
GO
select 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'
GO
Select 'DBCC SHRINKFILE(' + rtrim(convert(char(30),name))+ ',' + '1,NOTRUNCATE)' + char(13) + 'GO' from sysfiles
GO
Select 'DBCC SHRINKFILE(' + rtrim(convert(char(30),name))+ ',' + '1,TRUNCATEONLY)' + char(13) + 'GO' from sysfiles
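For the SalesDb example above, the generated output would look roughly like this (SalesDB_Log is only a guess at the log file's logical name):
BACKUP LOG SalesDb WITH TRUNCATE_ONLY
DBCC SHRINKFILE(SalesDB_Data,1,NOTRUNCATE)
GO
DBCC SHRINKFILE(SalesDB_Log,1,NOTRUNCATE)
GO
DBCC SHRINKFILE(SalesDB_Data,1,TRUNCATEONLY)
GO
DBCC SHRINKFILE(SalesDB_Log,1,TRUNCATEONLY)
GO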
October 5, 2004 at 3:39 am
This is a reply from Jeremy (a member of the SQL Magazine forum) which works best for me when I want to shrink a DB (I suggest you make a log backup first):
SQL Server sometimes has a problem with shrinking the TLog. If you are not backing up the transaction log as a part of your data recovery plan, I recommend having truncate log on checkpoint enabled. This will help ease further issues.
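On SQL Server 2000 that can be done like this (a sketch; database_name is a placeholder):
EXEC sp_dboption 'database_name', 'trunc. log on chkpt.', 'true'
-- or, equivalently, switch to the simple recovery model:
-- ALTER DATABASE database_name SET RECOVERY SIMPLE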
For unruly database transaction logs, I use the following methodology:
1. First try to shrink the tlog using DBCC Shrinkfile (tlog_filename). After this has completed, run DBCC loginfo on the database of interest. If the last record returned has a status = 2 (active) and there are several entries before it with a status = 0 (inactive) you will need to run some transactions to push the active entry to the beginning of the log (step 2).
2. Run the following script
USE database_name
GO
CREATE TABLE log_shrinker (char1 char(4000))
GO
DECLARE @i INT
WHILE (1 = 1)
BEGIN
  -- generate a burst of small transactions to push the active entry forward in the log
  SELECT @i = 0
  WHILE (@i < 100)
  BEGIN
    INSERT log_shrinker VALUES ('a')
    SELECT @i = @i + 1
  END
  TRUNCATE TABLE log_shrinker
  BACKUP LOG database_name WITH TRUNCATE_ONLY
END
This is a script which contains an infinite loop, so it will need to be stopped manually. It will push the active entry to the beginning of the log so that the rest of the log can be truncated. I recommend running it for a minimum of 40 minutes for an 8 GB tlog and then stopping it manually.
3. Try running BACKUP LOG database_name WITH TRUNCATE_ONLY
4. Now run DBCC Shrinkfile (tlog_filename) again, if the tlog hasn't already shrunk
5. If the Tlog is still huge, run the script in step (2) again. This should do the trick.
Note: This is a manual process which requires you to stop the infinite-loop query yourself. I have used it many times to shrink both live (production) databases and development databases which have grown too large. Sometimes the script in step (2) needs to be run for a longer time. You can periodically check the tlog using DBCC LOGINFO to ensure that the active entry in the log is moving to the beginning of the log file.
Hope this helps!
Alamir Mohamed
Alamir_mohamed@yahoo.com