May 12, 2005 at 7:20 am
On one of my SQL Server 7 installations I have a database whose log file was set to auto-grow. It grew to about 12 GB over the past while, due to some ad hoc work that was done. This will not happen again, so I figured I would shrink the file and release some of that space back to the OS. The problem is that I cannot get the file to shrink. I tried DBCC SHRINKFILE, and shrinking the file through the GUI, but no luck either way. When I use the GUI it says it has shrunk the file successfully, but when I look at the file it is the same size. Anyone have any ideas?
May 12, 2005 at 7:25 am
There's a lot of good information and links in this thread:
http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=24&messageid=122690
May 13, 2005 at 2:35 am
Hi,
Try backing up the log with TRUNCATE_ONLY and then shrinking the file.
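For example, something along these lines (the database name MyDB and logical log file name MyDB_Log are placeholders; substitute your own):

```sql
USE MyDB
GO
-- Discard the inactive portion of the log without writing a backup.
-- Note: this breaks the transaction log backup chain.
BACKUP LOG MyDB WITH TRUNCATE_ONLY
GO
-- Then ask SQL Server to shrink the log file to about 100 MB (target size is in MB).
DBCC SHRINKFILE (MyDB_Log, 100)
GO
```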
May 13, 2005 at 6:38 am
But whatever you do, don't do this on a production system where you want to be able to restore transaction log backups.
If you do truncate the log, you'll need to do a full DB backup as soon as it's finished; this then becomes your restore point.
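A minimal sketch of that follow-up backup (the database name and backup path are placeholders):

```sql
-- Take a full database backup immediately after truncating the log;
-- this backup becomes the new starting point for any restore sequence.
BACKUP DATABASE MyDB TO DISK = 'D:\Backups\MyDB_full.bak' WITH INIT
```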
May 13, 2005 at 6:48 am
It is always best practice to have a backup policy on production servers, in case your log file fills up and you need to shrink the database.
First back up the transaction log, and do a full database backup before you run your SHRINKFILE command on your database.
Thanks
Prasad Bhogadi
www.inforaise.com
May 13, 2005 at 2:30 pm
In SQL 7.0, the log shrink does not happen immediately, and it can only shrink to the first "virtual log file" which is in use. I'm including a stored procedure that "forces" the shrink to occur. Note that the default behavior is to use "truncate_only" and Mike's recommendations about that are extremely important! This procedure was written by Andrew Zanevsky, and modified by me...
use master
go
if object_id( 'sp_af_force_shrink_log' ) is not null drop proc sp_af_force_shrink_log
go
create proc sp_af_force_shrink_log
/*
*************************************************************
Name: sp_af_force_shrink_log
Description:
Shrink transaction log of the current database in SQL Server 7.0.
Switch context to proper db to execute.
Usage: exec sp_af_force_shrink_log <target_percent>, <target MB>, <iterations>, <backup options>
exec pubs..sp_af_force_shrink_log
Author: Andrew Zanevsky, AZ Databases, Inc., 12/25/1999, v5 - 08/18/2000
Input Params:
-------------
@target_percent tinyint. default = 0. Target percentage of remaining shrinkable
space. Defaults to max possible.
@target_size_MB int. default = 10. Target size of final log in MB.
@max_iterations int. default = 1000. Number of loops (max) to run proc through.
@backup_log_opt nvarchar(1000). default = 'with truncate_only'. Backup options.
Output Params:
--------------
Return:
Results:
---------
Locals:
--------
@err Holds error value
Modifications:
--------------
3/2004 - Steve Phelps - made a number of small changes, all commented with the initials SBP
*************************************************************
*/
@target_percent tinyint = 0,
@target_size_MB int = 0,
@max_iterations int = 1000,
@backup_log_opt nvarchar(1000) = 'with truncate_only'
as
-- test stuff SBP
-- use CYB2001
-- GO
--
-- declare
-- @target_percent tinyint ,
-- @target_size_MB int ,
-- @max_iterations int ,
-- @backup_log_opt nvarchar(1000)
--
-- set @target_percent = 0
-- set @target_size_MB = 0
-- set @max_iterations = 1000
-- set @backup_log_opt = 'with truncate_only'
--
-- set @target_size_MB = 20
-- end test stuff
set nocount on
declare
@db sysname,
@last_row int,
@log_size decimal(15,2),
@unused1 decimal(15,2),
@unused decimal(15,2),
@shrinkable decimal(15,2),
@iteration int,
@file_max int,
@file int,
@fileid varchar(5),
@prev_max_iterations int,
@command varchar(500)
select @db = db_name(),
@iteration = 0,
@prev_max_iterations = 2147483647 -- SQL 7.0 max value for int (2^31 - 1); will be reset within the loop SBP
create table #loginfo
(
id int identity,
FileId int,
FileSize numeric(22,0),
StartOffset numeric(22,0),
FSeqNo int,
Status int,
Parity smallint,
TimeorLSN varchar(25))
create unique clustered index loginfo_FSeqNo on #loginfo ( FSeqNo, StartOffset )
create table #logfiles
(
id int identity(1,1),
fileid varchar(5) not null)
insert #logfiles ( fileid ) select convert( varchar, fileid ) from sysfiles where status & 0x40 = 0x40
select @file_max = @@rowcount
if object_id( 'table_to_force_shrink_log' ) is null
exec( 'create table table_to_force_shrink_log ( x nchar(3000) not null )' )
insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount
print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))
select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo
select @unused1 = @unused -- save for later
-- changed this so that it will print with rest of output SBP
print '
iteration ........... = ' + cast(@iteration as varchar(10)) + '
log size, MB ........ = ' + cast(@log_size as varchar(10)) + '
unused log, MB ...... = ' + cast(@unused as varchar(10)) + '
shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '
shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))
-- original proc did not consider @target_size_MB, only @target_percent. modified so that if @target_size_MB is
-- entered, it will take precedence by calculating a new @target_percent. SBP
if @target_size_MB > 0
select @target_percent = (@target_size_MB / @log_size) * 100
else
select @target_size_MB = 10
-- changed @target_percent to + 1, because many times the end result is slightly larger than the target. SBP
while @shrinkable * 100 / @log_size > (@target_percent + 1)
and @iteration < @max_iterations
begin
select @iteration = @iteration + 1 -- this is just a precaution
exec( 'insert table_to_force_shrink_log select name from sysobjects
delete table_to_force_shrink_log')
select @file = 0
while @file < @file_max begin
select @file = @file + 1
select @fileid = fileid from #logfiles where id = @file
-- added @target_size_MB as a parm in dbcc shrinkfile. also moved into @command. SBP
select @command = 'dbcc shrinkfile( ' + @fileid + ',' + rtrim(cast(@target_size_MB as varchar(10))) + ')'
print @command
exec (@command)
end
exec( 'backup log [' + @db + '] ' + @backup_log_opt )
truncate table #loginfo
insert #loginfo ( FileId, FileSize, StartOffset, FSeqNo, Status, Parity, TimeorLSN ) exec ( 'dbcc loginfo' )
select @last_row = @@rowcount
-- The most iterations we really need to do is the number of logical log files, and this should change if the
-- log is shrinking. Therefore, reset @max_iterations within the loop. SBP
select @max_iterations = count(*) from #loginfo
-- If the number of logical log files did not change from last iteration, get out. SBP
if @max_iterations = @prev_max_iterations
select @max_iterations = 0
else
select @prev_max_iterations = @max_iterations
print 'Max iterations = ' + rtrim(cast(@max_iterations as varchar(20)))
select @log_size = sum( FileSize ) / 1048576.00,
@unused = sum( case when Status = 0 then FileSize else 0 end ) / 1048576.00,
@shrinkable = sum( case when id < @last_row - 1 and Status = 0 then FileSize else 0 end ) / 1048576.00
from #loginfo
print '
iteration ........... = ' + cast(@iteration as varchar(10)) + '
log size, MB ........ = ' + cast(@log_size as varchar(10)) + '
unused log, MB ...... = ' + cast(@unused as varchar(10)) + '
shrinkable log, MB .. = ' + cast(@shrinkable as varchar(10)) + '
shrinkable % ........ = ' + cast(convert( decimal(6,2), @shrinkable * 100 / @log_size ) as varchar(10))
end
if @unused1 < @unused
select 'After ' + convert( varchar, @iteration ) +
' iterations the unused portion of the log has grown from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB.'
union all
select 'Since the remaining unused portion is larger than 10 MB,' where @unused > 10
union all
select 'you may try running this procedure again with a higher number of iterations.' where @unused > 10
union all
select 'Sometimes the log will not shrink to a size smaller than several megabytes.' where @unused <= 10
else
select 'It took ' + convert( varchar, @iteration ) +
' iterations to shrink the unused portion of the log from ' +
convert( varchar, @unused1 ) + ' MB to ' +
convert( varchar, @unused ) + ' MB'
-- cleanup
drop table #loginfo
drop table #logfiles
exec( 'drop table table_to_force_shrink_log' )
go
if object_id( 'sp_af_force_shrink_log') Is Null
select 'sp_af_force_shrink_log Not Created'
else
select 'sp_af_force_shrink_log Created'
go
May 15, 2005 at 11:13 pm
Hi,
1. Back up the log and then try to shrink it.
2. If you have replication set up on any tables that carried heavy transactions, and your replication log reader is not running, SQL Server won't free up the log even if you take a full backup.
If this is the case, start the log reader, and you will then see the actual used space go down.
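To see how much of each log is actually in use before and after, you can run:

```sql
-- Reports log file size and percent used for every database on the server
DBCC SQLPERF (LOGSPACE)
```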