May 23, 2007 at 5:20 am
Hi,
We are currently experiencing a problem with one of our databases: the transaction log has grown to 12GB and we need to shrink it. The database was originally in Simple recovery mode, which I mistakenly thought meant we didn't need a transaction log (after researching this problem I discovered I was wrong). I have been trying to shrink the file and it's just not having it. I've done a full backup and backed up the transaction log (after switching back to Full recovery mode), and on both occasions, when I go to the 'Shrink file' option in Enterprise Manager, the 'Space Used' figure, which I was expecting to drop right down, has not changed at all, so there is no free space in the log file to remove.
I have also tried issuing a CHECKPOINT and running 'DBCC SHRINKFILE' and 'BACKUP LOG pubs WITH TRUNCATE_ONLY', with no success.
Can anyone give any suggestions?
May 23, 2007 at 5:24 am
You may need to repeat the procedure until it shrinks.
http://www.support.microsoft.com/kb/256650
-- select db_name()
-- select * from sysfiles
USE Your_Database_Name -- This is the name of the database for which the log will be shrunk.
go
SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
@MaxMinutes INT,
@NewSize INT
-- *** MAKE SURE TO CHANGE THE NEXT 3 LINES WITH YOUR CRITERIA. ***
SELECT @LogicalFileName = 'Your_log', -- Use sp_helpfile to identify the logical file name that you want to shrink.
@MaxMinutes = 10, -- Limit on time allowed to wrap log.
@NewSize = 100 -- in MB
-- Setup / initialize
DECLARE @OriginalSize int
SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName
SELECT 'Original Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
CREATE TABLE DummyTrans
(DummyColumn char (8000) not null)
-- Wrap log and truncate it.
DECLARE @Counter INT,
@StartTime DATETIME,
@TruncLog VARCHAR(255)
SELECT @StartTime = GETDATE(),
@TruncLog = 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'
-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)
EXEC (@TruncLog)
-- Wrap the log if necessary.
WHILE @MaxMinutes > DATEDIFF (mi, @StartTime, GETDATE()) -- time has not expired
AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName) -- the log has not shrunk
AND (@OriginalSize * 8 /1024) > @NewSize -- The value passed in for new size is smaller than the current size.
BEGIN -- Outer loop.
SELECT @Counter = 0
WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
BEGIN -- update
INSERT DummyTrans VALUES ('Fill Log') -- Because it is a char field it inserts 8000 bytes.
DELETE DummyTrans
SELECT @Counter = @Counter + 1
END -- update
EXEC (@TruncLog) -- See if a trunc of the log shrinks it.
END -- outer loop
SELECT 'Final Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
DROP TABLE DummyTrans
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF
Johan
Learn to play, play to learn!
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere
May 24, 2007 at 6:41 am
Hi all, I appreciate that the article http://support.microsoft.com/kb/256650 is SQL Server 7 specific. Further down, the article states: you must make a full database backup after you run either the DBCC SHRINKFILE or the DBCC SHRINKDATABASE commands.
Can I take it that this requirement has been removed for SQL Server 2000? If not, I have had a very different understanding of these commands to date. We have only one instance of SQL 7 and multiple instances of SQL 2000. Thanks, Derek
May 24, 2007 at 7:02 am
The SQL Server log file is used in active chunks.
The log file(s) can only be shrunk back to the point where the active chunk ends. If the active portion is at the beginning of the log file, it will shrink down to the given size.
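A quick way to see how much of each log is actually in use before deciding on a shrink (one row per database on the instance):
-- Log file size and the percentage of the log currently in use, per database
DBCC SQLPERF (LOGSPACE)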
Best practice:
1) Ask whether the shrink is actually needed! (The log has grown for one or more reasons, so it may be better to leave it as it is.)
2) FULL backup before the shrink action.
3) Shrink to the wanted size.
4) FULL backup after the shrink action (a sketch of steps 2 to 4 follows below).
.... work as usual ...
Also keep in mind this may interfere with ongoing operations because of the physical file work involved (the log file is written to synchronously!).
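Assuming a database named YourDb with a logical log file name of YourDb_Log and an existing backup folder (all placeholders; use sp_helpfile to find the real logical name), the sequence looks something like:
-- 2) Full backup before the shrink
BACKUP DATABASE YourDb TO DISK = 'D:\Backups\YourDb_before_shrink.bak'
GO
USE YourDb
GO
-- 3) Shrink the log file down to, say, 100 MB
DBCC SHRINKFILE (YourDb_Log, 100)
GO
-- 4) Full backup after the shrink
BACKUP DATABASE YourDb TO DISK = 'D:\Backups\YourDb_after_shrink.bak'
GO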
Johan
May 24, 2007 at 7:13 am
Thanks ALZDBA, I actually found the answer in an article that was referenced from the one you posted a link to.
Basically there was a transaction that had been left uncommitted for about a week, so the log could never be cleared down and just kept growing.
I should really have checked that myself sooner.
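For anyone else hitting this, checking for an old open transaction is quick (the database name below is just a placeholder):
-- Report the oldest active transaction in the given database, if there is one
DBCC OPENTRAN ('YourDb')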
May 24, 2007 at 7:32 am
Aha, the mysteries of DBCC OPENTRAN.
We learn every day ....
"Learn to play ..... play to learn"
Johan
May 24, 2007 at 7:37 am
Hi ALZDBA, forgive me if I'm a bit slow on the uptake here, and I do appreciate the best practice as you have stated above. I suppose the nitty-gritty of my question is: do DBCC SHRINKFILE and DBCC SHRINKDATABASE behave in the same manner for both SQL 7 and SQL 2000? I am of the impression they do. Thanks, Derek
May 24, 2007 at 7:47 am
... forgive me ...
No problem. The goal of this user group is to help each other.
AFAIK they do behave the same for SQL 7 and SQL 2000.
I haven't used SHRINKDATABASE for years, because SHRINKFILE does just what I want it to do and I want control over which file(s) are involved in the operation.
Johan
May 24, 2007 at 7:50 am
Ah thanks very much Alzdba. Much appreciated. Derek
May 24, 2007 at 7:55 am
If your database is single file, and you can take it off-line for a few minutes, the single fastest way to get rid of a huge log file is to detach it (sp_detach_db) and reattach it using sp_attach_single_file_db. Before reattaching, rename the original log file. This will create a new log file of (I believe) 512K or some such. IMMEDIATELY expand the log file to an appropriate size and also set growth increments for database and log appropriately. Voila! No more 12GB log file (which can now be deleted).
Note: I recommend setting @skipchecks = 'true' when detaching since you are going to immediately reattach. This avoids a full UPDATE STATISTICS sweep, which can be time-consuming.
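A minimal sketch of that sequence, assuming a database named YourDb whose data file lives in D:\Data (names, paths and sizes are placeholders):
USE master
GO
-- Detach, skipping the UPDATE STATISTICS run since we reattach immediately
EXEC sp_detach_db @dbname = 'YourDb', @skipchecks = 'true'
GO
-- Rename or move the old .ldf in the file system, then reattach the data file only
EXEC sp_attach_single_file_db @dbname = 'YourDb', @physname = 'D:\Data\YourDb.mdf'
GO
-- A small new log is created; grow it straight away and set a sensible growth increment
-- (check sp_helpfile for the actual logical log name; YourDb_log is assumed here)
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, SIZE = 1024MB)
GO
ALTER DATABASE YourDb MODIFY FILE (NAME = YourDb_log, FILEGROWTH = 256MB)
GO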
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
May 25, 2007 at 11:43 pm
I had the same problem with two log files: one was 40GB, the other a staggering 176GB. I found a script that I ran against both log files; the 40GB one went down to 100MB and the other went down to 4GB. This is the script, and it works on SQL Server 2000, 2005 and 2008 (beta license running on Windows 2008):
/*
Shrink a named transaction log file belonging to a database
Originally found at;
http://support.microsoft.com/support/kb/articles/q256/6/50.asp
Changes:
28.08.2001
Modified the inner loop so it tested the dx time so long overruns did not happen
Modified the inner loop so it had a fixed minimum quantity so there was no skip in skip out
29.08.2001
Modified the inner loop so it had a dynamic minimum quantity to allow faster shrinkage
24.01.2002
Modified the USE statement so it uses brackets around the dbname
Modified the @TruncLog variable so it uses brackets around the dbname
31.05.2002
Modified the code to use PRINT instead of SELECT in several cases
Modified the code to use @MaxCount instead of two unclear rules
Modified the code to use @Factor instead of several hard-coded values
Commented the use of @Factor
Moved the configuration and @Counter init code to before the start of the first loop to avoid repetition
Modified the code to display the process runtime in seconds rather than minutes
*/
SET NOCOUNT ON
DECLARE @LogicalFileName SYSNAME,
@MaxMinutes INT,
@NewSize INT,
@Factor FLOAT
/*
The process has several control parameters, most of the time you only need to worry about the first four
as these are the big controls whereas the fifth is simply a fine tuning control which rarely needs to
come into play.
*/
--This is the name of the database for which the log will be shrunk.
USE [databasename]
--Use sp_helpfile to identify the logical file name that you want to shrink.
SET @LogicalFileName = 'database_Log';
--Limit on time allowed to wrap log in minutes
SET @MaxMinutes = 5;
--Ideal size of logfile in MB
SET @NewSize =100;
/*
Factor determining maximum number of pages to pad out based on the original number of pages in use
(single page = 8K). Values in the range 1.0 - 0.8 seems to work well for many databases.
Increasing the number will increase the maximum number of pages allowed to be padded, which should
force larger amounts of data to be dropped before the process finishes. Often speeds up shrinking
very large databases which are going through the process before the timer runs out.
Decreasing the number will decrease the maximum number of pages allowed to be padded, which should
force less work to be done. Often aids with forcing smaller databases to shrink to minimum size
when larger values were actually expanding them.
*/
SET @Factor = 1.0;
/*
All code after this point is driven by these parameters and will not require editing unless you need to
fix a bug in the padding/shrinking process itself.
*/
-- Setup / initialize
DECLARE @OriginalSize INT,
@StringData VARCHAR(500)
SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName;
SELECT @StringData = 'Original Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName;
PRINT @StringData;
PRINT ''
--Drop the temporary table if it already exists
IF ( OBJECT_ID('[dbo].[DummyTrans]') IS NOT NULL )
DROP TABLE [DummyTrans]
CREATE TABLE [DummyTrans]( [DummyColumn] CHAR(8000) NOT NULL );
-- Wrap log and truncate it.
DECLARE @Counter INT,
@MaxCount INT,
@StartTime DATETIME,
@TruncLog VARCHAR(500)
-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)
SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY';
EXEC (@TruncLog)
-- Configure limiter
IF @OriginalSize / @Factor > 50000
SET @MaxCount = 50000
ELSE
SET @MaxCount = @OriginalSize * @Factor
-- Attempt to shrink down the log file
PRINT 'Minimum Quantity : '+CAST( @MaxCount AS VARCHAR(10) )
PRINT 'Maximum Time : '+CAST( @MaxMinutes AS VARCHAR(10) )+' minutes ('+CAST( @MaxMinutes*60 AS VARCHAR(10) )+' seconds)'
PRINT ''
SET @Counter = 0;
SET @StartTime = GETDATE();
--loop the padding code to reduce the log while
-- within time limit and
-- log has not been shrunk enough
WHILE (
(@MaxMinutes*60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
(@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
((@OriginalSize * 8 / 1024) > @NewSize)
)
BEGIN --Outer loop.
--pad out the logfile a page at a time while
-- number of pages padded does not exceed our maximum page padding limit
-- within time limit and
-- log has not been shrunk enough
WHILE (
(@Counter < @MaxCount) AND
(@MaxMinutes*60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
(@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
((@OriginalSize * 8 / 1024) > @NewSize)
)
BEGIN --Inner loop
INSERT INTO DummyTrans VALUES ('Fill Log') -- Because it is a char field it inserts 8000 bytes.
DELETE FROM DummyTrans
SELECT @Counter = @Counter + 1
--Every 1,000 cycles tell the user what is going on
IF ROUND( @Counter , -3 ) = @Counter
BEGIN
PRINT 'Padded '+LTRIM( CAST( @Counter*8 AS VARCHAR(10) ) )+'K @ '+LTRIM( CAST( DATEDIFF( ss, @StartTime, GETDATE() ) AS VARCHAR(10) ) )+' seconds';
END
END
--See if a trunc of the log shrinks it.
EXEC( @TruncLog )
END
PRINT ''
SELECT @StringData = 'Final Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName;
PRINT @StringData
PRINT ''
DROP TABLE DummyTrans;
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF
May 29, 2007 at 5:30 am
Another nice reference regarding the impact of shrink operations:
http://www.karaszi.com/SQLServer/info_dont_shrink.asp
Johan
May 30, 2007 at 2:06 am
If you need/want to keep your database online:
"dump tran <db> with truncate_only" followed by a shrink is very quick too
(do make a full db backup before you start making tran dumps again).
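For example (database, file and path names are placeholders; BACKUP LOG ... WITH TRUNCATE_ONLY is the equivalent of the old DUMP TRAN syntax in SQL 2000/2005):
-- Discard the inactive portion of the log without backing it up
BACKUP LOG YourDb WITH TRUNCATE_ONLY
GO
USE YourDb
GO
-- Shrink the log file down to 100 MB
DBCC SHRINKFILE (YourDb_Log, 100)
GO
-- Take a full database backup straight afterwards to re-establish the backup chain
BACKUP DATABASE YourDb TO DISK = 'D:\Backups\YourDb_full.bak'
GO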
May 30, 2007 at 5:50 am
A TRUNCATE_ONLY log backup will not affect the virtual log "devices" (VLFs) inside the file, and thus a shrink is not guaranteed to get you much, if any, space back afterwards. See this topic in the SQL Server 2005 BOL: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/udb9/html/36d7f140-b46a-4b1b-81a5-a2e2dddf88c6.htm
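To see where the active VLFs sit inside the physical log file, DBCC LOGINFO returns one row per VLF for the current database (Status = 2 marks the ones still active; the database name is a placeholder):
USE YourDb
GO
-- One row per virtual log file; Status = 2 means the VLF is still active
DBCC LOGINFO
GO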
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service