April 16, 2007 at 10:28 am
For some reason, one of our databases has its log file filling up very quickly, and when I take a transaction log backup to free up some log space, the operation completes but does not free up space in the log file; the percentage of free space remains the same. Should I take the tran backup again, or is the transaction so big that it is not yet inactive and so is not truncated? What could be the reason, and how can I solve this?
April 16, 2007 at 10:39 am
Hello,
Since you have already taken a transaction log backup, take a full backup before and after shrinking the log file; that should free up the space you need.
Look for ShrinkFile in BOL.
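For example, the sequence could look something like this - MyDB, MyDB_Log, and the backup paths are placeholders, and sp_helpfile will give you the real logical file name:
-- Placeholder names: replace MyDB / MyDB_Log / the paths with your own
BACKUP DATABASE [MyDB] TO DISK = 'D:\Backups\MyDB_full_before.bak'
BACKUP LOG [MyDB] TO DISK = 'D:\Backups\MyDB_log.trn'
DBCC SHRINKFILE ('MyDB_Log', 100)   -- target size in MB
BACKUP DATABASE [MyDB] TO DISK = 'D:\Backups\MyDB_full_after.bak'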
Thanks
Lucky
April 16, 2007 at 4:31 pm
If there are any active transactions, the log file will not shrink. Try DBCC OPENTRAN to see if there are any open transactions.
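Something along these lines will show how full the log really is and whether an open transaction is pinning it (the database name is a placeholder):
DBCC SQLPERF (LOGSPACE)   -- log size and percentage used for every database
USE [MyDB]                -- placeholder database name
DBCC OPENTRAN             -- reports the oldest active transaction in the current database, if any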
April 17, 2007 at 1:24 am
Freeing the log will not always succeed, because the free space is not always at the end of the file.
I always use the following script to get the job done:
SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
        @MaxMinutes INT,
        @NewSize INT

-- *** MAKE SURE TO CHANGE THE NEXT 4 LINES WITH YOUR CRITERIA. ***
USE [QissQate]                           -- The name of the database whose log will be shrunk.
SELECT  @LogicalFileName = 'Qiss_Log',   -- Use sp_helpfile to identify the logical file name to shrink.
        @MaxMinutes = 10,                -- Limit on time allowed to wrap the log.
        @NewSize = 10                    -- Target size in MB.

-- Setup / initialize
DECLARE @OriginalSize int
SELECT @OriginalSize = size              -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName

SELECT 'Original size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), @OriginalSize) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (@OriginalSize * 8 / 1024)) + ' MB'
FROM sysfiles
WHERE name = @LogicalFileName

CREATE TABLE DummyTrans
    (DummyColumn char(8000) NOT NULL)

-- Wrap the log and truncate it.
DECLARE @Counter INT,
        @StartTime DATETIME,
        @TruncLog VARCHAR(255)
SELECT  @StartTime = GETDATE(),
        @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY'

-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)
EXEC (@TruncLog)

-- Wrap the log if necessary.
WHILE @MaxMinutes > DATEDIFF(mi, @StartTime, GETDATE())   -- time has not expired
  AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)   -- the log has not shrunk
  AND (@OriginalSize * 8 / 1024) > @NewSize   -- the requested new size is smaller than the current size
BEGIN -- Outer loop.
    SELECT @Counter = 0
    WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
    BEGIN -- update
        INSERT DummyTrans VALUES ('Fill Log')   -- A char(8000) column, so each insert logs 8000 bytes.
        DELETE DummyTrans
        SELECT @Counter = @Counter + 1
    END -- update
    EXEC (@TruncLog)   -- See if a truncation of the log shrinks it now.
END -- outer loop

SELECT 'Final size of ' + db_name() + ' LOG is ' +
       CONVERT(VARCHAR(30), size) + ' 8K pages or ' +
       CONVERT(VARCHAR(30), (size * 8 / 1024)) + ' MB'
FROM sysfiles
WHERE name = @LogicalFileName

DROP TABLE DummyTrans
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF
April 17, 2007 at 3:10 am
It seems that some people are confusing two completely different things.
a) Releasing space in a log file via a log backup
b) Reducing the size of a log file via a SHRINKFILE
You should do a) regularly - we run log backups every hour. This will release space in the log for re-use. The space released will consist of entire logical log files (see BOL for what this means), but the logical log file containing the start of the oldest active transaction will not be released. If you have a transaction active for a number of days, then you may find your log remains full even after a log backup. Use sp_who2 to find old transactions, and either kill them, get the application changed to run a COMMIT (see BOL for what this means...), or buy shedloads of disk space to hold gigantic log files.
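As a rough sketch of that hunt (the SPID below is made up):
DBCC OPENTRAN   -- shows the oldest active transaction and the SPID that owns it
EXEC sp_who2    -- cross-reference that SPID to find the login and program
-- KILL 53      -- hypothetical SPID; a last resort, and remember the rollback is itself logged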
You should not do b) regularly, and normally you should not do it at all. A good rule of thumb: do not shrink a SQL file if you expect it to grow again within 3 months. Every time a SQL file shrinks and grows you get Windows disk file fragmentation, which will harm your performance. You also take a performance hit as the file grows, and again when you do the shrink, and you will have to run a Windows disk defrag to cure the fragmentation. If your application needs a log file of (say) 10 GB once a week but only 50 MB otherwise, you should let it remain at 10 GB permanently.
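If the log genuinely needs to stay large, one option is to size it once with a sensible growth increment instead of relying on many small auto-grows - the names and sizes here are purely illustrative:
ALTER DATABASE [MyDB]
MODIFY FILE (NAME = 'MyDB_Log', SIZE = 10240MB, FILEGROWTH = 512MB)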
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
April 17, 2007 at 1:47 pm
In 123DBA's script, the line "@TruncLog = 'BACKUP LOG ['+ db_name() + '] WITH TRUNCATE_ONLY'" is a sleeper: once you have run this script, your restore path is invalid until you take a full backup.
Don't run this script as a job - I have spent many hours getting rid of this script from jobs on our production servers. I also had to educate our users: if we do regular T-Log backups (in our case, a 4-hour cycle is sufficient), log files rarely grow too large, and the performance hit from auto-growing log files multiple times is more painful (to the users) than the hit to HDD space.
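If you suspect this script has already been run somewhere, one way to check whether the restore path is still intact (assuming a database called MyDB) is to look at when the last full backup finished:
SELECT TOP 1 backup_finish_date
FROM msdb.dbo.backupset
WHERE database_name = 'MyDB' AND type = 'D'   -- 'D' = full database backup
ORDER BY backup_finish_date DESC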
April 18, 2007 at 4:02 am
Simon... in defence of 123DBA, if you follow his script you will see that it does print a message at the end of the run telling you to do a full backup. But I agree you wouldn't want this running as an everyday job; I use something similar for emergencies only, or for special one-off cases.
That said, I agree you would only want to run this if your tlog has grown in such a way as to take up all the drive space, is causing errors because it cannot grow, or needs resizing after an exceptional event made it grow well outside the norm (maybe you forgot to switch to the bulk-logged recovery model for a large import, or something similar). Otherwise, don't shrink if it's not necessary.
We have tlog backups that run every hour on some servers, every half hour on others, and every 10 minutes on some of our heavily used OLTP DBs, in an attempt to keep tlog size reasonable and performance good. I've even worked at a place that required tlog backups to kick off immediately after the last backup finished, because of the availability/recovery requirements and transaction throughput.
The question for Bowled23 is: why do you want to shrink it down? Is it causing space issues for you?
April 18, 2007 at 2:13 pm
You can set up a job to free up space by shrinking or truncating the trans log.
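If you do, make that job a plain log backup rather than a shrink. A minimal job step might run something like this (the database name and path are placeholders):
DECLARE @file VARCHAR(260)
SELECT @file = 'D:\Backups\MyDB_log_'
    + CONVERT(VARCHAR(8), GETDATE(), 112)                      -- yyyymmdd
    + REPLACE(CONVERT(VARCHAR(8), GETDATE(), 108), ':', '')    -- hhmmss
    + '.trn'
BACKUP LOG [MyDB] TO DISK = @file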
April 19, 2007 at 3:09 pm
The one thing no one is talking about here is why the file grew so quickly.
I usually find that when my log file jumps in size, someone is running some type of update that has not fully qualified the WHERE clause and is creating some kind of Cartesian product as part of that transaction.
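When that happens, something like this helps identify the offending statement while it is still running (the SPID is hypothetical):
EXEC sp_who2             -- look for the SPID with runaway CPU/IO
DBCC INPUTBUFFER (53)    -- hypothetical SPID: shows the last statement that connection submitted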
Given this, I would disagree with one of the above posts: a shrink is needed here, to get the reserved file space back down to something that reflects normal usage.
Eric
April 20, 2007 at 6:56 am
True, Eric, but IMHO that's not the specific question Bowled23 is asking, and I'm sure the author will want to know what caused this growth.
That said, if space allows you to keep the tlog at its present size, then yes, I would look into the source of the growth before taking the tlog down to an appropriate size for the workload (plus a little extra). Whatever caused the initial growth spurt may do so again, causing the issues associated with auto-growths of data/tlog files and undoing all the shrinking work.
As with most things in SQL Server and the DB environment, it all depends, which makes it Bowled23's call whether to shrink and to what size... but I would wholeheartedly say, don't set up any scheduled jobs to auto-shrink tlogs on a busy production system... keep control of when and where you do this.