September 22, 2008 at 12:26 am
Comments posted to this topic are about the item Shrink DB and Log
September 23, 2008 at 8:13 am
Can I shrink more than one at a time?
September 25, 2008 at 1:32 pm
Hello,
Looks like this script will only shrink the database log file and not the database (.mdf file). Can I replace the database_log with the database file to shrink that too?
Thanks,
Rudy
PS. Man it shrinks very fast!!
Rudy
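On Rudy's question: DBCC SHRINKFILE works on data files as well as log files, so the same call can target the .mdf by its logical name. A minimal sketch (`[YourDatabase]` and `'YourDatabase_Data'` are placeholder names; run EXEC sp_helpfile in the database to find the real logical file name):

```sql
-- Sketch: shrink the data (.mdf) file rather than the log.
-- [YourDatabase] and 'YourDatabase_Data' are placeholders;
-- EXEC sp_helpfile lists the actual logical file names.
USE [YourDatabase];
DBCC SHRINKFILE ('YourDatabase_Data', 100);  -- target size in MB
```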
November 16, 2008 at 10:52 am
Hi all,
I have SQL 2005 Express and the database size is more than 4 GB. When I try to use the command, I get this
error:
Msg 137, Level 15, State 1, Line 1
Must declare the scalar variable "@pmdb$primavera".
I wrote the command this way:
SET @pmdb$primavera = 'pmdb$primavera';
The name of the database is pmdb$primavera.
Please advise me, as the work is at a standstill.
January 7, 2009 at 1:15 pm
Do you have any script to shrink log files for a mirrored DB (it's huge, around 500 GB) without breaking mirroring?
July 30, 2009 at 4:27 am
Hello all,
I ran the following syntax:
====================================
/*
Shrink a named transaction log file belonging to a database
Originally found at;
http://support.microsoft.com/support/kb/articles/q256/6/50.asp
Changes:
28.04.2004
Modified the inner loop so it tested the dx time so long overruns did not happen
Modified the inner loop so it had a fixed minimum quantity so there was no skip in skip out
29.03.2006
Modified the inner loop so it had a dynamic minimum quantity to allow faster shrinkage
24.05.2006
Modified the USE statement so it uses brackets around the dbname
Modified the @TruncLog variable so it uses brackets around the dbname
31.06.2006
Modified the code to use PRINT instead of SELECT in several cases
Modified the code to use @MaxCount instead of two unclear rules
Modified the code to use @Factor instead of several hard-coded values
Commented the use of @Factor
Moved the configuration and @Counter init code to before the start of the first loop to avoid repetition
Modified the code to display the process runtime in seconds rather than minutes
*/
SET NOCOUNT ON
DECLARE @LogicalFileName SYSNAME,
@MaxMinutes INT,
@NewSize INT,
@Factor FLOAT
/*
The process has several control parameters, most of the time you only need to worry about the first four
as these are the big controls whereas the fifth is simply a fine tuning control which rarely needs to
come into play.
*/
--This is the name of the database for which the log will be shrunk.
USE [cera]
--Use sp_helpfile to identify the logical file name that you want to shrink.
SET @LogicalFileName = 'cera_Log';
--Limit on time allowed to wrap log in minutes
SET @MaxMinutes = 5;
--Ideal size of logfile in MB
SET @NewSize =100;
/*
Factor determining maximum number of pages to pad out based on the original number of pages in use
(single page = 8K). Values in the range 1.0 - 0.8 seems to work well for many databases.
Increasing the number will increase the maximum number of pages allowed to be padded, which should
force larger amounts of data to be dropped before the process finishes. Often speeds up shrinking
very large databases which are going through the process before the timer runs out.
Decreasing the number will decrease the maximum number of pages allowed to be padded, which should
force less work to be done. Often aids with forcing smaller databases to shrink to minimum size
when larger values were actually expanding them.
*/
SET @Factor = 1.0;
/*
All code after this point is driven by these parameters and will not require editing unless you need to
fix a bug in the padding/shrinking process itself.
*/
-- Setup / initialize
DECLARE @OriginalSize INT,
@StringData VARCHAR(500)
SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName;
SELECT @StringData = 'Original Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName;
PRINT @StringData;
PRINT ''
--Drop the temporary table if it already exists
IF ( OBJECT_ID('[dbo].[DummyTrans]') IS NOT NULL )
DROP TABLE [DummyTrans]
CREATE TABLE [DummyTrans]( [DummyColumn] CHAR(8000) NOT NULL );
-- Wrap log and truncate it.
DECLARE @Counter INT,
@MaxCount INT,
@StartTime DATETIME,
@TruncLog VARCHAR(500)
-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)
SET @TruncLog = 'BACKUP LOG [' + db_name() + '] WITH TRUNCATE_ONLY';
EXEC (@TruncLog)
-- Configure limiter
IF @OriginalSize / @Factor > 50000
SET @MaxCount = 50000
ELSE
SET @MaxCount = @OriginalSize * @Factor
-- Attempt to shrink down the log file
PRINT 'Minimum Quantity : '+CAST( @MaxCount AS VARCHAR(10) )
PRINT 'Maximum Time : '+CAST( @MaxMinutes AS VARCHAR(10) )+' minutes ('+CAST( @MaxMinutes*60 AS VARCHAR(10) )+' seconds)'
PRINT ''
SET @Counter = 0;
SET @StartTime = GETDATE();
--loop the padding code to reduce the log while
-- within time limit and
-- log has not been shrunk enough
WHILE (
(@MaxMinutes*60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
(@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
((@OriginalSize * 8 / 1024) > @NewSize)
)
BEGIN --Outer loop.
--pad out the logfile a page at a time while
-- number of pages padded does not exceed our maximum page padding limit
-- within time limit and
-- log has not been shrunk enough
WHILE (
(@Counter < @MaxCount) AND (@MaxMinutes * 60 > DATEDIFF(ss, @StartTime, GETDATE())) AND
(@OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName)) AND
((@OriginalSize * 8 / 1024) > @NewSize)
)
BEGIN --Inner loop
INSERT INTO DummyTrans VALUES ('Fill Log') -- Because it is a char field it inserts 8000 bytes.
DELETE FROM DummyTrans
SELECT @Counter = @Counter + 1
--Every 1,000 cycles tell the user what is going on
IF ROUND( @Counter , -3 ) = @Counter
BEGIN
PRINT 'Padded '+LTRIM( CAST( @Counter*8 AS VARCHAR(10) ) )+'K @ '+LTRIM( CAST( DATEDIFF( ss, @StartTime, GETDATE() ) AS VARCHAR(10) ) )+' seconds';
END
END
--See if a trunc of the log shrinks it.
EXEC( @TruncLog )
END
PRINT ''
SELECT @StringData = 'Final Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName;
PRINT @StringData
PRINT ''
DROP TABLE DummyTrans;
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF
====================================
but I got the following error:
Msg 155, Level 15, State 1, Line 1
'TRUNCATE_ONLY' is not a recognized BACKUP option.
I am using SQL Server 2008.
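For what it's worth, that error is expected: the BACKUP LOG ... WITH TRUNCATE_ONLY option was removed in SQL Server 2008. A commonly cited workaround is to switch the database to the SIMPLE recovery model, shrink, and switch back, though note this breaks the log backup chain exactly as TRUNCATE_ONLY did. A sketch only (`[cera]` and `'cera_Log'` are the names from the posted script; substitute your own):

```sql
-- SQL Server 2008+: TRUNCATE_ONLY is gone. Workaround sketch.
-- WARNING: switching to SIMPLE breaks the log backup chain,
-- just like TRUNCATE_ONLY did. Understand the implications first.
ALTER DATABASE [cera] SET RECOVERY SIMPLE;
DBCC SHRINKFILE ('cera_Log', 100);           -- target size in MB
ALTER DATABASE [cera] SET RECOVERY FULL;
-- Take a full backup afterwards to restart the backup chain.
```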
July 30, 2009 at 5:27 am
I wouldn't think that would be a problem, although I haven't tried it myself. As long as you back up the database before you try this, it shouldn't be a problem. And of course, not during operating hours; it's got to be out of hours, or if you have change control, make sure the database isn't being used.
October 1, 2009 at 10:03 am
Use this to find all DBs, both MDF and LDF; it generates the shrink commands for you.
http://usa.redirectme.net/repriser/sqlserverpub.html
😛
Jason
http://dbace.us
😛
October 2, 2009 at 3:12 pm
If I understand Microsoft correctly, when mirrored, the transaction log gets written to the mirror directly from memory (whether the setting is asynchronous or synchronous), so whatever happens to the primary will happen to the mirror. Shrink is a resource-intensive operation (imagine), so I don't know what it will do in reality, depending on your environment. If you are on a 70 Kb pipe with a huge amount to shrink, it may not do well.
Jason
http://dbace.us
😛
May 6, 2013 at 6:12 am
I hate to be negative, but it's extremely unfortunate that this script truncates the log, breaking the log backup chain, and that isn't explained anywhere in the script or the comments. Even if it was explained, this resets all your disaster recovery options afterwards to starting with the full backup it advises you to take at the end. Previous backups cannot be used to recover past the log truncation.
Be extremely wary about using this script in production. There's a reason we (as I was on the SQL team at the time) removed the TRUNCATE_ONLY/NO_LOG options to BACKUP LOG in SQL Server 2008.
[Edit: there's now a comment added at the bottom of the intro about understanding the implications - it wasn't there earlier today when I wrote this comment.]
Thanks
Paul Randal
CEO, SQLskills.com: Check out SQLskills online training!
Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005
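Paul's point above implies the safe alternative: keep the log backup chain intact by backing the log up to disk before shrinking, rather than truncating it. A minimal sketch (`[cera]`, `'cera_Log'`, and the backup path are placeholders borrowed from the posted script):

```sql
-- Sketch: the non-chain-breaking approach. Back up the log
-- (preserving the backup chain), then shrink the file.
-- [cera], 'cera_Log', and the path are placeholder names.
USE [cera];
BACKUP LOG [cera] TO DISK = N'D:\Backups\cera_log.trn';
DBCC SHRINKFILE ('cera_Log', 100);  -- target size in MB
```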
May 6, 2013 at 6:47 am
I have to agree with Paul.
I make my living as a DBA and this is not good.
Several things about the script would make most professional DBAs wish they never had to repair a server it was not correctly run on.
The main reason anyone would need to run this script is that the database backup and maintenance plan does not accommodate the transaction volume or growth of the database.
Please be very careful if you try to use this script. 😎
May 6, 2013 at 7:31 am
Paul Randal (5/6/2013)
I hate to be negative, but it's extremely unfortunate that this script truncates the log, breaking the log backup chain, and that isn't explained anywhere in the script or the comments. Even if it was explained, this resets all your disaster recovery options afterwards to starting with the full backup it advises you to take at the end. Previous backups cannot be used to recover past the log truncation.Be extremely wary about using this script in production. There's a reason we (as I was on the SQL team at the time) removed the TRUNCATE_ONLY/NO_LOG options to BACKUP LOG in SQL Server 2008.
Thanks
Thank you for explaining. I had a suspicion that something was scary about this script.
There are comments, but there are no explanations. I kept looking for some information on how/why this series of steps was effecting the proposed change. I appreciate that nobody should be running code they find online without understanding exactly what it does... but it would be nice to be told that inline with the commands rather than it becoming a research project for me.
An overview of why this solution is 'better' than the tools provided by Microsoft would be an interesting read too. My guess is that you don't use this script on trivial databases because it isn't necessary. I read Paul's suggestion that you don't use this script on appreciably large/important databases because it is dangerous. Please explain the circumstances where this script _should_ be used (caveats, risks, etc.)
Also, if you are so inclined - share with us any "gone wrong" experiences while writing/debugging this script if you thrashed a DB before you got it working 🙂
May 6, 2013 at 10:21 am
In fairness the OP does state: "...Make sure that you understand the implications of this script on the backup and recovery of your database(s): Managing Transaction Logs."
The linked article was: http://www.sqlservercentral.com/articles/Administration/64582/
I agree with other comments posted to use caution and be wary of using the script. To that end, perhaps the OP could have made this above statement more prominent or provided ample warning/comments within the script.
As with consuming any scripts posted on the internet, one should be reviewing the script and understanding its use/impact and testing within their own environment before moving into production.
May 6, 2013 at 12:19 pm
@bitBIG - that comment about understanding the implications was added this morning.
Paul Randal
CEO, SQLskills.com: Check out SQLskills online training!
Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005
May 6, 2013 at 12:31 pm
Thanks for noting that Paul.
It would appear that the OP has taken your contribution/comments to heart and (has now) provided the casual reader with some guidance.