November 1, 2012 at 8:52 pm
Comments posted to this topic are about the item Shrinking all log files on SQL Server 2008R2
Igor Micev, My blog: www.igormicev.com
November 2, 2012 at 11:55 am
It is almost never recommended to shrink database or log files, even less so all at once, and absolutely not in a scheduled job. Shrinking log files creates filesystem-level fragmentation as the logs grow again. If your autogrow settings are too small, this can be a serious problem - see Kimberly Tripp's article http://www.sqlskills.com/blogs/kimberly/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
Additionally, this shrink doesn't account for taking a transaction log backup first for databases not in SIMPLE recovery mode, or for the repeated cycles often required to move the active VLF away from the middle or end of the log file.
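For databases in FULL or BULK_LOGGED recovery, the backup-then-shrink cycle described above might be sketched like this (the database name, log file name, and backup path here are placeholders, not part of the original script):

```sql
-- Sketch only: run per database, not as a blind loop over the instance.
-- Assumes a FULL/BULK_LOGGED database [MyDb] with log file [MyDb_log];
-- both names and the backup path are hypothetical.
USE [MyDb];

-- 1) Back up the log so inactive VLFs can be marked reusable.
BACKUP LOG [MyDb] TO DISK = N'X:\Backups\MyDb_log.trn';

-- 2) Attempt the shrink; use a sensible target size in MB rather than zero,
--    to avoid immediate autogrowth (and the VLF fragmentation that follows).
DBCC SHRINKFILE (N'MyDb_log', 256);

-- 3) If the active VLF sits at the end of the file, the shrink stops there;
--    repeat the backup/shrink pair until the file reaches the target size.
```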
November 2, 2012 at 12:06 pm
OK, you're joking right??
Firstly, TRUNCATEONLY does not apply to T-log files!
A wholesale shrink of all log files on the instance, and you believe this will be beneficial?
What do you do for an encore, shrink the data files then rebuild the indexes?
Incidentally, do you carry out index maintenance on your SQL Server instance?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 2, 2012 at 12:09 pm
Perry, excellent point on TRUNCATEONLY only applying to data files (and all arguments about 'do not shrink' apply to data files just as well as to log files).
For anyone wondering, here's a reference: http://technet.microsoft.com/en-us/library/ms189493.aspx
November 2, 2012 at 12:13 pm
Furthermore, the script targets files with the ".ldf" extension. What happens if a DBA creates a database and gives one of its data files the ".ldf" extension? 😉
At least filter the files in sys.master_files by their type (0 = rows, 1 = log)
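The suggested filter can be sketched directly against the catalog view, where type = 1 (type_desc = 'LOG') identifies transaction log files regardless of their physical extension:

```sql
-- Sketch: select log files by type rather than by physical extension.
SELECT db_name(mf.database_id) AS database_name,
       mf.name                 AS logical_name,
       mf.physical_name
FROM sys.master_files AS mf
WHERE mf.type = 1                          -- log files only (type_desc = 'LOG')
  AND mf.database_id NOT IN (1, 2, 3, 4);  -- skip system databases
```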
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
November 2, 2012 at 3:41 pm
Nadrek (11/2/2012)
It is almost never recommended to shrink database or log files, even less so all at once, and absolutely not in a scheduled job. Shrinking log files creates filesystem-level fragmentation as the logs grow again. If your autogrow settings are too small, this can be a serious problem - see Kimberly Tripp's article http://www.sqlskills.com/blogs/kimberly/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
Additionally, this shrink doesn't account for taking a transaction log backup first for databases not in SIMPLE recovery mode, or for the repeated cycles often required to move the active VLF away from the middle or end of the log file.
Hi All
The idea of this script is to be used in testing and development environments. Of course you should never shrink LOG files in a production environment, except in emergency cases.
I want to share my experience here because part of my work is restoring databases every day and making massive inserts and updates, so the LOG file can grow too large, and I find this script useful. Once again, its purpose was not intended for production environments.
Thank you
IgorMi
Igor Micev, My blog: www.igormicev.com
November 2, 2012 at 3:46 pm
Perry Whittle (11/2/2012)
OK, you're joking right??
Firstly, TRUNCATEONLY does not apply to T-log files!
A wholesale shrink of all log files on the instance, and you believe this will be beneficial?
What do you do for an encore, shrink the data files then rebuild the indexes?
Incidentally, do you carry out index maintenance on your SQL Server instance?
Hi Perry,
Of course it is not for production. I needed it on test and development environments, searched and didn't find anything useful, so I wrote it myself. I'm just sharing it and pointing out that it is recommended only for testing and dev instances.
Regards
IgorMi
Igor Micev, My blog: www.igormicev.com
May 5, 2016 at 7:09 am
We have a use for this in our testing environment. Thanks.
December 10, 2018 at 10:32 am
This script didn't work on a SharePoint server because of the awfully long database names SharePoint uses. Enclosing the database name in the USE statement in brackets [] fixed the problem. On 2012 (at least) you need to join [master_files] to [databases] to check whether the database is online, because the file status shows online even when the database is offline.
Thank you for the script!
DECLARE @logname nvarchar(256)
DECLARE @dbname nvarchar(256)
DECLARE @dynamic_command nvarchar(2048)

SET @dynamic_command = NULL

-- Cursor over online log files (matched here by the .ldf extension),
-- skipping system databases. The join to sys.databases filters out files
-- whose database is offline, since sys.master_files can still report a
-- file as online when its database is not.
DECLARE log_cursor CURSOR FOR
    SELECT db_name(mf.database_id), mf.name
    FROM sys.master_files mf
    JOIN sys.databases db ON db.database_id = mf.database_id
    WHERE mf.database_id NOT IN (1, 2, 3, 4) -- avoid system databases
      AND mf.name NOT LIKE 'ReportServer$%'
      AND RIGHT(mf.physical_name, 4) = '.ldf'
      AND mf.state_desc = 'ONLINE'
      AND db.state_desc = 'ONLINE'

OPEN log_cursor
FETCH NEXT FROM log_cursor INTO @dbname, @logname

WHILE @@fetch_status = 0
BEGIN
    -- The database name is bracketed so long or unusual names
    -- (e.g. SharePoint's) do not break the USE statement.
    SET @dynamic_command = N'USE [' + @dbname + N'] DBCC SHRINKFILE(N''' + @logname + N''', 0, TRUNCATEONLY)'
    PRINT @dynamic_command
    EXEC sp_executesql @dynamic_command

    FETCH NEXT FROM log_cursor INTO @dbname, @logname
    SET @dynamic_command = NULL
END

CLOSE log_cursor
DEALLOCATE log_cursor