July 17, 2009 at 5:40 am
ALZDBA (7/16/2009)
shrinking a log file will not move log data ! A log file can only be shrunk to the last active virtual log !
Hence, you may have to execute the shrink procedure more than once !
e.g.
--INF: How to Shrink the SQL Server 7.0 Transaction Log
-- SQL7    http://support.microsoft.com/kb/256650
-- SQL2000 http://support.microsoft.com/kb/272318/en-us
-- SQL2005 http://support.microsoft.com/kb/907511/en-us
-- select db_name()
-- select * from sysfiles
-- THIS SCRIPT IS NOT INTENDED FOR SCHEDULED USAGE !! READ BOL and the urls !!
SET NOCOUNT ON
DECLARE @LogicalFileName sysname,
@MaxMinutes INT,
@NewSize INT
-- *** MAKE SURE TO CHANGE THE NEXT 3 LINES WITH YOUR CRITERIA. ***
SELECT @LogicalFileName = 'logicalname', -- Use sp_helpfile to identify the logical file name that you want to shrink.
@MaxMinutes = 10, -- Limit on time allowed to wrap log.
@NewSize = 100 -- in MB
-- Setup / initialize
DECLARE @OriginalSize int
SELECT @OriginalSize = size -- in 8K pages
FROM sysfiles
WHERE name = @LogicalFileName
SELECT 'Original Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),@OriginalSize) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(@OriginalSize*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
CREATE TABLE DummyTrans
(DummyColumn char (8000) not null)
-- Wrap log and truncate it.
DECLARE @Counter INT,
@StartTime DATETIME,
@TruncLog VARCHAR(255)
SELECT @StartTime = GETDATE(),
@TruncLog = 'BACKUP LOG ' + db_name() + ' WITH TRUNCATE_ONLY'
-- Try an initial shrink.
DBCC SHRINKFILE (@LogicalFileName, @NewSize)
EXEC (@TruncLog)
-- Wrap the log if necessary.
WHILE @MaxMinutes > DATEDIFF (mi, @StartTime, GETDATE()) -- time has not expired
AND @OriginalSize = (SELECT size FROM sysfiles WHERE name = @LogicalFileName) -- the log has not shrunk
AND (@OriginalSize * 8 /1024) > @NewSize -- The value passed in for new size is smaller than the current size.
BEGIN -- Outer loop.
SELECT @Counter = 0
WHILE ((@Counter < @OriginalSize / 16) AND (@Counter < 50000))
BEGIN -- update
INSERT DummyTrans VALUES ('Fill Log') -- Because it is a char field it inserts 8000 bytes.
DELETE DummyTrans
SELECT @Counter = @Counter + 1
END -- update
EXEC (@TruncLog) -- See if a trunc of the log shrinks it.
END -- outer loop
SELECT 'Final Size of ' + db_name() + ' LOG is ' +
CONVERT(VARCHAR(30),size) + ' 8K pages or ' +
CONVERT(VARCHAR(30),(size*8/1024)) + 'MB'
FROM sysfiles
WHERE name = @LogicalFileName
DROP TABLE DummyTrans
PRINT '*** Perform a full database backup ***'
SET NOCOUNT OFF
Thank you ALZDBA. I will look over the code and see how it works out for me. Based on your comments and Ravi's comments, I think I need to read up on virtual logs a bit more.
Cathy
July 17, 2009 at 5:55 am
Ravi (7/17/2009)
What are the scenarios that I am looking at if I shrink the file again, only this time, setting the target size to 12GB? In other words, what are the negatives to doing this and what risks am I taking?
There are no negatives if you shrink your ldf file and set it to 12 GB. Just ensure that you have autogrow enabled and the growth increment is sufficiently large so that you don't end up with several hundred virtual log files in your ldf.
It is not guaranteed that your log file will not grow over 12 GB. It is just a general rule that an ldf file can be sized at around 1/4th the size of the mdf file. But, you do come across exceptions. Take a look at the sizes of your transaction log backups over the last several days and that should give you a good idea of the amount of activity. Depending upon these backup sizes, you can adjust your ldf file size.
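If it helps, here is one way to pull those recent log backup sizes out of msdb (just a sketch; the database name is a placeholder you'd swap for your own):
SELECT backup_start_date,
       backup_size / 1048576.0 AS backup_size_mb      -- backup_size is stored in bytes
FROM msdb.dbo.backupset
WHERE database_name = 'YourDatabase'                  -- placeholder name
  AND type = 'L'                                      -- 'L' = transaction log backups
  AND backup_start_date >= DATEADD(day, -7, GETDATE())
ORDER BY backup_start_date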
Ravi, our trn files hover between 100MB and 500MB every hour. If I shrink my ldf to 12GB, does this mean that I have 12GB to grow each hour, before my next backup when the log gets truncated? I guess I'm getting hung up now on the size of the ldf and being able to cope with sizable transactions.
I think that while I thought I had a decent handle on transaction logs, maybe I need to read a little more. Thank you for pointing these things out because I know where to focus my research now.
Am I better off leaving the log at the size it is now or should I really try to keep the size down to 1/4 of the mdf unless proven otherwise by activity on the database?
It depends. Does that cause a shortage of free space on your server drive?
Ravi
We do not currently have hard disk space issues on our production server. Our test server is an entirely different matter. That is actually what called out this problem to me. I tried to restore our production database to our test server and wasn't able to because too much disk space was required.
July 17, 2009 at 5:57 am
It is indeed something you need to know about when trying to handle the files.
BOL has good info at http://msdn.microsoft.com/en-us/library/ms178037%28SQL.90%29.aspx
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
July 17, 2009 at 6:29 am
ALZDBA (7/17/2009)
It is indeed something you need to know about when trying to handle the files.
BOL has good info at http://msdn.microsoft.com/en-us/library/ms178037%28SQL.90%29.aspx
ALZDBA, I paid more attention to the virtual log descriptions and how the actual shrink works. It was very informative. I do have a few questions though.
First, why does the code that you provided truncate the log upon backup, prior to shrinking? I thought that if the database is in the full recovery model, the log should never be backed up "WITH TRUNCATE_ONLY" but instead should just be backed up to disk. Is my understanding incorrect?
Second, is a full database backup really required after a shrink, or can it be delayed to its next regularly scheduled run?
Third, we have offices in the US and in India, so we are up 24/7. Can I safely perform these shrinks while transactions are going on?
My last question is should a similar process be followed for the mdf file, or should I be running SHRINKDATABASE? Based on what I have read in BOL, I think I would be better off shrinking the mdf file, rather than the database so I can provide a target size specific to the mdf and the ldf respectively. Where do I begin with target size for the mdf though? It's currently at 53GB, but I'm not sure it needs to be. Thank you for your help!
Cathy
July 17, 2009 at 6:57 am
Cathy DePaolo (7/17/2009)
ALZDBA (7/17/2009)
It is indeed something you need to know about when trying to handle the files.
BOL has good info at http://msdn.microsoft.com/en-us/library/ms178037%28SQL.90%29.aspx
ALZDBA, I paid more attention to the virtual log descriptions and how the actual shrink works. It was very informative. I do have a few questions though.
First, why does the code that you provided truncate the log upon backup, prior to shrinking? I thought that if the database is in the full recovery model, the log should never be backed up "WITH TRUNCATE_ONLY" but instead should just be backed up to disk. Is my understanding incorrect?
Legacy, you know :Whistling:
I didn't write a separate script for SQL 2005.
BACKUP LOG WITH TRUNCATE_ONLY is deprecated in SQL Server 2005 and removed in SQL Server 2008 !
Second, is a full database backup really required after a shrink, or can it be delayed to its next regularly scheduled run?
You really need a full or differential backup !
I would prefer the FULL backup, to be able to support PIT restores from that moment on !
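For SQL 2005, a rough equivalent without TRUNCATE_ONLY would be something like this (just a sketch; the database name, logical log file name, target size and backup paths are placeholders):
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn'      -- back the log up to disk instead of truncating it
DBCC SHRINKFILE (MyDatabase_log, 100)                -- then shrink the log file, target size in MB
BACKUP DATABASE MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_full.bak'      -- full backup afterwards to anchor PIT restores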
Third, we have offices in the US and in India, so we are up 24/7. Can I safely perform these shrinks while transactions are going on?
If I perform a shrink of any file, I plan it and determine a timeframe with the dev teams / db user reps.
During that timeframe, I explicitly state that PIT restores are not guaranteed !
It's up to them to decide if they can live with that.
I know that is a luxury thing I still have 😉
You should be able to take regular incremental log backups to still support PIT restores during your shrink operation.
Gail may have more practical experience on that !
My last question is should a similar process be followed for the mdf file, or should I be running SHRINKDATABASE? Based on what I have read in BOL, I think I would be better off shrinking the mdf file, rather than the database so I can provide a target size specific to the mdf and the ldf respectively. Where do I begin with target size for the mdf though? It's currently at 53GB, but I'm not sure it needs to be. Thank you for your help!
Cathy
I prefer SHRINKFILE over SHRINKDATABASE, because you have more control !
Keep in mind a data file shrink operation will actually move data pages around !
Locking and I/O interference are to be expected !
How much free space is there in your 53 GB data file ?
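You can check it per file with something like this, run inside the database itself (a sketch using sys.database_files; sizes come back in MB):
SELECT  name,
        size / 128.0 AS size_mb,                                              -- size is stored in 8 KB pages
        CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS used_mb,
        size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM    sys.database_files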
Johan
July 17, 2009 at 7:06 am
To answer your free space question, I ran sp_spaceused. I'll paste the results below. Is there another procedure that will provide better information?
database_name: qa
database_size: 100098.25 MB
unallocated space: 16313.77 MB
NOTE: The database name is qa, but the database is a production database
reserved: 36916456 KB
data: 32743192 KB
index_size: 4151192 KB
unused: 22072 KB
July 17, 2009 at 2:03 pm
16% free, but it represents 16 GB.
How big is your biggest table (indexes included, if they reside in that data file) ?
Why ? Because if you rebuild that table, you may need that extra space !
Add up the space of the objects you rebuild frequently.
Shrinking that file may take a long time, so if you don't have a space shortage, leave it as it is, especially since you intend to shrink whilst India/USA is online.
Your action will be noticeable !
Johan
July 20, 2009 at 2:07 am
Ravi, our trn files hover between 100MB and 500MB every hour. If I shrink my ldf to 12GB, does this mean that I have 12GB to grow each hour, before my next backup when the log gets truncated? I guess I'm getting hung up now on the size of the ldf and being able to cope with sizable transactions.
Yes, that means you have 12 GB of space to grow every hour before your transaction log gets truncated.
Do you have any Reindex jobs that run against this database? If yes, then the transaction log may grow larger than 12 GB, depending on the size of the indexes.
I would recommend shrinking the ldf file to a reasonable size after you are done reindexing.
We do not currently have hard disk space issues on our production server. Our test server is an entirely different matter. That is actually what called out this problem to me. I tried to restore our production database to our test server and wasn't able to because too much disk space was required.
Yes, that's one of the downsides of having a huge, unwanted transaction log. Though the transaction log may be 99% free, SQL Server would still need to create the whole file during a restore.
July 20, 2009 at 3:28 am
Cathy DePaolo (7/16/2009)
I just wanted to post an update. I did shrink the log file and it is now around 48GB. I'll go through our other databases and do the same thing now.
I was also only tracking db growth on a monthly basis. My mindset was focused on what we were using the database for, and not on maintaining the database. I will bump that up to daily so I will be able to track the growth to a particular day. If anyone has any recommendations with regard to managing the size of the files, I would love to hear them, especially with how they relate to the results of DBCC SQLPERF(LogSpace). Perhaps there is a certain percentage of utilization that indicates a shrink is required? Again, any tidbits here would be of tremendous help.
Thank you again for the help.
Cathy
[Note: Ooops... I wrote this post before discovering that the topic had 3 pages, so some of this stuff is already covered, but I also have some "new" stuff, so I decided to post this anyway... 😉 ]
Well, like Ravi said, the percentage of utilization shows how much of your log space is being used right now - meaning at the time you ran DBCC SQLPERF(LogSpace). In general, this says nothing about whether you need to grow or shrink your log files, only how much of them is currently being used. The percentage will drop on file growth because more free space is added, but note also that a log backup will tend to cause a significant drop in the utilization percentage because it (at least by default) clears the log of all inactive transactions, thereby freeing up log space without actually growing the file.
More often than not, I find that this is the key to managing log file sizes under the full recovery model: if your log files are growing too large, you probably need to run your log backups more often.
As for finding the "appropriate" size for a log file... well, this can be tricky because it depends on so many variables, but one approach is to shrink the file or manually set it to a specific size, make sure autogrowth is enabled and has a reasonable growth increment set, then simply allow the file to grow to the size where it stabilizes. Leave it running for a week or so and see where it ends up. Most likely, the file size you now have is about the size you need for your database with the current level of activity and the current backup schedule, and shrinking the file below this size won't really do any good because it will only end up needing to grow again.
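If you do start tracking this daily, one simple way is to dump the DBCC output into a table you can query later (just a sketch; the table name and column layout here are examples, not anything built in):
CREATE TABLE dbo.LogSpaceHistory
(
    capture_time  datetime NOT NULL DEFAULT (GETDATE()),
    database_name sysname  NOT NULL,
    log_size_mb   float    NOT NULL,
    log_used_pct  float    NOT NULL,
    status        int      NOT NULL
)
-- Capture one snapshot of log usage for every database on the instance
INSERT dbo.LogSpaceHistory (database_name, log_size_mb, log_used_pct, status)
EXEC ('DBCC SQLPERF(LogSpace)')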
Now for the virtual log files that Ravi mentioned:
Each transaction log (.ldf) file is divided logically into smaller segments called virtual log files (VLFs). Virtual log files are the unit of truncation for the transaction log. When a VLF no longer contains log records for active transactions, it can be truncated and the space becomes available to log new transactions.
The existence of multiple VLFs is a good thing because it allows you to truncate inactive parts of your transaction log while still retaining any and all log info related to active transactions. However, too many VLFs is not a good thing because constantly having to work with hundreds of small VLFs will add significant system overhead and can cause performance issues. So, when you grow the log, you will generally want to do it in large chunks so that you can make fewer and larger VLFs. In other words, autogrowth by 10% (default in SQL Server 2000) is often not a good thing and by 1 MB (default in SQL Server 2005) is probably the worst default setting I have ever seen.
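So, if nothing else, check the growth increment on your log file and bump it up to something fixed and reasonably large, along these lines (a sketch; database name, logical file name and size are placeholders):
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_log, FILEGROWTH = 512MB)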
Managing virtual log files:
Here is what you do:
For each database on your server, run the command DBCC LOGINFO
(Syntax is "DBCC LOGINFO (DBname)" to run from master or just plain "DBCC LOGINFO" to run the command on the currently open database.)
This command will list all VLFs for the database.
And here we do have a general rule of thumb: There should be no more than 50 VLFs for a SQL Server database.
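If you just want a quick count, you can dump the output into a temp table (a sketch; the column layout below matches the SQL Server 2005 output of DBCC LOGINFO and may differ on other versions):
CREATE TABLE #vlf
(
    FileId      int,
    FileSize    bigint,
    StartOffset bigint,
    FSeqNo      int,
    Status      int,
    Parity      int,
    CreateLSN   numeric(38, 0)
)
INSERT #vlf
EXEC ('DBCC LOGINFO')
SELECT COUNT(*) AS vlf_count FROM #vlf   -- rule of thumb: keep this under roughly 50
DROP TABLE #vlf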
If you find that your database has an excessive number of VLFs, then this is what you can do to fix it:
1. Backup the log.
2. Shrink the log as small as you can.
I use the command DBCC SHRINKFILE (transaction_log_logical_filename, TRUNCATEONLY) where "transaction_log_logical_filename" is the logical file name of the .ldf, found under database properties or by running sp_helpdb on the database.
3. Manually set the log back up to the size you want it.
I use this command:
ALTER DATABASE database_name
MODIFY FILE
(
NAME = transaction_log_logical_filename
, SIZE = new_total_size
)
Please note that you'll have to fill in appropriate names and sizes specifically for each case.
This will expand the log file (.ldf) in one big chunk rather than umpteen little ones and SQL Server will automatically try to create the "optimal" number of VLFs for the total log size.
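Putting the three steps together for a hypothetical database, it might look roughly like this (a sketch; names, path and target size are examples only):
USE MyDatabase
BACKUP LOG MyDatabase
    TO DISK = N'D:\Backups\MyDatabase_log.trn'        -- 1. back up the log
DBCC SHRINKFILE (MyDatabase_log, TRUNCATEONLY)         -- 2. shrink the log file as small as it will go
ALTER DATABASE MyDatabase                              -- 3. grow it back to the size you want in one chunk
MODIFY FILE (NAME = MyDatabase_log, SIZE = 8192MB)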
Have fun... 😉
Vegard Hagen
Norwegian DBA, occasional blogger and generally a nice guy who believes the world is big enough for all of us.
@vegard_hagen on Twitter
Blog: Vegards corner (No actual SQL stuff here - haven't found my niche yet. Maybe some day...)
July 20, 2009 at 5:31 am
ALZDBA (7/17/2009)
16% free, but it represents 16 GB.
How big is your biggest table (indexes included, if they reside in that data file) ?
Why ? Because if you rebuild that table, you may need that extra space !
The largest table is a little over 13GB (indexes included).
Add up the space of the objects you rebuild frequently.
Shrinking that file may take a long time, so if you don't have a space shortage, leave it as it is, especially since you intend to shrink whilst India/USA is online.
Your action will be noticeable !
How would I go about summing up the frequently rebuilt space? Would this tell me whether or not it would be worth shrinking the data file?
Thank you for all of your help ALZDBA!
July 20, 2009 at 5:33 am
Vegard,
This was all really incredible, especially the points you made on VLF's. Thank you very much. I've received tremendous support on this topic and truly appreciate it. I will take all recommendations to heart and implement everything that I can.
Cathy
July 20, 2009 at 5:39 am
Ravi (7/20/2009)
Ravi, our trn files hover between 100MB and 500MB every hour. If I shrink my ldf to 12GB, does this mean that I have 12GB to grow each hour, before my next backup when the log gets truncated? I guess I'm getting hung up now on the size of the ldf and being able to cope with sizable transactions.
Yes, that means you have 12 GB of space to grow every hour before your transaction log gets truncated.
Do you have any Reindex jobs that run against this database? If yes, then the transaction log may grow larger than 12 GB, depending on the size of the indexes.
Yes, I have a procedure that determines whether an index should be rebuilt or reorg'd. This runs every evening. There are actually two procedures, so large tables are processed in a separate job from small tables.
I would recommend shrinking the ldf file to a reasonable size after you are done reindexing.
Does this mean you are recommending a shrink every night then? I'm not sure I could manage to take that on in the evenings and I know this should not be a scheduled job. However, if this is what is truly required, I will figure out a way to make this work. Just to sum up, every evening I should reindex, shrink the log, and then perform a full backup of the database. Is this the full recommendation?
We do not currently have hard disk space issues on our production server. Our test server is an entirely different matter. That is actually what called out this problem to me. I tried to restore our production database to our test server and wasn't able to because too much disk space was required.
Yes, that's one of the downsides of having a huge, unwanted transaction log. Though the transaction log may be 99% free, SQL Server would still need to create the whole file during a restore.
July 20, 2009 at 6:01 am
Shrinking the log every night sounds like terrible advice to me. More likely than not, the file will just have to grow again the next day and you'll only end up spending time and resources shrinking and expanding over and over and over without achieving anything useful whatsoever.
I think it's better to just leave the log file alone unless the mere size of the thing is a problem or you know it's hogging a lot of space it does not need and will not use again in the foreseeable future.
Vegard Hagen
July 20, 2009 at 9:11 am
DO NOT shrink the log after rebuilding. It's a waste of resources. The next reindex will grow it again.
The log should be sized for the peak load between log backups, including reindexing.
July 20, 2009 at 6:40 pm
Just to close the loop on this, I backed up the log and then ran DBCC SHRINKFILE with a target size of 12GB. I was able to successfully shrink the log to 12.4 GB.
I would have liked to follow Vegard's recommendation and modify the file size manually using ALTER DATABASE, but I am not allowed to restart the server. We have a scheduled reboot monthly, so I will try to coordinate this and see if I can get the number of VLFs down as well as the size of the database. After backing up the log and running the SHRINKFILE command 4 times, I was unable to reduce the number of VLFs any lower than 723 files.
Thank you to everyone that posted!
Cathy