November 8, 2011 at 8:28 am
We have a 50 GB LDF file where the norm is usually 8 GB. We did have an issue with a runaway query that increased the size of a table dramatically. I took care of the table, but the LDF file is still really large. How can we take care of it?
November 8, 2011 at 8:34 am
You can shrink the log file by issuing the following command:
DBCC SHRINKFILE ('LogFileName', 8000);
This would shrink your log file to 8 GB (the target size is given in MB). You can find more information about DBCC SHRINKFILE here: http://msdn.microsoft.com/en-us/library/ms189493.aspx
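If you're not sure of the logical log file name, you can look it up first. A minimal sketch, assuming a placeholder database name 'YourDatabase':

USE YourDatabase;
GO
-- List the logical names and current sizes of this database's files.
-- size is in 8 KB pages, so size * 8 / 1024 gives MB.
SELECT name AS logical_name,
       type_desc,
       size * 8 / 1024 AS size_mb
FROM sys.database_files;
GO
-- Then shrink the log file by its logical name (target size in MB)
DBCC SHRINKFILE ('YourDatabase_log', 8000);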
November 8, 2011 at 11:57 am
If you have a database with the Bulk-Logged or Full recovery model enabled, you should schedule transaction log backups to run regularly; this will help keep the log clean, so to speak. If you haven't already set this up, you should do so today.
Also keep in mind that sometimes there are things going on in your log file (like transaction rollbacks) that will cause it to grow until they have completed. When this happens, issuing a DBCC SHRINKFILE will do nothing.
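As a rough sketch of such a scheduled job, something like this could run every 15-30 minutes (the database name and backup path are placeholders):

-- Back up the transaction log; once backed up, the inactive portion
-- of the log can be reused, which keeps the LDF from growing unchecked.
BACKUP LOG YourDatabase
TO DISK = N'X:\Backups\YourDatabase_log.trn'
WITH CHECKSUM;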
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.
November 8, 2011 at 12:02 pm
MyDoggieJessie (11/8/2011)
If you have a database with the Bulk-Logged or Full recovery model enabled, you should schedule transaction log backups to run regularly; this will help keep the log clean, so to speak. If you haven't already set this up, you should do so today. Also keep in mind that sometimes there are things going on in your log file (like transaction rollbacks) that will cause it to grow until they have completed. When this happens, issuing a DBCC SHRINKFILE will do nothing.
How do I check whether I have Bulk-Logged or Full logging enabled?
November 8, 2011 at 12:06 pm
Bulk-Logged/Full is the recovery model of your database. You can set it by going to the Database Properties dialog and clicking Options.
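You can also check it from T-SQL; for example:

-- Recovery model for every database on the instance
SELECT name, recovery_model_desc
FROM sys.databases
ORDER BY name;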
November 8, 2011 at 12:12 pm
If your LDF files are growing and not going back to their respective sizes over time, you definitely have one of the two enabled. In fact, Full is the default when a database is created.
Here is a query you can use to monitor the sizes of your LDF files. It will also show you other useful information about your log files...
SELECT db.[name] AS [Database Name],
       db.recovery_model_desc AS [Recovery Model],
       db.log_reuse_wait_desc AS [Log Reuse Wait Description],
       ls.cntr_value AS [Log Size (KB)],
       lu.cntr_value AS [Log Used (KB)],
       CAST(CAST(lu.cntr_value AS FLOAT) / CAST(ls.cntr_value AS FLOAT) * 100
            AS DECIMAL(18, 2)) AS [Log Used %],
       db.[compatibility_level] AS [DB Compatibility Level],
       db.page_verify_option_desc AS [Page Verify Option]
FROM master.sys.databases AS db
     INNER JOIN master.sys.dm_os_performance_counters AS lu
         ON db.name = lu.instance_name
     INNER JOIN master.sys.dm_os_performance_counters AS ls
         ON db.name = ls.instance_name
WHERE db.state = 0
  AND lu.counter_name LIKE 'Log File(s) Used Size (KB)%'
  AND ls.counter_name LIKE 'Log File(s) Size (KB)%'
ORDER BY ls.cntr_value DESC;
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.
November 8, 2011 at 12:19 pm
Awesome query! It says the recovery model is SIMPLE. What does that mean?
November 8, 2011 at 12:34 pm
Simple recovery does not back up the transaction log. This saves space, with the trade-off that your recovery point after a disaster is the time of the last full (or differential) backup. Full ensures that you can recover to the specific point before the failure, as you're also backing up the transaction log. Here's a quick rundown of the recovery models:
SIMPLE: the log is truncated automatically at each checkpoint; no log backups, no point-in-time recovery.
FULL: all operations are fully logged; log backups are required, and point-in-time recovery is possible.
BULK_LOGGED: like FULL, but bulk operations are minimally logged; point-in-time recovery is limited for intervals containing bulk operations.
November 8, 2011 at 12:36 pm
SIMPLE is the "Recovery Model" of the database. A database can operate in a few different modes, or recovery models. The three possibilities are SIMPLE, FULL, and BULK_LOGGED.
A database management system needs to track all the operations, or transactions, taking place over time in order to ensure consistency. For example, two users cannot change the same thing at the same time, so in essence the database records all these operations in a log, in the order they occur. This log is your "transaction log", which is stored in the "transaction log file": the .ldf file.
The recovery model describes how the database system manages that log file. Under the SIMPLE recovery model, the database writes the operations to the data file (the .mdf) as soon as the rules allow, and then discards the entries from the log. Under most circumstances, this means that your log file stays quite small. The trade-off is that you cannot use those logs as a recovery option: you cannot take backups of them and use them to "roll forward" from the most recent full backup.
The alternative is the FULL recovery model. In this mode, we keep the transaction log records by backing them up to backup files. This means that we can recover to any point in time by restoring backups of our transaction logs.
In your case, your log file grew large even though you are in the simple recovery model. That means that "the rules" prevented the system from writing the entries in the log to the data file so they could be discarded. This usually means you had a long-running transaction that prevented the log from being truncated at the database's checkpoints (a checkpoint is when the log is flushed to the data file, after which the inactive portion can be reused).
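If you want to see what is holding the log up right now, a quick sketch (run it in the affected database):

-- What is currently preventing log truncation for this database?
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = DB_NAME();
GO
-- Report the oldest active transaction in the current database, if any
DBCC OPENTRAN;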
This is good information for further reading: http://msdn.microsoft.com/en-us/library/ms189275.aspx
November 8, 2011 at 12:40 pm
Awesome information! Given that we have the simple recovery model, would it be safe then to shrink the LDF? It's 50 GB, which seems really large, especially based on the information you've shared.
November 8, 2011 at 12:52 pm
mbender (11/8/2011)
Awesome information! Given that we have the simple recovery model, would it be safe then to shrink the LDF? It's 50 GB, which seems really large, especially based on the information you've shared.
Yes, it is safe to do it.
In the Simple recovery model, the t-log should get automatically truncated on checkpoint, meaning there is no reason to keep t-log information after a transaction completes.
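So in practice you can force a checkpoint first and then shrink; a sketch, with the database and logical file names as placeholders:

USE YourDatabase;
GO
-- Force a checkpoint so the inactive portion of the log is freed first
CHECKPOINT;
GO
-- Then shrink the log file back toward its normal size (8 GB here)
DBCC SHRINKFILE ('YourDatabase_log', 8000);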
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
November 8, 2011 at 12:54 pm
Yep, you can.
If you're ever worried about data loss, take a full or differential backup before running the DBCC command.
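For example (database and path names are placeholders):

-- Take a full backup first so you have a known restore point
BACKUP DATABASE YourDatabase
TO DISK = N'X:\Backups\YourDatabase_full.bak'
WITH CHECKSUM;
GO
DBCC SHRINKFILE ('YourDatabase_log', 8000);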
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience.
November 8, 2011 at 12:59 pm
OK, the easy answer is "yes, it's probably safe to shrink that file"... BUT I want you to really understand what you are doing.
There is some reason the file grew to 50 GB. Usually we say leave the file as big as it grows: if it got that big, it will probably get that big again. If we keep growing and shrinking, all we are doing is creating fragmentation on the disk. Now, if something unusual or undesirable happened that caused the file to grow so big, and you don't expect it to happen again, then shrink it. If some normal condition caused it to grow this big (maybe index maintenance, or something like that), then we are better off just leaving it at 50 GB.
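If you do shrink it, one way to limit future fragmentation is to give the log a fixed growth increment so any regrowth happens in fewer, larger steps. A sketch, assuming the placeholder logical name 'YourDatabase_log':

-- After shrinking, grow in fixed 512 MB steps instead of
-- many small autogrow events that fragment the file.
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = 'YourDatabase_log', FILEGROWTH = 512MB);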
November 8, 2011 at 1:01 pm
Awesome, thanks all! As usual, a great source of help and information.