April 1, 2020 at 8:22 pm
To figure out why your log file grew so much, you may be able to look at the default trace to see what caused the growths:
-- find the path to the current default trace file
DECLARE @path NVARCHAR(260);
SELECT @path = REVERSE(SUBSTRING(REVERSE([path]), CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
FROM sys.traces
WHERE is_default = 1;
-- pull the file-growth events; IntegerData is the growth in 8KB pages
SELECT td.DatabaseName, td.Filename, te.name AS Event, (td.IntegerData*8)/1024 AS Change_MB,
td.StartTime, td.EndTime, td.LoginName, td.HostName, td.ApplicationName,
td.spid, td.ClientProcessID, td.IsSystem, td.SqlHandle, td.TextData
FROM sys.fn_trace_gettable(@path, DEFAULT) td
INNER JOIN sys.trace_events te ON td.EventClass = te.trace_event_id
WHERE td.EventClass IN (92, 93) -- 92 = data file growth, 93 = log file growth
ORDER BY td.StartTime;
You can of course add WHERE clause criteria to filter for just the DatabaseName or Filename you want. Note: EventClass 92 = data file growth and EventClass 93 = log file growth, so you can filter on that as well.
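For example, to see only the log growths for a single database (reusing @path from above - the database name here is a placeholder, substitute your own):
SELECT td.DatabaseName, td.Filename, (td.IntegerData*8)/1024 AS Change_MB, td.StartTime
FROM sys.fn_trace_gettable(@path, DEFAULT) td
WHERE td.EventClass = 93 -- log file growth only
AND td.DatabaseName = N'YourDatabase' -- placeholder name
ORDER BY td.StartTime;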
Another thing I noticed in the results you've shown me: you have the files set to percent growth instead of a fixed-size growth. This means that when your 247 GB log file needs to grow, it will grow by roughly 24 GB in one shot, which may not be what you intended.
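If you want to switch to a fixed-size growth, something along these lines should do it (a sketch only - the database name, logical file name, and 8 GB increment are placeholders, so check yours in sys.database_files first):
-- check the current settings; is_percent_growth = 1 means percent-based growth
SELECT name, type_desc, size*8/1024 AS size_mb, growth, is_percent_growth
FROM YourDatabase.sys.database_files;
-- change the log file to a fixed growth increment
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', FILEGROWTH = 8GB);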
April 1, 2020 at 8:30 pm
With that much free space it should be able to shrink some after a transaction log backup. Unfortunately I don't know a way to determine which VLFs in the transaction log are in use. It also looks odd that it says file_id = 1, since the log file is normally file_id 2.
You can use DBCC LOGINFO or sys.dm_db_log_info to determine which VLFs are available - as long as that database is not part of an AG. The Status column will be 2 for any VLF that is in use - and 0 if it is not.
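For example (a quick sketch - sys.dm_db_log_info needs SQL Server 2016 SP2 or later, and the database name is a placeholder):
-- newer versions: one row per VLF; vlf_status 2 = in use, 0 = reusable
SELECT file_id, vlf_begin_offset, vlf_size_mb, vlf_status
FROM sys.dm_db_log_info(DB_ID(N'YourDatabase'));
-- older versions (e.g. 2014): run this in the database you care about
-- (Status column: 2 = in use, 0 = reusable)
DBCC LOGINFO;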
So the file size of my log file being 253 GB is nothing to worry about(?). The total size of the drive is 1 TB.
I would not say there is nothing to worry about - there are issues related to how the log file grew to that size. If you have too many VLFs, or too few, there can be various problems. But as an absolute size - no, there isn't really any issue with how large the file is, provided the file grew at a fixed size in MB. If the growth setting is a percentage, then you definitely have some issues - not only with VLFs but also with file fragmentation.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 1, 2020 at 9:05 pm
Jeffrey Williams wrote: You can use DBCC LOGINFO or sys.dm_db_log_info to determine which VLFs are available - as long as that database is not part of an AG. The Status column will be 2 for any VLF that is in use - and 0 if it is not.
Excellent thanks! I'd know more about sys.dm_db_log_info if my server wasn't still 2014 😉
April 1, 2020 at 9:37 pm
Jeffrey Williams wrote: You can use DBCC LOGINFO or sys.dm_db_log_info to determine which VLFs are available - as long as that database is not part of an AG. The Status column will be 2 for any VLF that is in use - and 0 if it is not.
Excellent thanks! I'd know more about sys.dm_db_log_info if my server wasn't still 2014 😉
Yep - back to good ole DBCC LOGINFO...
I still wish they would figure out how to provide the correct status when the database is part of an AG.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 1, 2020 at 10:36 pm
Actually, there IS a problem with having a large log file, and that's the sometimes huge amount of extra time a RESTORE will take. Restoring a large log file is nearly as bad as formatting a disk: log files can't use instant file initialization, so the restore has to zero the entire file and build all those wonderful VLFs before the database will come back online.
You also have to remember that your restore may be to a smaller machine in a serious DR situation, if the proverbial poo hits the fan.
And, finally, contrary to popular belief, disk space is not free, either on-premises or in the cloud. There are a lot of temporary, ancillary uses for such a large piece of disk. I broke people's antlers at work to get the DBAs a "private, never store anything permanent, DBA ONLY" work space to do things like rebuilding 200 GB clustered indexes without having the PRIMARY MDF file blow up, or making a temporary copy of a table in a temporary new filegroup when someone is getting ready to do something big. If you don't have such a thing where you work, that extra 200 GB or so can come in mighty handy in a pinch.
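If you do decide to bring that log back down to size first, here's a minimal sketch along the lines of the earlier shrink discussion (the database and logical file names are placeholders, the target sizes are examples only, and you'll want a log backup first so the tail VLFs are reusable):
USE YourDatabase;
-- shrink the log toward a small target size (in MB)
DBCC SHRINKFILE (N'YourDatabase_log', 8192);
-- then regrow it to the size you actually need, in fixed increments
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = N'YourDatabase_log', SIZE = 65536MB, FILEGROWTH = 8GB);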
--Jeff Moden
Change is inevitable... Change for the better is not.