December 22, 2011 at 9:49 am
No, David, unfortunately we don't have any third-party tool like Redgate 🙁
December 22, 2011 at 9:58 am
islas.tonatiuh (12/22/2011)
One more question, Sir: what's the interpretation of the operation and context columns retrieved from the query? I ran it, but how can I use those results? Because I saw:
e.g.
dbo.TransactionLog.PK_TransactionLog   LOP_MODIFY_ROW    LCX_TEXT_MIX
dbo.TransactionLog.PK_TransactionLog   LOP_MODIFY_ROW    LCX_TEXT_TREE
dbo.TransactionLog.PK_TransactionLog   LOP_SET_BITS      LCX_GAM
dbo.TransactionLog.PK_TransactionLog   LOP_SET_BITS      LCX_IAM
sys.sysallocunits.clust                LOP_COUNT_DELTA   LCX_CLUSTERED
Personally, fn_dblog would be my last option for locating operations, because of the vast volume of log entries (a single row insert can easily generate several log entries) and the complete lack of documentation.
Those rows modify a LOB (Large Object) column in the table TransactionLog and then alter some allocation pages. That's likely part of an update or insert statement (though it's hard to be sure).
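For reference, those columns come straight out of fn_dblog. A minimal sketch of how its output can be aggregated to see which allocation units and operations account for the most log volume (fn_dblog is undocumented, so treat the column names as subject to change):
SELECT [AllocUnitName], [Operation], [Context],
       COUNT(*) AS LogRecords,
       SUM([Log Record Length]) AS TotalLogBytes
FROM fn_dblog(NULL, NULL)  -- NULL, NULL = read the entire active log
GROUP BY [AllocUnitName], [Operation], [Context]
ORDER BY TotalLogBytes DESC;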
By the way, the log entries for whatever caused that 4GB log backup won't be in the transaction log any longer; they'd have been marked inactive by that 4GB log backup and likely overwritten by now, so looking through the active transaction log is not going to help in this case.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 22, 2011 at 10:09 am
I asked the owner of the application and he's not aware of any heavy workload or massive deletions on the server. It's probably happening as you said, guys, but he's not a good lead to follow up on.
December 22, 2011 at 11:14 am
My guess would be index rebuilds. They're usually the culprits for large log growth/large log backups, especially in the scenario you gave where the log backup is over half the size of the DB.
I've seen apps that have index rebuilds called from stored procs, and I've heard of ones that allow a rebuild to be executed from the app. Plus, if you have junior DBAs or developers with server access, they could have rebuilt indexes without thinking of the consequences.
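As a rough illustration (using the TransactionLog table from the earlier output purely as an example, not as the confirmed culprit), a single statement like this is fully logged under the FULL recovery model and can by itself generate log roughly the size of the indexes being rebuilt:
-- Fully logged under the FULL recovery model; the next log backup will carry all of it
ALTER INDEX ALL ON dbo.TransactionLog REBUILD;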
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 22, 2011 at 11:48 am
Check your server's maintenance plans to see what may have been scheduled.
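If those plans run through SQL Server Agent, a quick look at the job history around the time of the big log backup can also narrow it down. A sketch against the msdb job tables (the 7-day filter is just an example, adjust to suit):
SELECT j.[name] AS JobName,
       h.run_date, h.run_time, h.run_duration, h.run_status
FROM msdb.dbo.sysjobhistory h
INNER JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
WHERE h.step_id = 0  -- job outcome rows only
  AND h.run_date >= CONVERT(INT, CONVERT(CHAR(8), DATEADD(DAY, -7, GETDATE()), 112))  -- last 7 days
ORDER BY h.run_date DESC, h.run_time DESC;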
Also, if there was something like an index rebuild, then a query of the default trace may reveal what, when, and who. The event should still be in the trace log, assuming the default trace is running (it is by default) and the server hasn't been rebooted recently. In the example below, I've set a filter on trace events created in the last 24 hours.
DECLARE @tracefile NVARCHAR(256);
SET @tracefile = (SELECT LEFT([path],LEN([path])-CHARINDEX('\',REVERSE([path])))+ '\log.trc' FROM sys.traces WHERE [is_default] = 1);
SELECT
gt.[ServerName]
,gt.[DatabaseName]
,gt.[SPID]
,gt.[StartTime]
,gt.[ObjectName]
,gt.[objecttype] [ObjectTypeID]
,sv.[subclass_name] [ObjectType]
,e.[category_id] [CategoryID]
,c.[Name] [Category]
,gt.[EventClass] [EventID]
,e.[Name] [EventName]
,gt.[LoginName]
,gt.[ApplicationName]
,gt.[TextData]
FROM fn_trace_gettable(@tracefile, DEFAULT) gt
LEFT JOIN sys.trace_subclass_values sv ON gt.[eventclass] = sv.[trace_event_id] AND sv.subclass_value = gt.[objecttype]
INNER JOIN sys.trace_events e ON gt.[eventclass] = e.[trace_event_id]
INNER JOIN sys.trace_categories c ON e.[category_id] = c.[category_id]
WHERE gt.[spid] > 50
AND gt.[objecttype] <> 21587 --Statistics
AND gt.[databasename] <> 'tempdb'
AND gt.[starttime] > dateadd(hour,-24,GETDATE());
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
December 22, 2011 at 4:48 pm
OK, got it. I'm going to do that and come back with more details, thanks.
December 22, 2011 at 5:47 pm
I ran the last query without any results 🙁 Still no idea about what is causing the size of the transaction log backup.
December 23, 2011 at 8:36 am
islas.tonatiuh (12/22/2011)
I ran the last query without any results 🙁 Still no idea about what is causing the size of the transaction log backup.
You're saying the query against the default trace returned no results? Perhaps that trace has been disabled for some reason. You may want to consult with the other DBAs about enabling the default trace, because it can be very useful when researching situations like this. It would have told you if and when DBCC or ALTER TABLE / ALTER INDEX commands had been executed.
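A quick way to confirm whether the default trace is on, and to turn it back on if it has been switched off ('default trace enabled' is an advanced option, so 'show advanced options' needs to be on first):
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled';   -- run_value = 1 means it's enabled
-- To re-enable it if it's off:
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;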
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
December 28, 2011 at 12:33 pm
Hi guys,
Two days ago I shrank the log and was able to reclaim some space. After that I noticed the transaction log backup was reduced dramatically, by almost 90% (the previous day it was around 3 GB, now 300 KB), and then tonight it happened again, the same size, 3 GB. I also set up a trace (with SQL Server Profiler) and I'll check what's going on around that hour.
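If it helps to line the Profiler trace up with the backups, the log backup times and sizes are recorded in msdb. A sketch (the database name is just a placeholder):
SELECT database_name,
       backup_start_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE [type] = 'L'                     -- log backups only
  AND database_name = 'YourDatabase'   -- placeholder: put the real database name here
ORDER BY backup_start_date DESC;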