September 13, 2016 at 8:35 am
According to dm_io_virtual_file_stats.num_of_bytes_written, a 5.8 GB database wrote 1 GB of data over a 16-hour period. Since the instance started last year, the same database file has had 12 TB written across 1.5 billion num_of_writes.
Combining the trace and other data, I've found one problem table generating over 48 MB/hour of pointless updates (the application constantly appends an error to a column, so rows now contain 1 MB of the same repeated line). Is there an easier way to find out which statements or tables are generating the most redo, though?
The Extended Events data has a "writes" column (taken from https://www.brentozar.com/archive/2014/04/collecting-detailed-performance-measurements-extended-events/), much like sys.dm_db_index_usage_stats, but seemingly nothing on redo/bytes.
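For reference, something like this is the query I'm running against the DMV (a rough sketch; the join to sys.master_files is only there to pick up file names):

-- Sketch: cumulative write volume per database file since the instance
-- started (counters reset on restart); filter on database_name to narrow it.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.name AS file_name,
        vfs.num_of_writes,
        vfs.num_of_bytes_written,
        vfs.num_of_bytes_written / 1048576.0 AS mb_written
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id = vfs.file_id
ORDER BY vfs.num_of_bytes_written DESC;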
September 13, 2016 at 8:40 am
Redo? The only time you get a redo (roll forward) is during crash recovery (after opening a DB/starting SQL Server) or restoring a backup.
Can you explain in more detail what you're looking for?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 13, 2016 at 9:09 am
GilaMonster (9/13/2016)
Redo? The only time you get a redo (roll forward) is during crash recovery (after opening a DB/starting SQL Server) or restoring a backup. Can you explain in more detail what you're looking for?
From 12:30 until now, the following numbers of writes/updates have occurred:
DATAFILE_BYTES    342,548,480
TAB1              58,404
DATAFILE_WRITES   20,189
TAB2              7,343
TAB3              6,543
Table info is taken from sys.dm_db_index_usage_stats.user_updates; data file info from sys.dm_io_virtual_file_stats(NULL, NULL). Although TAB1 has more individually counted writes from index_usage_stats (it has a ridiculous number of unused indexes), it seems TAB2 is the bigger issue (updating a 1 MB value with 1 MB plus a bit more).
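The per-table counts come from something along these lines (a rough sketch, current database only):

-- Sketch: total user updates per table, summed across all of its indexes.
SELECT  OBJECT_NAME(ius.object_id) AS table_name,
        SUM(ius.user_updates) AS user_updates
FROM    sys.dm_db_index_usage_stats AS ius
WHERE   ius.database_id = DB_ID()
GROUP BY OBJECT_NAME(ius.object_id)
ORDER BY SUM(ius.user_updates) DESC;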
How can I see how much logging or block changes a DML statement has caused?
E.g. DELETE FROM TAB1 WHERE colA = 4; -- may generate 200 KB of logging despite lots of indexes
while UPDATE TAB2 SET colB = '<stupid length string>'; -- generates a lot more
I can SUM/AVG the number of transactions/writes, but not the cost.
Edit: Never mind, I'm completely blind. The Extended Events "writes" column shows what I'm looking for.
Edit 2: Now I'm unsure. The sum of the total writes is only 81,847 over a 3-hour period.
September 13, 2016 at 9:39 am
sys.dm_tran_database_transactions gives information about how much log space was used for a transaction.
Extended events writes should give you total writes, if the correct events are selected (sql vs sp); sys.dm_io_virtual_file_stats should give totals per file.
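For example, something along these lines (a sketch only; TAB2/colB are the names from your post) shows the log generated by a single statement while its transaction is still open:

-- Sketch: measure the transaction log generated by one DML statement.
BEGIN TRANSACTION;

UPDATE TAB2 SET colB = '<stupid length string>'; -- the statement to measure

SELECT  tdt.transaction_id,
        tdt.database_transaction_log_record_count AS log_records,
        tdt.database_transaction_log_bytes_used AS log_bytes_used
FROM    sys.dm_tran_database_transactions AS tdt
JOIN    sys.dm_tran_session_transactions AS tst
        ON tst.transaction_id = tdt.transaction_id
WHERE   tst.session_id = @@SPID
        AND tdt.database_id = DB_ID();

ROLLBACK TRANSACTION; -- or COMMIT, once you've noted the numbers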
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 13, 2016 at 9:57 am
GilaMonster (9/13/2016)
sys.dm_tran_database_transactions gives information about how much log space was used for a transaction. Extended events writes should give you total writes, if the correct events are selected (sql vs sp); sys.dm_io_virtual_file_stats should give totals per file.
Do you know what the 81,847 writes (doing SUM(writes) on Brent's table) from Extended Events equates to, though? It can't be KB or MB, or else that or sys.dm_io_virtual_file_stats is incorrect, since it wouldn't match the 100 MB+/hour that num_of_bytes_written has been showing.
September 13, 2016 at 10:33 am
Pages (which are 8 KB in size). Most reads/writes shown by SQL Server, unless specified as being in KB or bytes, will be in pages.
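Taking your numbers as a worked example: 81,847 writes × 8 KB per page ≈ 654,776 KB, or roughly 640 MB over the 3-hour window, i.e. a bit over 200 MB/hour.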
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 13, 2016 at 11:05 am
GilaMonster (9/13/2016)
Pages (which are 8 KB in size). Most reads/writes shown by SQL Server, unless specified as being in KB or bytes, will be in pages.
Thanks, I thought that's what it was after posting (although I couldn't remember "page" for the life of me). A 3 MB column update!