May 28, 2013 at 1:58 pm
How can I tell if transactions are written as 'all or nothing'? I realize they are ACID, but for example, if there is an update that is going to affect a million rows and the transaction is not broken down into, say, 100,000-row chunks, isn't it going to take longer to do the update all in one gulp instead of breaking it into ten units of 100K each?
Follow-up question: is it best to do that chunking in a loop based on some increasing, indexed id value, or failing that, maybe using values gleaned from DBCC SHOW_STATISTICS to determine an appropriate chunk size for each transaction?
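To illustrate what I mean by the id-based approach, here is a rough sketch (the table name, column names, and batch size are just placeholders I made up):

DECLARE @BatchSize int = 100000;
DECLARE @LastId    int = 0;
DECLARE @MaxId     int = (SELECT MAX(Id) FROM dbo.BigTable);

WHILE @LastId < @MaxId
BEGIN
    -- each range commits as its own transaction; ranges may hold fewer
    -- than @BatchSize rows if there are gaps in the Id values
    UPDATE dbo.BigTable
    SET    SomeColumn = 'NewValue'
    WHERE  Id >  @LastId
      AND  Id <= @LastId + @BatchSize;

    SET @LastId = @LastId + @BatchSize;
END;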
thanks a lot
May 28, 2013 at 2:46 pm
drew.georgopulos (5/28/2013)
How can I tell if transactions are written as 'all or nothing'? I realize they are ACID, but for example, if there is an update that is going to affect a million rows and the transaction is not broken down into, say, 100,000-row chunks, isn't it going to take longer to do the update all in one gulp instead of breaking it into ten units of 100K each? Follow-up question: is it best to do that chunking in a loop based on some increasing, indexed id value, or failing that, maybe using values gleaned from DBCC SHOW_STATISTICS to determine an appropriate chunk size for each transaction?
thanks a lot
Let's be very clear here. Transactions are not written; they are either committed or rolled back. Any given transaction is ALWAYS "all or nothing". You can't commit or roll back part of a transaction.
For your example of a million rows, it is likely that breaking it into chunks will ease the pressure on the transaction log, resulting in faster completion. That does, however, introduce the possibility of some of those rows being updated and others not, because you would now have multiple transactions. Each of those 100,000-row transactions is still "all or nothing". If you need the whole batch to be "all or nothing", you should not break it into multiple chunks.
The most common form of chunking is a loop-based process. Often it is done by having the body of the loop update the TOP (n) qualifying rows and checking @@ROWCOUNT to decide when to stop.
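A minimal sketch of that pattern (the table, column, filter, and batch size below are placeholders, not anything specific to your system):

DECLARE @BatchSize int = 100000;
DECLARE @Rows int = 1;

WHILE @Rows > 0
BEGIN
    -- explicit transaction per chunk for clarity; each UPDATE would
    -- autocommit on its own anyway
    BEGIN TRANSACTION;

    -- only touch rows that still need the change, otherwise TOP keeps
    -- picking the same rows forever
    UPDATE TOP (@BatchSize) dbo.SomeTable
    SET    SomeColumn = 'NewValue'
    WHERE  SomeColumn <> 'NewValue';

    SET @Rows = @@ROWCOUNT;   -- 0 once nothing is left to update

    COMMIT TRANSACTION;
END;

Each pass through the loop commits on its own, which is what relieves the pressure on the log.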
May 28, 2013 at 5:28 pm
Yes, thanks.
I understand each 100K chunk is a new all-or-nothing transaction, and that there would be ten of them instead of one big one.
Thanks again