September 11, 2006 at 9:40 am
I am trying to run an update on a table that has approximately 4 million rows. To keep the transaction log from blowing up, I was thinking of running the update in batches. Does anyone have code for this, or has anyone done something similar? Any help will be greatly appreciated.
TIA
September 11, 2006 at 10:11 am
You could do it in batches. You could also have a job running (perhaps every minute) to truncate the log.
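A job step for that might look something like this (a minimal sketch using the old SQL Server 2000-era syntax; MyDB is a placeholder name, and be aware that truncating the log this way breaks the log backup chain until the next full backup):

-- Hypothetical job step: discard the inactive portion of the log
-- without backing it up (deprecated in later versions of SQL Server).
BACKUP LOG MyDB WITH TRUNCATE_ONLY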
******************
Dinakar Nethi
Life is short. Enjoy it.
******************
September 11, 2006 at 10:21 am
Sorry, but I can't truncate the log when this needs to be run in a production environment. Also, I was looking to see whether someone has written code for this before.
TIA
September 11, 2006 at 10:32 am
Does your table have an identity column or a datetime column that can be used? If so, you can easily write code to do the update in chunks, and you can set it up as a job as well.
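For example, something along these lines (just a rough sketch; MyTable, ID, and SomeCol are made-up names, and the batch size is arbitrary):

-- Walk an identity column in fixed-size ranges so each UPDATE is
-- a small transaction of its own.
DECLARE @MinID int, @MaxID int, @BatchSize int
SELECT @MinID = MIN(ID), @MaxID = MAX(ID) FROM MyTable
SET @BatchSize = 50000

WHILE @MinID <= @MaxID
BEGIN
    UPDATE MyTable
    SET SomeCol = 'new value'
    WHERE ID >= @MinID AND ID < @MinID + @BatchSize

    SET @MinID = @MinID + @BatchSize
END

Each pass through the loop is its own transaction, so the log can be backed up (and its space reused) between batches.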
Thanks
Sreejith
September 11, 2006 at 10:40 am
Unfortunately, my table doesn't have an identity column, so I'm trying to use SET ROWCOUNT, but that doesn't seem to work. Any thoughts or ideas?
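For reference, the usual SET ROWCOUNT pattern is sketched below (MyTable and SomeCol are placeholder names). It only terminates if the WHERE clause can exclude rows that have already been updated; if it can't tell them apart, the loop keeps hitting the same rows forever, which may be why it seems not to work:

SET ROWCOUNT 10000   -- limit each UPDATE to 10,000 rows

WHILE 1 = 1
BEGIN
    UPDATE MyTable
    SET SomeCol = 'new value'
    WHERE SomeCol <> 'new value'   -- must exclude already-updated rows

    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0   -- always reset, or it affects later statements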
Thanks
September 11, 2006 at 10:47 am
Can you export the primary keys into a temp table that has an identity column? Once that table is populated, you can use it as a lookup table and update the data in chunks.
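Something like this sketch, for instance (MyTable, PKCol, and SomeCol are placeholder names, and the batch size is arbitrary):

-- Stage the primary keys alongside a generated identity, then
-- update the base table in ranges of that identity.
SELECT IDENTITY(int, 1, 1) AS RowNum, PKCol
INTO #Keys
FROM MyTable

DECLARE @Row int, @MaxRow int, @BatchSize int
SELECT @MaxRow = MAX(RowNum) FROM #Keys
SELECT @Row = 1, @BatchSize = 50000

WHILE @Row <= @MaxRow
BEGIN
    UPDATE t
    SET t.SomeCol = 'new value'
    FROM MyTable t
    JOIN #Keys k ON k.PKCol = t.PKCol
    WHERE k.RowNum >= @Row AND k.RowNum < @Row + @BatchSize

    SET @Row = @Row + @BatchSize
END

DROP TABLE #Keys

An index on #Keys (RowNum, PKCol) would help the join if the table is large.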
Thanks
Sreejith
September 11, 2006 at 5:54 pm
...and, still, if you have FULL RECOVERY set, none of that will help the LOG file, because all of the updates will be recorded whether they're single updates, batches, or one big update. The only thing you'll really gain is NOT locking the whole table during the updates.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 13, 2006 at 3:20 am
Yes, but if you create a job to truncate the log every minute (for example), or to truncate it when it gets to 60% full, then you have a fighting chance of the log being truncated at least once before all the updates are completed.
John
September 13, 2006 at 5:00 am
Better make sure you do a backup before you truncate the log, or you won't be able to recover to a point in time, which is the purpose of Full Recovery....
Personally, I wouldn't worry about the log... let it do its job and grow. When you're all done, do a backup and then shrink the log file if you find it necessary.
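For instance (a sketch only; the database name, logical log file name, and backup path are placeholders):

-- Once the updates are finished: back up the log, then shrink the
-- log file back down if it grew more than you can live with.
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
DBCC SHRINKFILE (MyDB_Log, 500)   -- logical file name, target size in MB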
--Jeff Moden
Change is inevitable... Change for the better is not.
September 13, 2006 at 5:07 am
I believe your question was answered here:
http://www.sqlservercentral.com/forums/shwmessage.aspx?forumid=8&messageid=286040#bm286129
Even the names of the topics are quite similar.
_____________
Code for TallyGenerator
September 13, 2006 at 5:10 am
Jeff
When I used the word "truncate", I was assuming that the original poster would make the correct choice as to whether to back up the log or just truncate it. Sometimes the transaction log is on a disk that isn't big enough to allow it to grow very much.
John
September 13, 2006 at 7:19 pm
John,
Understood... I just don't make those same assumptions... if the requester knew about those types of things, they might not have needed to ask the original question.
--Jeff Moden
Change is inevitable... Change for the better is not.