January 12, 2010 at 5:13 pm
Hi,
I have transactional replication with two subscribers (all on SQL Server 2000). I want to update a column in a table that has 100,000,000 records, and the update will apply to every record. Will this update cause any problems for replication? (I have used SET ROWCOUNT, so I update 100,000 rows at a time, roughly the pattern sketched below.) I would appreciate it if someone could help me with this.
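For reference, a minimal sketch of the batching loop I mean, with hypothetical table and column names (dbo.BigTable, SomeColumn, 'NewValue') standing in for the real ones:

SET ROWCOUNT 100000

WHILE 1 = 1
BEGIN
    -- Only touch rows that still hold the old value, so the loop terminates
    UPDATE dbo.BigTable
    SET SomeColumn = 'NewValue'
    WHERE SomeColumn <> 'NewValue'

    -- @@ROWCOUNT is 0 once every row has been updated
    IF @@ROWCOUNT = 0 BREAK
END

-- Reset so later statements are not capped
SET ROWCOUNT 0

Each pass autocommits as its own transaction, so the log reader only ever sees 100,000-row transactions rather than one 100,000,000-row one.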
Thanks
January 12, 2010 at 8:24 pm
Replication will be quite happy with 100,000-row batches. Depending on network speed etc., it might take a while.
January 18, 2010 at 2:06 pm
I would strongly suggest creating a stored procedure and running it during off-peak hours on all servers.
I would also suggest one of the following to control the transaction log size (see the sketch after this list):
1. switch to the simple recovery model, or
2. take frequent transaction log backups (these are alternatives: log backups are not possible under the simple recovery model)
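A minimal sketch of both options, with a hypothetical database name and backup path (MyReplDB, D:\Backups\...):

-- Option 1: simple recovery model (breaks the log backup chain; take a full
-- backup if you later switch back to full recovery)
ALTER DATABASE MyReplDB SET RECOVERY SIMPLE

-- Option 2: stay in full recovery and back the log up frequently between batches
BACKUP LOG MyReplDB TO DISK = 'D:\Backups\MyReplDB_log.trn'

Note that with transactional replication, log space for replicated transactions is only freed once the Log Reader Agent has processed them, whichever option you pick.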
Thanks.
Vinay
http://rdbmsexperts.com/Blogs/
http://vinay-thakur.spaces.live.com/
http://twitter.com/ThakurVinay
January 19, 2010 at 2:03 am
You could write a stored procedure to perform the update and add it to the publication with the option set to replicate the execution of the stored procedure. If you do this, all that actually gets sent to the subscribers is the execution command for the stored procedure, not the change to each set of 100,000 rows.
That means one record in the distribution database, so the network-traffic issue I mentioned earlier won't exist. You still end up with the same rows being updated on each server.
Since you have so many rows to update, I would suggest writing the stored procedure with a parameter you can use to restrict the number of rows changed per execution. This would allow you to run the stored procedure several times, meaning less chance of one huge transaction swamping your servers. A sketch follows.
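To make that concrete, here is a minimal sketch using the same hypothetical names as above (dbo.BigTable, SomeColumn, MyPublication, and the procedure name are all placeholders):

-- A batched-update procedure whose parameter caps the rows changed per call
CREATE PROCEDURE dbo.usp_UpdateBigTableBatch
    @BatchSize int
AS
BEGIN
    SET ROWCOUNT @BatchSize
    UPDATE dbo.BigTable
    SET SomeColumn = 'NewValue'
    WHERE SomeColumn <> 'NewValue'
    SET ROWCOUNT 0
END
GO

-- Publish the procedure so its EXECUTE is replicated instead of the row changes.
-- ('serializable proc exec' is the safer type, but it only replicates calls
-- made inside a serializable transaction.)
EXEC sp_addarticle
    @publication   = 'MyPublication',
    @article       = 'usp_UpdateBigTableBatch',
    @source_object = 'usp_UpdateBigTableBatch',
    @type          = 'proc exec'

-- Run repeatedly at the publisher until a call changes no rows
EXEC dbo.usp_UpdateBigTableBatch @BatchSize = 100000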