April 28, 2008 at 6:19 pm
This tickled my brain cells a bit. I broke up my publication from one into five, making it easier to recover when only one table/article fails, rather than resending all five tables/articles.
I have two tables: one has 1.3 million rows and the other has 74 million rows.
The 74-million-row table snapshotted completely in under 5 minutes, and the snapshot was delivered to its subscriber in roughly 3 minutes.
The table with 1.3 million rows, however, took over 45 minutes to snapshot, and over an hour and a half (1:30) to deliver to a subscriber ...
Ahh crap, I just figured it out.
The 1.3-million-row table is 37 GB, while the 74-million-row table is only 6 GB; that's why the bigger row count moves faster.
I guess sometimes it's good to type it out on a forum and the answer comes to you. hahahaha
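For anyone who hits the same thing: a quick way to compare row count against on-disk size is sp_spaceused (the table name below is just a placeholder for one of your articles):

-- Reports rows, reserved, data, and index sizes for one table
EXEC sp_spaceused N'dbo.YourBigTable';

Snapshot time tracks data size, not row count, so this tells you up front which articles will be slow.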
-----------------------------
www.cbtr.net
.: SQL Backup Admin Tool :.
April 29, 2008 at 7:50 am
Yup - size does matter.
BLOB fields cause this problem a lot.
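If you want to find out up front which published tables carry LOB columns, something along these lines works on SQL 2005 and later (it only reads the system catalog, so nothing here is specific to your schema):

-- Lists tables with LOB columns (text/ntext/image/xml and the (max) types),
-- which tend to dominate snapshot size and delivery time
SELECT t.name AS table_name, c.name AS column_name, ty.name AS data_type
FROM sys.tables t
JOIN sys.columns c ON c.object_id = t.object_id
JOIN sys.types ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('text', 'ntext', 'image', 'xml')
   OR (ty.name IN ('varchar', 'nvarchar', 'varbinary') AND c.max_length = -1);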
May 2, 2008 at 11:03 am
Remember that if you split your table articles across multiple publications, there should be no hierarchy overlap; e.g., don't put Customers in pub1, Orders in pub2, and OrderDetails in pub3.
That will [eventually!] give you a race condition, since independent Distribution Agents can deliver a child row before its parent, unless you create the publications with
sp_addpublication @independent_agent = 'false'
which means the same Distribution Agent will serve all of pub1-pub3 (a minimal sketch follows).
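A minimal sketch of that setup (publication and database names are made up; @independent_agent is the real sp_addpublication parameter):

USE YourPublishedDb;  -- hypothetical published database
GO
-- Repeat for pub2 and pub3: creating every publication with
-- @independent_agent = 'false' makes them use the shared Distribution Agent
-- per subscriber, so changes apply in a single ordered stream
EXEC sp_addpublication
    @publication = N'pub1',
    @status = N'active',
    @independent_agent = N'false';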
HTH
Dick