March 30, 2010 at 12:57 pm
Kendal Van Dyke (3/30/2010)
David, can you elaborate on the following statement you made:
the number of records in the MSRepl_transactions table is likely to be very high and once it has gone beyond a certain size
and provide guidance on what that "certain size" is and how someone knows if they've exceeded it?
Also, do you have any suggestions for other steps that someone should take to optimize their existing distribution database before deciding that it's time to create a new one?
Kendal Van Dyke
We're having this problem. The first thing we're doing is upgrading the distributor to 32 GB of RAM soon; we're at 14 GB now and sometimes see memory pressure.
Otherwise, the best thing is to buy a lot of disk and split everything up onto its own RAID 5 volumes. With HP servers, the ProLiant G6 supports up to 16 drives in a server.
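On the "certain size" question, there's no published threshold, but you can at least measure where you stand and watch the trend. A minimal sketch, assuming the distribution database has its default name of 'distribution':

USE distribution;
-- Space and row counts for the two hot replication tables
EXEC sp_spaceused N'dbo.MSrepl_transactions';
EXEC sp_spaceused N'dbo.MSrepl_commands';
-- How wide the retained window currently is
SELECT MIN(entry_time) AS oldest_entry,
       MAX(entry_time) AS newest_entry,
       COUNT_BIG(*)    AS total_transactions
FROM dbo.MSrepl_transactions;

If those numbers keep climbing while the cleanup job is running normally, that's the trend to watch.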
March 30, 2010 at 2:15 pm
If you have everything hammering through one database you are talking about one transaction table having to deal with all transactions. Ditto the commands table.
Try reindexing a very large table: TEMPDB will be thrashed as well, if nothing else. I did find an article (which I can't find now that I need it) showing that indexes become less effective once they get more than 'n' levels deep.
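To put a number on 'n levels deep', you can check the depth of the indexes on the busiest replication tables. A hedged sketch, assuming the default distribution database name (SQL 2005+ syntax):

USE distribution;
-- IndexDepth = number of B-tree levels from root to leaf
SELECT i.name AS index_name,
       INDEXPROPERTY(i.object_id, i.name, 'IndexDepth') AS index_depth
FROM sys.indexes AS i
WHERE i.object_id = OBJECT_ID(N'dbo.MSrepl_commands')
  AND i.name IS NOT NULL;  -- skip the heap entry, if any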
Yes, you can put the different distribution databases on different drives, with obvious benefits, but separate databases at least get separate files. It's a myth that separate files get separate threads, but you are dealing with far smaller files, and each distribution database has fewer records to deal with.
We implemented a distribution-database-per-publisher policy and most of our replication woes just went away.
If you reindex a table then the table is briefly locked. Which would you rather do: reindex a huge table, locking it for a relatively long time, or a small table, locking it only briefly?
March 30, 2010 at 6:19 pm
David,
Thanks for this wonderful article. Please share your thoughts on a couple of questions:
1) Could we do this on SQL Server 2000?
2) Is there a way to track the point at which the distribution database is stuck and unable to send transactions to a subscriber?
M&M
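On question 2: assuming SQL 2005 or later (these tools don't exist in SQL 2000), tracer tokens and the pending-commands procedure in the distribution database are the usual way to find where delivery is stuck. A hedged sketch, with all publication and subscriber names hypothetical:

-- At the publisher, on the published database: drop a tracer token into the log
EXEC sp_posttracertoken @publication = N'MyPublication';

-- At the distributor, in the distribution database: how many commands are still
-- queued for a given subscriber (subscription_type: 0 = push, 1 = pull)
EXEC sp_replmonitorsubscriptionpendingcmds
    @publisher = N'MyPublisher',
    @publisher_db = N'MyPublishedDb',
    @publication = N'MyPublication',
    @subscriber = N'MySubscriber',
    @subscriber_db = N'MySubscriberDb',
    @subscription_type = 0;

sp_helptracertokenhistory will then break the token's latency into publisher-to-distributor and distributor-to-subscriber legs, which tells you which side is stuck.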
March 30, 2010 at 11:09 pm
David.Poole (3/30/2010)
If you have everything hammering through one database you are talking about one transaction table having to deal with all transactions. Ditto the commands table.
But if you have two, three, four, etc. distribution databases, that doesn't change the number of commands and transactions that have to be written. The IO is still the same.
Yes, you can put the different distribution databases on different drives, with obvious benefits, but separate databases at least get separate files. It's a myth that separate files get separate threads, but you are dealing with far smaller files, and each distribution database has fewer records to deal with.
Yes, that is a myth. The CSS team has a great blog post about it here.
I think the discussion has more merit if we put some metrics out for comparison. My busiest distributor is a PowerEdge 2950 running SQL 2008 Standard Edition (x86) on Windows 2003 (x86) with 6 local drives (2 in RAID 1 for the OS and transaction logs, and the other 4 in RAID 10 with a 64 KB partition offset, 64 KB block size, and 64 KB stripe size for data) and 4 GB of RAM. It's responsible for 82 publications and 240 subscriptions, and based on distribution agent history for the last 48 hours it's pushing through just shy of 80 million commands per day. All distribution agents run continuously and there are rarely any latency issues that need to be dealt with.
How do those numbers and hardware compare to your distributor? (Please don't take this the wrong way - it's not a "prove it" tone I'm taking here...I'm just trying to understand how our environments compare to each other so I can better understand your reasoning for using multiple distribution DBs)
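For anyone who wants to reproduce a commands-per-day figure like that, a hedged approximation is to count what's currently stored in the distribution database (so it only sees the retention window), assuming the default database name:

USE distribution;
-- Commands per day, using each command's parent transaction for the entry time
SELECT CONVERT(varchar(8), t.entry_time, 112) AS day_yyyymmdd,
       COUNT_BIG(*) AS commands
FROM dbo.MSrepl_commands AS c
JOIN dbo.MSrepl_transactions AS t
  ON t.publisher_database_id = c.publisher_database_id
 AND t.xact_seqno = c.xact_seqno
GROUP BY CONVERT(varchar(8), t.entry_time, 112)
ORDER BY day_yyyymmdd;

This counts commands written to the distributor rather than commands delivered, but for agents that run continuously the two track closely.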
March 31, 2010 at 1:33 am
No offence taken.
We have four production distributors, two of which are virtual servers.
We put roughly 24 billion transactions through them each day. We have spiked up to 370 billion transactions, but this was exceptional.
On the subject of whether this works with SQL 2000 distributors, the short answer is "I don't know". Some of our publishers are SQL 2000, but because a distributor has to be at the same version as, or higher than, the publishers it serves, none of our distributors are SQL 2000.
March 31, 2010 at 3:21 am
Kendal/David,
Could you also post your CPU configurations, please, and whether you are using push or pull subscriptions?
As I'm sure you know, Push subscriptions place more burden on the Distributor server resources.
March 31, 2010 at 5:53 am
It varies, from:
Physical 8-CPU 64-bit W2K8 SQL 2008 Standard Edition with 12 GB RAM, to
Virtual 4-CPU 32-bit W2K3 SQL 2005 Standard Edition with 8 GB RAM.
Mainly push subscriptions, but pull is used for the larger tables.
March 31, 2010 at 9:25 am
David,
Thanks for the article. I love finding out new things about replication.
Also, thanks to Kendal and the discussion. I am trying to read the links as I type.
God Bless,
Thomas
http://thesmilingdba.blogspot.com/
Thomas LeBlanc, MVP Data Platform Consultant
March 31, 2010 at 10:00 am
Good discussion. David, that is a LOT of records!
One note on the expiration period. I can see why with x billion rows you want to shorten it, but for less extreme environments I've set it to 96 hours, allowing me to survive a replication stoppage over a 3-day weekend without having all my pubs expire. I agree that monitoring should be in place as well, but this saved me some pain a couple of times.
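For reference, that expiration period is the publication's retention property, set in hours at the publisher on the published database. A minimal sketch with a hypothetical publication name:

-- Let subscriptions survive a 3-day weekend plus slack before expiring
EXEC sp_changepublication
    @publication = N'MyPublication',  -- hypothetical
    @property = N'retention',
    @value = N'96';                   -- hours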
Early in my SQL days I had 250 databases replicating into one distribution DB. I ran into a lot of contention and ended up solving it by not running all the log readers at once; it didn't occur to me to try the multiple distribution database model. The one advantage of my approach (this was on SQL 2000) was saving quite a bit of memory by only running a handful of log readers at once.
March 31, 2010 at 10:06 am
Hey David,
Assuming the box is licensed per-processor, would you expect any advantage to running the separate distribution databases in separate instances?
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
March 31, 2010 at 11:00 am
Paul White NZ (3/31/2010)
Assuming the box is licensed per-processor, would you expect any advantage to running the separate distribution databases in separate instances?
I'm not up to date with licensing requirements on multi-instance installations.
Having multiple instances running on one box strikes me as a good way to carve up the available memory, so anything you would gain from instance isolation is lost to the memory requirements of the separate instances.
This is outside of my area of expertise.
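For what it's worth, if someone did go multi-instance, the standard mitigation for that memory concern is to cap each instance explicitly so they don't fight over RAM. A sketch, with the value purely illustrative:

-- Run in each instance; pick caps that sum to less than physical RAM
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;  -- illustrative value
RECONFIGURE;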
March 31, 2010 at 11:29 am
John.Sansom (3/31/2010)
Kendal/David, could you also post your CPU configurations, please, and whether you are using push or pull subscriptions?
As I'm sure you know, Push subscriptions place more burden on the Distributor server resources.
The distributor that I gave details for previously has 2x Xeon E5430 CPUs (quad core, so 8 total cores) @ 2.66 GHz. I should also mention that the HDDs are 15K RPM SAS drives.
CPU isn't the bottleneck on any of my distributors - it's almost always disk IO.
All of my subscriptions are push.
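A hedged way to confirm that disk IO is the bottleneck is to look at the cumulative IO stalls on the distribution database files (SQL 2005+, default database name assumed):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,  vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID(N'distribution'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;

High stall-to-operation ratios on the data or log file point at the volume that needs attention.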
March 31, 2010 at 11:35 am
David.Poole (3/31/2010)
No offence taken. We have four production distributors, two of which are virtual servers.
We put roughly 24 billion transactions through them each day. We have spiked up to 370 billion transactions, but this was exceptional.
On the subject of whether this works with SQL 2000 distributors, the short answer is "I don't know". Some of our publishers are SQL 2000, but because a distributor has to be at the same version as, or higher than, the publishers it serves, none of our distributors are SQL 2000.
Not sure if that's 24 billion per distributor or total, but either way that's a LOT of transactions to push through! Fair to say that your case is the exception to the norm? I think that while multiple distribution DBs may solve this kind of problem, for the typical shop that has < 50 million transactions per day, one distribution DB per publication as a standard practice is a bit excessive.
My point isn't to knock your article or to say you're wrong - but to try to put in place a better idea of when it's appropriate to consider multiple DBs as a solution to latency/performance issues.
Just curious - have you looked into replicating stored procedure execution to cut down on the transaction volume?
March 31, 2010 at 2:29 pm
Just curious - have you looked into replicating stored procedure execution to cut down on the transaction volume?
Absolutely - I was intending to write up an article on it. It dramatically cuts down on the number of records flooding through the transaction table, particularly if you have heavy purge activity going on.
People forget that inserts, updates, and deletes still flow through to the distributor even if they don't make it to the commands table.
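For anyone who wants to try it, execution replication is just an article type on the publication. A minimal sketch at the publisher on the published database, with hypothetical names:

EXEC sp_addarticle
    @publication = N'MyPublication',    -- hypothetical
    @article = N'usp_PurgeOldRows',     -- hypothetical purge procedure
    @source_object = N'usp_PurgeOldRows',
    @type = N'proc exec';  -- ship the EXEC to subscribers, not the row changes

The 'serializable proc exec' variant only replicates executions that happen inside a serializable transaction, which is the safer choice for purge-style procedures.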
Having separate distribution databases, even on a weedy virtual box, doesn't do any harm. If you run perfmon you can see the transactions per second going into the distributor. If you monitor that over time you can get a fair idea of when you should add a separate distribution database and what should be hived off onto it.
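If you'd rather sample that from T-SQL than from perfmon: Transactions/sec is a per-second rate counter, so take two readings and divide by the interval. A sketch assuming the default distribution database name:

DECLARE @t1 bigint, @t2 bigint;
SELECT @t1 = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Transactions/sec'
  AND instance_name = 'distribution';
WAITFOR DELAY '00:00:10';  -- 10-second sample window
SELECT @t2 = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Transactions/sec'
  AND instance_name = 'distribution';
SELECT (@t2 - @t1) / 10.0 AS distribution_transactions_per_sec;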
That said, we are seeing a massive boost from running SQL 2008 64-bit on W2K8.
March 31, 2010 at 2:48 pm
That said, we are seeing a massive boost from running SQL 2008 64-bit on W2K8.
I'm guessing that's in part due to the rewritten network stack in Windows 2008. There's a great whitepaper that talks about that:
Geo-Replication Performance Gains with Microsoft SQL Server 2008 Running on Windows Server 2008