November 6, 2007 at 3:36 am
Hi,
My company is interested in setting up a new non-production environment that is a complete mirror of the live production environment.
They want the latency between the production and non-production environments to be no greater than 10 minutes.
We are using Microsoft Windows 2003 Server Enterprise Edition and SQL Server 2005 Enterprise Edition.
The non-production and production environments are located in separate data centres, and due to the amount of data to transfer, the physical distance (and budget!), we are considering database mirroring in high-performance (asynchronous) mode.
My question is ....
The amount of data that will be transferred from the production environment to the non-production environment is in the region of 20GB over a period of 24hrs. What sort of network bandwidth should I be looking at, and what is the best way to calculate it?
I have a basic throughput calculator that tells me that to move 20GB in 9.93 minutes I need a bandwidth of 300 Mb/s. The problem is that I don't want to move 20GB in one go; the 20GB will be moved over a period of 24hrs using SQL Server mirroring.
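For what it's worth, the back-of-envelope arithmetic I've been running looks like this (a rough sketch in T-SQL that ignores compression and protocol overhead):

-- Rough sketch: 20GB of log over 24hrs, ignoring compression and protocol overhead.
SELECT
    CAST(20.0 * 1024 * 8 / (24 * 60 * 60) AS decimal(10,2)) AS avg_mbit_per_sec,   -- ~1.9 Mb/s if spread evenly
    CAST(20.0 * 1024 * 8 / (10 * 60)      AS decimal(10,2)) AS burst_mbit_per_sec; -- ~273 Mb/s to shift it all in 10 minutes

The average works out at around 2 Mb/s, but the "move it all in 10 minutes" figure is nearer 300 Mb/s, which is why I'm not sure what to size the link for.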
I could base the bandwidth on the size of the largest log file, but I don't know what that size will be.
Please help! 🙂
www.sqlAssociates.co.uk
November 6, 2007 at 5:27 am
My first suggestion would be to run some tests. Set up your configuration to a server that is on-site and monitor the bandwidth used for a week.
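If the test principal is on SQL 2005 you can get a rough feel for the send rate straight from the DMVs rather than setting up Perfmon. This is only a sketch (I'm quoting the counter names from memory, and 'YourMirroredDB' is a placeholder): it reads the cumulative 'Log Bytes Sent/sec' counter twice, a minute apart, and works out the average.

-- Sketch: sample the cumulative mirroring counter twice and take the difference.
-- Run on the principal; replace 'YourMirroredDB' with the mirrored database name.
DECLARE @bytes_start bigint, @bytes_end bigint;

SELECT @bytes_start = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Log Bytes Sent/sec'
  AND instance_name = 'YourMirroredDB';

WAITFOR DELAY '00:01:00';  -- one-minute sample window

SELECT @bytes_end = cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Log Bytes Sent/sec'
  AND instance_name = 'YourMirroredDB';

SELECT (@bytes_end - @bytes_start) / 60.0                   AS avg_bytes_per_sec,
       (@bytes_end - @bytes_start) * 8.0 / 60.0 / 1000000.0 AS avg_mbit_per_sec;

Leave something like that (or the Perfmon equivalent) logging over the week and you will have real numbers instead of guesses.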
You will have to make some assumptions if you cannot actually run any tests. If your company is open 8 hours a day and you only have new or updated data during working hours, do the division.
Since you know you cannot have latency of more than 10 minutes, a conservative rule of thumb is that the link should be able to push roughly 9 minutes' worth of data through within a minute, so it can catch up after a burst.
This will give you a reasonable number if your data needs are pretty flat (not necessarily the case). If you have log files throughout the day that you can look at, the largest one and the interval it represents will be a better guideline.
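To put rough numbers on the catch-up rule, taking the 20GB/24hrs figure from your post and assuming it arrives at a flat rate (which it almost certainly doesn't):

-- Flat-rate assumption: 20GB of log spread evenly across 24 hours.
-- Catching up on ~9 minutes of backlog within a minute needs roughly 9x the average rate.
SELECT
    CAST(20.0 * 1024 / (24 * 60) AS decimal(10,2))              AS avg_mb_per_min,      -- ~14.2 MB/min
    CAST(20.0 * 1024 * 9 / (24 * 60) AS decimal(10,2))          AS catch_up_mb_per_min, -- ~128 MB/min
    CAST(20.0 * 1024 * 8 * 9 / (24 * 60 * 60) AS decimal(10,2)) AS approx_mbit_per_sec; -- ~17 Mb/s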
Finally, whatever number you come up with, I would double it with a redundant connection. Someone may not like to hear about the expense, but having a dedicated T1 and a frame relay for something like this is a pretty reasonable thing to do, since a few hours of downtime could mean a few hours of trying to catch up. Whatever connection type you use and whatever vendor provides it, it will go down a couple of times a year.
November 6, 2007 at 8:50 am
I like Michael's idea. The log sizes (backups) will tell you what needs to be moved to the other server. You can take the largest one, figure out the period since the last one, and that's the bandwidth you need.
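If you are already taking log backups, the history in msdb will give you those sizes and intervals. A rough sketch (swap 'YourDB' for the real database name):

-- Log backup sizes and the gap each one covers, from the backup history in msdb.
SELECT TOP 50
    b.backup_start_date,
    b.backup_size / 1048576.0 AS backup_size_mb,
    DATEDIFF(minute,
             (SELECT MAX(p.backup_start_date)
              FROM msdb.dbo.backupset p
              WHERE p.database_name = b.database_name
                AND p.type = 'L'
                AND p.backup_start_date < b.backup_start_date),
             b.backup_start_date) AS minutes_since_previous_log_backup
FROM msdb.dbo.backupset b
WHERE b.database_name = 'YourDB'
  AND b.type = 'L'
ORDER BY b.backup_start_date DESC;

The largest backup divided by the gap before it is the worst-case rate you would need to sustain.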
You could also set up another server in the same data center, then enable a network monitor to watch traffic and see what the inbound transfers are across the period. Get the average and peak and you'll have an idea of what gets moved.
Lastly, be sure you do this on multiple days so you're reasonably sure you have an average figure.
Are you saying that you have 20GB of new/changed data a day?
November 6, 2007 at 9:12 am
Hi,
Thanks for your comments, much appreciated.
The problem I have is that our company has a two-week peak in July; for the rest of the year the volume of transactions through the system is quite low. No one bothered to collect log file sizes etc. in July of this year, so I'm really struggling to work this one out.
At peak I estimate roughly 15GB of transaction logs in 24hrs, which I've rounded up to 20GB for safety. To ensure data loss does not exceed 10 minutes, I think a bandwidth of 150Mb/s is required.
Any thoughts would be appreciated.
www.sqlAssociates.co.uk
November 6, 2007 at 11:19 am
That sounds pretty reasonable to me, but testing is important and you should do what you can. I assume you can figure out the number of processed records (bills, or calls, or whatever it is your company does) during the busy season and now. If you bill 50% more people at that time, watch your transactions for a week, find the largest log file (say it's 10MB), add the 50% to it, and you should be reasonably close.
The only additional concern would be that you have some really large individual transactions. If you have 20GB of tiny transactions, mirroring will happily send them over to your second server. If you happen to have a few 1GB transactions, these get committed in a single piece and will be sent all at once to the other server. This is not typical, but you should make sure that is not happening.
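If you want to check, a sketch along these lines (run during a busy spell, with 'YourDB' as a placeholder) will show which open transactions are holding the most log:

-- Sketch: open transactions ranked by how much log they have generated so far.
SELECT s.session_id,
       t.database_transaction_begin_time,
       t.database_transaction_log_bytes_used / 1048576.0 AS log_mb_used
FROM sys.dm_tran_database_transactions t
JOIN sys.dm_tran_session_transactions s
     ON s.transaction_id = t.transaction_id
WHERE t.database_id = DB_ID('YourDB')
ORDER BY t.database_transaction_log_bytes_used DESC;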
Remember, this is an estimate. Even with a year of data and lots of testing, something could change in a few months. If you get a fractional T1, or a frame relay, or a phone line and a modem, make sure it can be expanded later. At least you will know that if something is too slow you can call your vendor and tell them to step it up a bit.