September 19, 2005 at 12:30 pm
Is using Double-Take or similar software advisable for replicating SQL Server databases?
I am most concerned about the software interfering with the I/O path, and about the possibility of data corruption and performance degradation on the source server.
September 20, 2005 at 12:28 am
We are in a similar position. MS-SQL replication consistently fails due to the design of our database, which unfortunately I have inherited. Currently we cannot change the database structure in any way because of the applications that are in use. I have both Veritas and Double-Take software and would like to know from the gurus out there whether there is any better mirroring software available.
Thanks in advance
September 20, 2005 at 9:54 am
James, what type of failures and design problems are you having?
September 21, 2005 at 12:32 am
Hi there, we have had numerous problems. Firstly, to tell you a bit about our environment: we have a 100 GB MS-SQL2K SP4 database which we are trying to replicate to another machine. We have conformed to all MS recommendations for the database (as best we can). Some of the errors we have experienced are as follows:
Bear in mind that each time we defrag the indexes or change any object, the entire snapshot gets regenerated, which takes anything from 8 to 16 hours! Right now I am ready to throw SQL out the Window(s)!!!!
Maybe you can suggest something?
Kind regards
James
September 21, 2005 at 9:58 am
How many tables are you replicating? What type of replication are you doing? Transactional, I assume. What are your business requirements for your subscriber DB in terms of up-time and latency?
We replicate a subset of our DB, about 60 GB out of 140 GB, but only about 30 tables out of 1,700. I break them out into different publications based on application, because I encountered problems when one publication tried to push too much. I still had to increase the agent timeout. I wouldn't think defragging indexes would cause the snapshot to regenerate. I run DBCC DBREINDEX during inactive periods and change the recovery model to simple to avoid huge transaction logs.
Maybe one of the experts around here will have some ideas of a better way.
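In case it helps, here is roughly what that reindex routine looks like on our side. The database and table names below are just placeholders, and SQL 2000 syntax is assumed:

-- Run during an inactive window; names are hypothetical.
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
-- Rebuild all indexes on the big table (SQL 2000 DBCC DBREINDEX).
DBCC DBREINDEX ('dbo.MyLargeTable')
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO
-- Switching recovery models breaks the log backup chain,
-- so take a full backup before resuming log backups.
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase.bak'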
September 22, 2005 at 12:07 am
Hi Homebrew01,
We have about 600 tables. Twelve of these tables are over 10 million records in size, and yes, it is transactional. The business requirements are twofold: firstly, to create a reporting server for the data (move reports off the live server); secondly, to create multiple replicated machines for various heavy, long-running processes that typically require huge amounts of reads across the database (namely the large tables I mentioned). I have tried at least five times to filter the tables down to the last two years' records, which is a business requirement, but this caused errors each time, from timeouts and missing-parameter errors to "cannot run SP" errors.
I run DBCC DBREINDEX once a month, and it was immediately after this that the Log Reader caused errors.
I will try creating multiple publications as you suggest!
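Probably something along these lines to start with, one publication per application (the publication name and options below are just a first guess, not our actual setup):

-- Run in the published database; names are hypothetical.
EXEC sp_addpublication
    @publication = N'Reporting_AppA',
    @repl_freq = N'continuous',      -- transactional replication
    @status = N'active',
    @independent_agent = N'true'     -- its own Distribution Agent

Then add only that application's tables as articles to the publication, and repeat for the other applications.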
Thanks!
September 22, 2005 at 8:33 am
" ....... I have tried at least 5 times to split the tables into the last two years records ...... "
Does this mean you only need the last two years of data at the subscriber? If you only need a subset of data, you can use row filtering to limit what you replicate. A row filter is a WHERE clause that you write against the article. That's what we do for some of our tables, only bringing over data from the last few years for the users to query.
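Something along these lines when adding the article. The Orders table and OrderDate column are just examples, and on SQL 2000 you may also need sp_articlefilter and sp_articleview to build the filter view:

-- Hypothetical table; the @filter_clause is the row filter.
EXEC sp_addarticle
    @publication = N'Reporting_AppA',
    @article = N'Orders',
    @source_owner = N'dbo',
    @source_object = N'Orders',
    -- only replicate roughly the last two years of rows
    @filter_clause = N'OrderDate >= DATEADD(year, -2, GETDATE())'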
September 22, 2005 at 9:05 am
Yes, that's exactly what I mean. I have checked the tables that we are trying to replicate, and I believe these tables were designed without replication in mind. We have numerous TEXT/NTEXT fields and wide tables (a large number of columns).
The original developers seemed to ignore normalisation in many ways, which I have a feeling contributes to the problem. With your experience, did you also go through these kinds of teething problems?
September 22, 2005 at 9:29 am
I think our situation is somewhat simpler than yours, so I went through a learning curve but haven't had much of a problem since. I'm no expert; I've just managed to get what I need working through trial and error.
You can also filter on columns if you don't need them all at the subscriber. That would cut down on the amount of data to transfer.
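For example, roughly like this to drop a wide column from an article (table and column names are just placeholders):

-- Vertical (column) filter: stop replicating a column we don't need.
EXEC sp_articlecolumn
    @publication = N'Reporting_AppA',
    @article = N'Orders',
    @column = N'Notes',
    @operation = N'drop'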