June 24, 2010 at 9:28 am
Hi guys,
A bit of background:
I am attempting to configure transactional replication for a database. The intention is for the live database to replicate to a database on another server, which is to be used for Business Objects reporting, thus reducing the load on the live system. In line with best practices (I think) I am using the reporting database server to host the distribution database, and am trying to set up a pull subscription. I would like to initialise the reporting database for transactional replication using a backup rather than a snapshot, as it is my understanding that this method will be much quicker.
I started to create the publication (adding articles) yesterday at 15:00. The articles were still being created this morning at 09:00, and only approximately 91,000 of 140,000 had been created. Since the remaining articles were being created extremely slowly, I decided to cancel the publication creation and modify a memory setting on the server (it's x64 and max memory hadn't been configured... don't ask). I cancelled the publication creation and set max server memory to 14GB of an available 16GB, leaving 2GB for the OS. The server then felt a lot less sluggish.
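For completeness, the memory change was essentially this (a sketch; 14336 MB corresponds to the 14GB mentioned above):

```sql
-- Sketch: cap max server memory at 14 GB (14336 MB),
-- leaving roughly 2 GB for the OS.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 14336;
RECONFIGURE;
```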
I decided I would now delete the publication and recreate it. First I attempted to do this via the GUI. The result: SQL Server locked a stack of objects and users started to experience connectivity issues on the live system... great!! I killed the process, apologised to the Service Desk for the increase in their call volume, and waited until a quieter time of day to remove the publication, this time using sp_droppublication. This has currently been running for an hour and a half.
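For reference, the drop I kicked off was essentially this (a sketch; the database and publication names are placeholders):

```sql
-- Sketch: drop the publication from the publication database.
-- 'LiveDB' and 'LivePub' are placeholder names.
USE LiveDB;
EXEC sp_droppublication @publication = N'LivePub';
```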
My questions are two-fold.
1. Have I gone about removing the publication the wrong way, or does it usually take this long to drop a publication with this many articles? (It is my understanding that it has to drop lots of triggers and views in the live system.)
2. Is transactional replication the best solution to maintain a copy of the live database for reporting purposes and does anyone out there have practical experience of using it in this way?
I would really appreciate any advice, condolence or support you can offer me.
Thanks in advance
June 24, 2010 at 10:25 am
Are you replicating 140,000 tables, or is that split among SPs, views etc?
You can stage it in several publications so that it is less of a strain to initialize. Also, you don't have to initialize through a snapshot.
Also, I would not replicate non table objects unless it was truly required.
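For example, something along these lines for each staged publication (a sketch; all names are placeholders, and you would repeat sp_addarticle for each table in that batch):

```sql
-- Sketch: one of several smaller, tables-only publications.
-- 'LiveDB', 'LivePub_Part1', and 'SomeTable' are placeholder names.
USE LiveDB;
EXEC sp_addpublication
    @publication = N'LivePub_Part1',
    @sync_method = N'concurrent',
    @repl_freq   = N'continuous',
    @status      = N'active';

-- Add one table article to this publication; repeat per table.
EXEC sp_addarticle
    @publication   = N'LivePub_Part1',
    @article       = N'SomeTable',
    @source_object = N'SomeTable',
    @type          = N'logbased';
```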
June 24, 2010 at 10:56 am
Thank you for your reply.
That is split between tables and other objects. When I recreate the publication I will omit the non-table objects. Do you have any idea why it could be taking so long to drop the publication? I'm not familiar with the changes (if any) made to replicated objects, or conversely with what is removed when the publication is dropped. Does in excess of two hours seem reasonable to you to drop a publication? The process I used to run sp_droppublication seems to be acquiring locks on large numbers of objects. From watching the SQL running against the process in Activity Monitor, it also seems to be dropping views in the replicated database. Does this sound like normal behaviour?
Once again, thanks for any help you can give me.
June 24, 2010 at 8:22 pm
All that sounds normal to me.
June 24, 2010 at 11:03 pm
Thanks Steve. Hopefully it'll be ok this morning. I appreciate you taking the time to advise me.
Kind regards
You were correct Steve. The publication was finally dropped. 😀 There are 171,880 tables in the database I want to clone. Perhaps replication isn't the best way to go about this. I may tell the users an overnight restore from live will have to suffice and they'll have to make do with day-old data.
July 5, 2010 at 12:38 pm
Why not initialize using a backup? That should solve your problem.
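Roughly like this (a sketch; publication, server, database, and path names are all placeholders):

```sql
-- Sketch: the initialize-from-backup path.
-- 1) The publication must allow initialization from a backup:
EXEC sp_addpublication
    @publication = N'LivePub',
    @allow_initialize_from_backup = N'true';

-- 2) Restore the backup on the subscriber, then create the
--    subscription pointing at that backup:
EXEC sp_addsubscription
    @publication      = N'LivePub',
    @subscriber       = N'ReportSrv',
    @destination_db   = N'ReportDB',
    @subscription_type = N'pull',
    @sync_type        = N'initialize with backup',
    @backupdevicetype = N'disk',
    @backupdevicename = N'\\LiveSrv\Backups\LiveDB.bak';
```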
-Roy
July 5, 2010 at 2:28 pm
Thanks for the suggestion Roy. I did try this to start with, but the publication started generating lots of articles, which is the part of the process that took a long time. I'm not very familiar with replication, so I probably did something wrong. Also, the tutorials I could find all related to a default configuration (push subscription, local distributor), whereas I wanted to use a remote distribution database (on the reporting server) and set up a pull subscription. I'm sure that given time and a suitable test environment I could have got it working; unfortunately I had neither of those luxuries. I eventually settled for sharing the backups volume of the live system with the reporting server and configuring an overnight job to restore the reporting database from the live server's backup. Not an ideal solution, but an acceptable one.
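For reference, the nightly restore step is essentially along these lines (a sketch; database names, logical file names, and paths are all placeholders):

```sql
-- Sketch: nightly refresh of the reporting copy from the live backup.
-- All names and paths below are placeholders.
USE master;

-- Kick out any connected report users so the restore can proceed.
ALTER DATABASE ReportDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE ReportDB
FROM DISK = N'\\LiveSrv\Backups\LiveDB.bak'
WITH MOVE N'LiveDB_Data' TO N'D:\Data\ReportDB.mdf',
     MOVE N'LiveDB_Log'  TO N'L:\Log\ReportDB.ldf',
     REPLACE;

ALTER DATABASE ReportDB SET MULTI_USER;
```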