August 17, 2004 at 10:38 am
August 17, 2004 at 10:54 am
I'm not the replication expert, but I had a similar situation years ago in v6.5. I used a trigger to move the inserts and updates to a separate table. Then I replicated that table to the other server.
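Roughly like this, with made-up names (dbo.Orders as the main table with key OrderID, dbo.Orders_Staging as the copy that actually gets replicated); treat it as a sketch, not tested code:
-- Copy every insert/update into the staging table that is published
-- to the other server. All object names here are hypothetical.
CREATE TRIGGER trg_Orders_Capture
ON dbo.Orders
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- Drop any stale staging copy of the affected rows (covers updates).
    DELETE s
    FROM dbo.Orders_Staging s
    JOIN inserted i ON i.OrderID = s.OrderID
    -- Re-add the current version of each affected row.
    INSERT INTO dbo.Orders_Staging
    SELECT * FROM inserted
END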
August 18, 2004 at 4:53 am
My approach would be different in that I would continue to use the replication stored procedures on the subscriber server, but I would edit the delete stored procs to take no action on the subscriber. I've never done this before, but I believe that it is entirely possible. This would make the most sense for me, as replication would continue to work, except that the deletes would not be processed on the subscriber.
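Something along these lines, assuming (hypothetically) a table called table1 with a single int primary key, so the generated proc would be named sp_MSdel_table1 -- check the actual proc on your subscriber:
-- Sketch only: proc name and parameter follow the usual
-- sp_MSdel_<table> pattern but are assumptions here.
ALTER PROCEDURE sp_MSdel_table1
    @pkc1 int
AS
    -- Deliberately do nothing, so deletes arriving from the
    -- publisher are discarded instead of applied.
    RETURN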
The only thing you'd have to remember is to re-edit the delete stored procs in the event that you have to regenerate the replication stored procs on the subscriber.
Hope this helps.
"See how deep the rabbit hole goes..."
August 18, 2004 at 8:22 am
August 18, 2004 at 3:48 pm
Hi Chad, Guess who I am
Before I go over the pros and cons of both methods, I have to say I don't have a good solution either. I'm just trying to tell you what you might be up against whichever way you choose.
Steve's way certainly works; the only potential problem might be locking, but if the secondary table is on the same server, that most likely won't happen. The other drawback I can think of is that you now have three sets of the same data: two on the publisher and one on the subscriber, so you're wasting space.
Morpheuz's way is actually a really slick approach, but as he mentioned, you have to remember to re-edit the delete stored procs whenever they're regenerated, and it has another potential problem as well. If something happens to the subscriber database and you have to resnapshot, you will lose all of the rows that were deleted at the publisher but kept at the subscriber, and end up with just a fresh copy of the publisher's data. So even though Morpheuz's way is better performance-wise, to keep the data you want you'd still need Steve's suggestion.
Since a trigger holds locks while it inserts into the other table, you might instead change your app to handle the insert, update, and delete itself. If you're using stored procs, you can simply insert into two different tables for that purpose, then manually move the data over to the subscriber every few minutes if you can afford some time gap; if not, set up continuous replication on the second table, etc.
The good thing about moving data to the subscription database manually is that you can remove the already-moved rows from the publisher's second table, so you don't have to keep three sets of data.
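For example, a rough sketch of the dual-insert idea (all names made up):
-- Hypothetical insert proc that writes to the main table and to a
-- queue table that later gets moved to the subscriber.
CREATE PROCEDURE dbo.usp_InsertOrder
    @OrderID int,
    @Amount money
AS
BEGIN
    SET NOCOUNT ON
    -- The real table on the publisher.
    INSERT INTO dbo.Orders (OrderID, Amount)
    VALUES (@OrderID, @Amount)
    -- The second copy, avoiding the locking a trigger would add.
    INSERT INTO dbo.Orders_Queue (OrderID, Amount)
    VALUES (@OrderID, @Amount)
END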
Sorry for not having a better answer, but if you don't have to worry about space, I'd recommend Steve's way.
August 18, 2004 at 6:17 pm
Agree that either way will work, though I like modifying the proc more than using the trigger - cheaper to not do the work. You could also use an instead of trigger on the subscriber table and just throw away the commands there.
If you do the subscriber trigger or the proc change, it will break the next time you snapshot unless you add a post snapshot script to put the changes back.
Also, if you snapshot you'll "lose" the records on the subscriber that were deleted from the publisher before. Might need some type of pre snapshot script to move those out and put back. Convoluted, but you can make it work.
Thinking about it more, the instead of trigger might wind up being cleaner: you could move the deleted rows to a separate table on the subscriber and just use a view to combine them with the real table. Then you have no snapshot headaches (other than needing to reapply the trigger each time).
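A rough sketch of that subscriber-side setup, with all names hypothetical (dbo.Orders as the replicated table, OrderID as its key):
-- Keep rows the publisher deletes by parking them in an archive table.
CREATE TRIGGER trg_Orders_KeepDeletes
ON dbo.Orders
INSTEAD OF DELETE
AS
BEGIN
    SET NOCOUNT ON
    -- Save the rows the distribution agent is trying to delete...
    INSERT INTO dbo.Orders_Deleted
    SELECT * FROM deleted
    -- ...then let the delete go through on the replicated table
    -- (an INSTEAD OF trigger does not re-fire itself here).
    DELETE o
    FROM dbo.Orders o
    JOIN deleted d ON d.OrderID = o.OrderID
END
GO
-- One view to query current and publisher-deleted rows together.
CREATE VIEW dbo.Orders_All
AS
SELECT * FROM dbo.Orders
UNION ALL
SELECT * FROM dbo.Orders_Deleted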
June 10, 2005 at 1:34 pm
I am trying to implement the concept of only pushing inserts and updates but not deletes.
I have written a "pre snapshot" script.
Basically, it executes an sp_rename for each table in the snapshot:
exec sp_rename 'table1', 'table1_backup'
Then execute a new snapshot.
The question is: what is the fastest way to push the data back into the replicated table?
In my situation, the volume of data in the publisher will be a small fraction of the volume of the data in the subscriber (think daily data vs. lifetime data).
So I imagine that I would be better off moving the data from the newly created snapshot into the renamed table rather than the other way around.
First - Turn off the Log Reader
Second - DELETE FROM table1_backup WHERE id IN (SELECT id FROM table1)
Third - INSERT INTO table1_backup SELECT * FROM table1
Fourth - sp_rename 'table1_backup', 'table1'
Fifth - Turn on the Log Reader
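Put together, the sequence looks roughly like this (the Log Reader agent job name is made up):
-- Sketch of the steps above; the job name is hypothetical.
EXEC msdb.dbo.sp_stop_job @job_name = N'LogReader-MyPublication'
-- Remove rows the new snapshot re-delivered, then merge the snapshot
-- rows into the accumulated copy.
DELETE FROM table1_backup WHERE id IN (SELECT id FROM table1)
INSERT INTO table1_backup SELECT * FROM table1
-- Swap the accumulated copy back in under the replicated name.
EXEC sp_rename 'table1_backup', 'table1'
EXEC msdb.dbo.sp_start_job @job_name = N'LogReader-MyPublication'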
This isn't working due to the following error:
Server: Msg 15051, Level 11, State 1, Procedure sp_rename, Line 234
Cannot rename the table because it is published for replication.
Anyone know any way around this?
Thanks,
-jmr
June 13, 2005 at 10:08 am
There is no need to implement a custom solution. Have a look at the article properties, Commands tab, and type NONE for the 'Replace delete commands with...' section.
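In T-SQL terms this should correspond to the @del_cmd parameter of sp_addarticle; a sketch with placeholder publication and article names:
-- Placeholder names; the key bit is @del_cmd = 'NONE', which stops
-- deletes from being propagated to the subscriber.
EXEC sp_addarticle
    @publication = N'MyPublication',
    @article = N'table1',
    @source_object = N'table1',
    @del_cmd = N'NONE'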
Rgds,
Paul Ibison, SQL Server MVP
Paul Ibison
Paul.Ibison@replicationanswers.com
June 13, 2005 at 12:28 pm
I am already using the option on that tab to ignore deletes. I'm more concerned with generating a new snapshot if, for any reason, the entire process needs to be restarted.
In the article properties, Snapshot tab, there are a number of options:
Keep existing table unchanged
Drop and recreate existing table
Delete data in the existing table that matches the row filter statement
Delete all data in the existing table
I'm guessing that these correspond to the sp_addarticle @pre_creation_cmd options ('none', 'delete', 'drop', 'truncate').
I will run some tests to see if the 'NONE' option does what I'm looking for.
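If that mapping is right, something like this (publication name made up) should set it on an existing article without going through the GUI:
-- Sketch: set the snapshot pre-creation behaviour on an existing
-- article; 'none' should match 'Keep existing table unchanged'.
EXEC sp_changearticle
    @publication = N'MyPublication',
    @article = N'table1',
    @property = N'pre_creation_cmd',
    @value = N'none'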
Thanks,
-jmr
June 13, 2005 at 1:38 pm
OK - since you are not replicating the deletes, you don't want to remove the accumulated data at the subscriber? In that case, 'Keep existing table unchanged' would be the desired option (much like a multiple publishers/one subscriber topology).
Rgds,
Paul Ibison, SQL Server MVP
Paul Ibison
Paul.Ibison@replicationanswers.com
June 13, 2005 at 2:40 pm
I altered my scripts to reflect that option.
Now, I can run the snapshot, but when the distributor starts, I get the following error: "The process could not bulk copy into table '<TableName>' "
I found the following kb article: http://www.kbalertz.com/kb_885460.aspx#toc
This says that if I upgrade to MDAC 2.8 it will fix the problem.
MDAC 2.8 sp1 comes with MSSQL SP4 and I have SP4 installed, so I am a little skeptical.
Regardless, I downloaded and installed MDAC 2.8 SP1; I'll see what happens next...
June 14, 2005 at 3:33 pm
Hi, still no luck.
I looked into the distribution db to determine the actual line that is causing the error:
sync -t"test_requests_floors"
-d"c:\Program Files\Microsoft SQL Server\MSSQL\ReplData\unc\SQLDEV2_tbis_dev_requests\20050614203617\test_requests_floors_12.bcp"
-hORDER( [request_id],[floor_id] ASC)
I even tried to run the original script that I had with
@pre_creation_cmd options = 'drop'
Then I updated that option through Enterprise Manager. That resulted in the same error.
What's going on here?
-jmr