May 21, 2012 at 9:20 am
Hi all,
I need to remove an older table (article) from a publication and add a new table back in using 'exec sp_addarticle'.
The new table is very large and has already been duplicated on the subscriber side. I just want to add it to replication to keep the two copies in sync.
How do I avoid a snapshot being generated? I'm planning to use the option @pre_creation_cmd = N'none' in sp_addarticle.
Any gotchas I need to cater for?
I'm using transactional replication pulling to the subscriber. Distribution DB is on the subscriber.
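Roughly what I'm planning to run on the publisher, just as a sketch (the table and publication names below are placeholders, not the real ones):
exec sp_addarticle
    @publication = N'MyPublication',   -- placeholder
    @article = N'BigTable',            -- placeholder
    @source_owner = N'dbo',
    @source_object = N'BigTable',
    @type = N'logbased',
    @pre_creation_cmd = N'none'        -- don't drop/truncate the existing subscriber table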
Thanks
P
May 21, 2012 at 9:28 am
pjl0808 (5/21/2012)
Hi all, I need to remove an older table (article) from a publication and add a new table back in using 'exec sp_addarticle'.
The new table is very large and has already been duplicated on the subscriber side. I just want to add it to replication to keep the two copies in sync.
How do I avoid a snapshot being generated? I'm planning to use the option @pre_creation_cmd = N'none' in sp_addarticle.
Any gotchas I need to cater for?
I'm using transactional replication pulling to the subscriber. Distribution DB is on the subscriber.
Thanks
P
You might also want to look at @sync_type = 'none' when you run your sp_addsubscription, assuming you are specifying each article in your subscription rather than the default value of @article = 'all'.
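Something along these lines, as a rough sketch (publication, subscriber and article names are placeholders):
exec sp_addsubscription
    @publication = N'MyPublication',      -- placeholder
    @subscriber = N'MySubscriber',        -- placeholder
    @destination_db = N'MySubscriberDB',  -- placeholder
    @subscription_type = N'pull',
    @article = N'BigTable',               -- single article; depending on the publication settings you may need @article = 'all' instead
    @sync_type = N'none'                  -- no snapshot / no initialisation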
MVDBA
May 21, 2012 at 9:47 am
Yes Mike, I usually do that, but in this case do I need to drop and re-add the subscription? I guess so, right?
Thanks
May 21, 2012 at 10:09 am
Yep, you need to drop and re-add the subscription for that table only. Although the article has the same name, I think it gets a different internal identifier.
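i.e. something like this, names being placeholders:
exec sp_dropsubscription
    @publication = N'MyPublication',   -- placeholder
    @article = N'BigTable',            -- just the one article, not the whole subscription
    @subscriber = N'MySubscriber'      -- placeholder
then re-add it with sp_addsubscription using @sync_type = 'none' as above.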
MVDBA
May 23, 2012 at 2:49 am
michael vessey (5/21/2012)
Yep, you need to drop and re-add the subscription for that table only. Although the article has the same name, I think it gets a different internal identifier.
You can try this, but I believe by default it isn't allowed due to a snapshot setting. Using @article = 'all' doesn't force all articles to be resynced, only newly added ones, so this is safe to use.
@sync_type = 'none' is correct, although you may need to create the destination schema and the replication stored procedures. You can backfill data later if you wish, but you will need to handle any delete and update errors you experience in the meantime.
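If you do need the replication stored procedures at the subscriber, sp_scriptpublicationcustomprocs will script them out for the whole publication, e.g. (publication name is a placeholder):
-- run in the published database at the publisher; it returns the create procedure scripts
-- for the insert/update/delete procs, which you then run at the subscriber
exec sp_scriptpublicationcustomprocs @publication = N'MyPublication'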
May 23, 2012 at 4:45 am
The table is part of an existing publication with other smaller tables. We have created/duplicated a new table_2012 (partitioned) with a better partition layout etc. and have it set up on both servers. It's a very large table, something like 30 billion rows with 4 TB of data. (I don't have access at the moment as the connections are down and we have not gone live with it yet.)
I want to remove the old tables from replication, then drop them, rename the new tables back, and re-add them to replication to keep them in sync. The three replication procs are on the subscriber, so that shouldn't be an issue. I have just never dropped and re-added a table from a publication before.
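In outline I'm thinking of something like this on the publisher, with placeholder names and most parameters trimmed (to be tested on dev first, of course):
-- 1. take the old article out of replication
exec sp_dropsubscription @publication = N'MyPublication', @article = N'BigTable', @subscriber = N'MySubscriber'
exec sp_droparticle @publication = N'MyPublication', @article = N'BigTable'
-- 2. drop the old table and rename the new partitioned one into its place
drop table dbo.BigTable
exec sp_rename 'dbo.BigTable_2012', 'BigTable'
-- 3. add the renamed table back without touching the subscriber copy
exec sp_addarticle @publication = N'MyPublication', @article = N'BigTable', @source_owner = N'dbo', @source_object = N'BigTable', @pre_creation_cmd = N'none'
-- 4. re-add the subscription without initialising
exec sp_addsubscription @publication = N'MyPublication', @subscriber = N'MySubscriber', @destination_db = N'MySubscriberDB', @article = N'all', @sync_type = N'none'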
Thanks
May 23, 2012 at 6:16 am
May 23, 2012 at 7:59 am
Thank you. I appreciate that info.
BTW: I'm not trying to gloat about the size of the table or anything, just trying to emphasise we can't let this do a snapshot or anything of the sort. That would be a nightmare :).
P
May 23, 2012 at 10:22 am
I'm testing the code on a dev server.
The "exec sp_addsubscription @sync_type = 'none' @article='all'...." is failing because I haven't dropped the subscription, I just dropped the article from the subscription as per your first command.
When I run the above "exec sp_addsubscription @sync_type = 'none' @article='BillOfMaterials'" it also fails saying :
'Specify all articles when subscribing to a publication using concurrent snapshot processing.'
Would it not be better to drop the entire subscription instead of running your first command, "exec sp_dropsubscription @article = 'bigtable' ....."?
Thanks
May 26, 2012 at 3:03 am
pjl0808 (5/23/2012)
I'm testing the code on a dev server. The "exec sp_addsubscription @sync_type = 'none', @article = 'all' ...." is failing because I haven't dropped the subscription; I just dropped the article from the subscription as per your first command.
You need to ensure all subscriptions are dropped. I don't quite follow, though. Why are you quoting sp_addsubscription? The statement order is explicit in my earlier post, as are the parameters I mentioned.
Try using this dynamic SQL generator to generate the drop statements:
select 'exec sp_dropsubscription @publication = N''' + sp.name + '''' + ', @article = N''' + sa.name + '''' + ', @subscriber = N''' + ss.srvname + '''' + char(10) +'GO'
from syssubscriptions ss
inner join sysarticles sa on sa.artid = ss.artid
inner join syspublications sp on sp.pubid = sa.pubid
where sa.name in ('')
order by ss.srvname, sp.name, sa.name
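Run that in the published database on the publisher with your article name(s) in the IN clause; it generates statements along these lines (the names here are made up):
exec sp_dropsubscription @publication = N'MyPublication', @article = N'bigtable', @subscriber = N'SUBSCRIBER1'
GO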
pjl0808 (5/23/2012)
When I run the above "exec sp_addsubscription @sync_type = 'none', @article = 'BillOfMaterials'" it also fails, saying: 'Specify all articles when subscribing to a publication using concurrent snapshot processing.'
Yes. That's why I originally posted this: @article = 'all'.
It simply re-adds the missing article subscription(s) and does not cause a resync.
exec sp_addsubscription @sync_type = 'none', @article = 'all' ....
pjl0808 (5/23/2012)
Would it not be better to drop the entire subscription instead of running your first command, "exec sp_dropsubscription @article = 'bigtable' ....."?
You could do this, but it can get messy. In my environment we can't guarantee there will be zero changes to the data, so it's best limited to just the table.
May 26, 2012 at 3:22 am
pjl0808 (5/23/2012)
Thank you. I appreciate that info. BTW: I'm not trying to gloat about the size of the table or anything, just trying to emphasise we can't let this do a snapshot or anything of the sort. That would be a nightmare :).
P
BTW, I never thought that of you. I was just trying to inspire confidence that this can be done and that I do it regularly 🙂
May 30, 2012 at 3:32 am
Thanks. I was actually quoting your code, not using dynamic code.
My contract has ended, so I won't be putting this into production. Thanks anyway.
P