December 20, 2005 at 6:48 am
Hi,
I want to know an alternate and easy way of generating the snapshot file when taking a snapshot.
We have a huge database and it is growing day by day, so it is very difficult to upload a snapshot file of 970 MB. It takes 5 to 6 hours and is quite risky as well.
Is there any way to generate a snapshot file not for the whole database, but only for the part of the DB that has changed?
Or any other approach that you feel would be easier, less risky and less time-consuming.
Thanks for your contributions.
Noman
December 20, 2005 at 9:22 am
Sounds like this is snapshot replication? If the bulk of that data isn't being updated, perhaps you might consider moving to merge replication with only a daily merge task. You'll still need to periodically run the snapshot agent to keep the local snapshot data current (I have one client that does it once per week).
Yes, merge replication is more complicated and not without its pitfalls, but depending on what you're trying to accomplish, it may be your best bet. If the table structure is static and only the data changes, it may be a good fit.
BOL has some pretty good documentation on how it works and such, so I'd start there.
If on the off chance this is merge replication, snapshots only need to be pushed out if there is a schema change. Simply changing the processing schedule to fit your maintenance windows may suffice.
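For what it's worth, the merge agent is just a SQL Server Agent job, so the schedule is easy to change. A minimal sketch, assuming a hypothetical agent job name ('MyPub-Merge-Agent') -- the real name will be whatever replication generated on your server, so check the job list first:

-- Attach a daily 23:00 schedule to the (hypothetical) merge agent job.
-- Run this in msdb on the server where the agent job lives.
EXEC msdb.dbo.sp_add_jobschedule
    @job_name = N'MyPub-Merge-Agent',   -- hypothetical name; use your actual agent job name
    @name = N'Nightly merge',
    @enabled = 1,
    @freq_type = 4,                     -- daily
    @freq_interval = 1,                 -- every 1 day
    @active_start_time = 230000;        -- 23:00:00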
Without knowing more about your specific topology and exactly what is driving the need for replication, it's difficult to give a better recommendation than that.
Hope that helps a bit.
December 20, 2005 at 10:16 am
Thanks for your consideration. Actually we have three distributed databases, and the DB schema changes frequently -- roughly once a week.
We already have merge replication as well. You mean there is no method other than running the snapshot agent against the whole database on the publisher?
Thanks again
December 20, 2005 at 2:50 pm
Are you replicating every table in the database? The snapshot agent should only be working with "articles" (tables) within the publication itself. Merge replication should only use the snapshot taken on the publisher as a baseline once the initial publication is up and running.
If you are replicating every table, perhaps there's an opportunity to narrow down exactly what is replicated -- that could reduce your processing time...
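If you want to see and trim what's in the publication from T-SQL, here's a rough sketch with hypothetical names (publication 'MyMergePub', table dbo.Orders) and an example row filter -- adjust to your own schema:

-- List what the publication currently contains (run in the published database).
EXEC sp_helpmergearticle @publication = N'MyMergePub';

-- Publish a single table as an article, and only the rows that matter,
-- instead of snapshotting the whole database.
EXEC sp_addmergearticle
    @publication = N'MyMergePub',        -- hypothetical publication name
    @article = N'Orders',
    @source_owner = N'dbo',
    @source_object = N'Orders',          -- hypothetical table
    @type = N'table',
    @subset_filterclause = N'OrderDate >= ''20050101''';  -- example row filter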
December 21, 2005 at 11:42 am
Based on 2 points in your initial problem:
"... the snapshot file of 970 MB. it will take 5 to 6 hrs ..."
I'd also look into the network performance and throughput. 970 MB over 5 hours works out to roughly 55 KB/sec (well under half a megabit per second), which points at the pipe rather than the snapshot itself. The reason I make that statement is that I've performed this same type of activity with snapshots. The difference is that my snapshots were 10 GB and the time was still less than 5-6 hours.
Regards,
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."