Viewing 15 posts - 16 through 30 (of 54 total)
Roy, while I agree that the solution you're talking about would work, why spend the time designing a replication-like technology from scratch when SQL Server includes replication at no extra...
February 11, 2009 at 9:24 am
Another thing to keep in mind about merge replication is that it adds CPU overhead on your servers. In our production environment, we saw about a 5% increase...
February 11, 2009 at 8:22 am
From what you've described, it sounds like you should go with either Merge or Peer-to-Peer transactional replication.
Regular transactional replication does not seem to fit your situation due to...
February 11, 2009 at 7:28 am
Brilliant question! The nested comment scenario is probably more common than one might think. This is non-intuitive behavior, yet great to be aware of. Thank you!
February 11, 2009 at 6:43 am
I'm not sure that I would have a use for this code. I would be too worried about properly rebuilding ALL of my default constraints. In this particular case, it...
February 10, 2009 at 6:45 am
Hi Gary,
You may want to try these items.
1. Run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS to clear the cached query plans and data buffers.
2. Run a checkpoint command, or backup the...
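For reference, run together those commands look roughly like this (use them only on a test or dev server, since they clear the caches for the whole instance):
CHECKPOINT;              -- flush dirty pages to disk first so the buffer pool can be emptied
DBCC DROPCLEANBUFFERS;   -- clear the data cache
DBCC FREEPROCCACHE;      -- clear the cached query plans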
February 9, 2009 at 9:01 am
I might be mis-interpreting this, but it sounds like the purpose of replication here is for disaster recovery. Is this correct?
If it is meant to be used for DR,...
February 2, 2009 at 7:49 am
Are the errors you're getting related to publications or subscriptions failing? Can you post the errors here?
If you don't need replication working on your test machine, you could drop...
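For what it's worth, removing it usually comes down to something like this (the publication and database names below are only placeholders):
USE TestDB;   -- placeholder name for the published database
EXEC sp_dropsubscription @publication = N'TestPublication', @article = N'all', @subscriber = N'all';
EXEC sp_droppublication @publication = N'TestPublication';
EXEC sp_replicationdboption @dbname = N'TestDB', @optname = N'publish', @value = N'false';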
February 2, 2009 at 6:59 am
Thanks Frank.
There are two other posts I've found regarding this that helped me reach a solution.
http://sqljunkies.com/WebLog/ashvinis/archive/2005/05/25/15653.aspx
October 18, 2007 at 3:08 pm
Hi David,
Did you ever find a resolution for this? I'm faced with the same issue.
Any help would be appreciated!
Thanks,
Jim
October 17, 2007 at 3:50 pm
Hi All,
I've ended up using the incremental-update solution with a staging table. It turns out that fewer than 5% of the rows change at any given...
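For anyone finding this later, that kind of incremental update with a staging table looks roughly like this (table and column names here are made up for illustration):
-- load only the changed rows into a staging table, then apply them
UPDATE d
SET    d.Distance = s.Distance
FROM   dbo.ObjectDistance AS d
       JOIN dbo.ObjectDistanceStaging AS s
         ON s.ObjectID1 = d.ObjectID1
        AND s.ObjectID2 = d.ObjectID2;

INSERT INTO dbo.ObjectDistance (ObjectID1, ObjectID2, Distance)
SELECT s.ObjectID1, s.ObjectID2, s.Distance
FROM   dbo.ObjectDistanceStaging AS s
WHERE  NOT EXISTS (SELECT 1
                   FROM dbo.ObjectDistance AS d
                   WHERE d.ObjectID1 = s.ObjectID1
                     AND d.ObjectID2 = s.ObjectID2);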
September 4, 2007 at 6:57 am
The distances are actually not physical distances. The overall picture is a search system for objects that have multiple attributes. The grouping, or "clustering" as we call it, groups the objects by...
August 24, 2007 at 9:31 am
Thanks for the input, Jeff. I have started working on the view-based approach, but I will look into synonyms and see how they can benefit this.
In regards to your...
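As a rough illustration of the synonym idea (the names here are made up), a synonym can stand in front of whichever table is current and be repointed without changing the calling code:
CREATE SYNONYM dbo.CurrentDistances FOR dbo.ObjectDistance_A;
-- later, after new data has been loaded into the other table:
DROP SYNONYM dbo.CurrentDistances;
CREATE SYNONYM dbo.CurrentDistances FOR dbo.ObjectDistance_B;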
August 24, 2007 at 6:39 am
Matt,
I've tried your method of doing the deletes before the insert and update. The problem with this is that the indexes are being restructured during the delete. On 5 million...
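For context, the delete step in that approach is essentially the following (table names are placeholders); doing it in smaller batches is one common way to limit how much index maintenance each statement causes:
-- delete in batches so each statement touches a bounded number of index rows
WHILE 1 = 1
BEGIN
    DELETE TOP (50000) d
    FROM dbo.ObjectDistance AS d
         JOIN dbo.ObjectDistanceStaging AS s
           ON s.ObjectID1 = d.ObjectID1
          AND s.ObjectID2 = d.ObjectID2;

    IF @@ROWCOUNT = 0 BREAK;
END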
August 22, 2007 at 9:50 am
Hi Jeff,
Thanks for the input. The developers are generating these distances based on some clustering algorithms that they say can't be called at run-time on individual distances. My query on...
August 22, 2007 at 9:36 am