August 26, 2004 at 7:55 pm
This is going to sound really odd, but I'm asking anyhow. I'm brand new to replication and I'm only looking at it right now as a tool. Two questions:
1. When you pull from a distributor transactionally, does it send the updates to the subscriber in the same order they occurred, or does the distributor somehow consolidate the information? I.e., if a row is inserted, updated 10 times, and then deleted, are all 12 transactions sent, or are they filtered out partially or wholly? Also, if many rows are inserted out of order, are they sent as they occurred, or potentially reordered?
2. If you are creating a new subscriber, can you initialize it to the snapshot point without getting any of the updates that occurred since the snapshot (I'm REALLY new to this), and then later apply all the updates?
Why? I am the closest thing to a SQL Server DBA my shop has (I'm a programmer), and after 4 years of production activity I finally managed to talk management into allowing us to do some maintenance (CPU = 65% average). We've never defragged an index (unless accidentally, due to a table alter that rebuilt the index, or possibly the SQL Server 2000 upgrade). All our data tables are heaps. All fill factors are defaulted (I'm assuming to 70) unless almost randomly assigned to 90 (granted, it's moot since we don't defrag). And so on. I have a LOT of assessment to do, and proving the effects before throwing changes into production would be nice. Also, refining guesses week after week would take months and involve working periodically with a couple of bottlenecks... I mean, busy people.

So, what I'd like to do is take a snapshot of the production database, log it for a day/days/week, create a copy of the snapshot on a development server, apply fill factors and such, apply the logged updates via replication, and then assess fragmentation and query performance against the results. Then drop the database and start over with new fill factors and such. If (big if) that is possible, it would cut the initial refinement time down to days instead of months and let me prove the benefits (since I'd have a baseline). Since total database sizes are 2-4 GB each, without a tremendous amount of update activity (otherwise we'd have been SOL a long time ago), I'm guessing the replication can be done without making performance on the server noticeably worse than it already is. Any red flags?
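For the before/after measurement part of this plan, a minimal sketch of the SQL Server 2000-era commands involved (the table name dbo.Orders and the fill factor 80 are just examples, not anything from your setup):

    -- Baseline: measure fragmentation for every index on a table.
    -- Scan Density and Logical Fragmentation are the key columns.
    DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;

    -- Rebuild all indexes on the table with an explicit fill factor
    -- ('' means every index on the table).
    DBCC DBREINDEX ('dbo.Orders', '', 80);

    -- Re-measure after the replicated updates have been applied, to
    -- see how quickly the new fill factor fragments under real load.
    DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES;

One caveat: DBCC DBREINDEX only rebuilds indexes. Since your tables are heaps, the data pages themselves won't be reorganized unless you add a clustered index, which may be worth testing as part of the same exercise.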
August 27, 2004 at 11:07 am
1. SQL Server does not consolidate transactions in any way. In your first example, all 12 transactions are distributed to the subscribers. As for rows inserted "out of order", SQL Server applies the transactions in the order they appear in the log; it does not reorder them in any way.
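You can see this for yourself by browsing the pending commands in the distribution database with sp_browsereplcmds; each logged change shows up as its own command, ordered by log sequence number. A quick sketch (distribution is the default name for that database):

    USE distribution;
    -- Each row returned is one replicated command waiting for delivery;
    -- xact_seqno preserves the order the transactions hit the log.
    EXEC sp_browsereplcmds;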
2. The generation of a snapshot locks the table, so if you generated snapshots continuously, you'd run into performance problems on the production server. That's why snapshots are generated only occasionally, and the transactions necessary to bring the subscriber up to date are applied afterward.
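For your scenario, one way to get exactly the snapshot state and then apply the queued updates on demand is to let the Snapshot Agent initialize the subscriber, then keep the Distribution Agent stopped until you've taken your baseline. A sketch, assuming a pull setup where the agent runs as a SQL Agent job (the job name below is hypothetical; check yours in Enterprise Manager):

    -- After the snapshot has been applied at the subscriber, stop the
    -- Distribution Agent so queued transactions wait in the
    -- distribution database instead of being delivered.
    EXEC msdb.dbo.sp_stop_job @job_name = N'PUBSRV-MyDB-MyPublication-DEVSRV-1';

    -- ...run your baseline fragmentation and query tests here...

    -- When ready, start the agent to deliver all accumulated updates.
    EXEC msdb.dbo.sp_start_job @job_name = N'PUBSRV-MyDB-MyPublication-DEVSRV-1';

Just keep an eye on the distribution retention period; transactions are only held in the distribution database for so long, so don't leave the agent stopped longer than that window.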