April 3, 2007 at 10:54 am
It seems like this should be simple to figure out.
I have a distribution server with the CPUs pegged at 100%. Every night between 3 AM and 7:45 AM, jobs running on the publisher create what I presume is an overload of activity, causing the log reader agents on the distributor to wait for hours to read the log (my monitoring app, Symantec Precise, reports it as NETWORKIO wait). Distribution agents time out, subscriptions go out of sync, all the bad stuff you would expect.
Then at 7:45 AM, the floodgates open and a torrent of transactions bogs down my distributor until 4 PM. All the above bad stuff continues.
I'd like to track down the jobs (not necessarily SQL scheduled) that are causing this flood, and I want to start by viewing the insert/update/delete transactions to find out which articles are getting pounded.
Is there a preferred method for doing this, such as a log reader app like Lumigent or system tables/views/procs I am not aware of?
Thanks in advance
dnash
April 4, 2007 at 9:09 am
You can set SQL Profiler to run for a certain amount of time and log the results to a SQL table, then run queries on that table to find out which transactions were causing the most CPU usage and/or were the longest running. That's a place to start - hopefully from there you can pin down the application or import/export job and start examining why it's taking so many resources to do its job.
Run Profiler on a different machine than the server (your workstation, a different server, etc.).
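If it helps, here's a rough sketch of the kind of query you could run against the trace table afterward. I'm assuming you saved the trace to a table called dbo.ProfilerTrace and captured the usual TextData, CPU, Duration, Reads, StartTime, ApplicationName, and LoginName columns - adjust the table name, columns, and time window to whatever you actually trace. Note that Duration is in microseconds on SQL 2005 traces but milliseconds on SQL 2000.

-- Top 20 statements by total CPU in the problem window.
-- dbo.ProfilerTrace and the date range are placeholders.
SELECT TOP 20
    SUBSTRING(CAST(TextData AS NVARCHAR(4000)), 1, 200) AS StatementStart,
    COUNT(*)             AS Executions,
    SUM(CPU)             AS TotalCPU,
    SUM(Duration)        AS TotalDuration,
    SUM(Reads)           AS TotalReads,
    MAX(ApplicationName) AS SampleApp,
    MAX(LoginName)       AS SampleLogin
FROM dbo.ProfilerTrace
WHERE StartTime >= '2007-04-03 03:00' AND StartTime < '2007-04-03 07:45'
GROUP BY SUBSTRING(CAST(TextData AS NVARCHAR(4000)), 1, 200)
ORDER BY SUM(CPU) DESC;

Swap the GROUP BY to ApplicationName or LoginName first if you'd rather identify the offending job or app before drilling into individual statements.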