November 17, 2010 at 7:25 am
What about just running perfmon all the time? On our DBA server we monitor a few hundred counters from all our servers.
November 17, 2010 at 7:47 am
Hi Larry,
There is a SQL query that can tell you which queries are consuming the most CPU:
http://www.revenmerchantservices.com/post/2010/04/23/Sql-server-cpu-utilisation.aspx
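For readers who can't follow the link, a query in the same spirit can be built on sys.dm_exec_query_stats (SQL Server 2005 and later). This is a sketch, not the exact query from the linked post:

```sql
-- Top 10 cached statements by cumulative CPU time.
-- total_worker_time is in microseconds, so divide by 1000 for ms.
SELECT TOP (10)
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text,
              (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Note this only covers plans still in the cache, so it is a view of recent history rather than a complete accounting.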
Thanks, Satalaj.
November 17, 2010 at 7:52 am
Does the second call of the SP wait for the first call to end? It should skip the rows locked by the first call and delete the next ones.
Make sure you have the proper indexes on your table and that your queries always use them. Are you missing a ROWLOCK hint (without it, coarser locks will use more resources in this case)?
Let me know if this helps.
November 17, 2010 at 7:57 am
This is great info and something we encounter quite often.
Is anyone aware of other articles that might describe ways to determine specific queries or processes that might be causing CPU utilization to be high?
November 17, 2010 at 8:00 am
Hi Boxer,
If you look at that DELETE statement, it uses two locking hints (UPDLOCK and READPAST),
meaning that if the first SP locks the first 100 rows, the second SP, thanks to READPAST, skips those locked rows and takes the next 100 committed, unlocked rows instead.
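The pattern being described can be sketched roughly like this; the table and column names (dbo.WorkQueue, QueueId) are made up for illustration:

```sql
-- Batch-delete from a queue table. UPDLOCK takes update locks on the
-- selected rows; READPAST makes a concurrent call skip rows already
-- locked by another session instead of blocking on them.
WITH batch AS (
    SELECT TOP (100) QueueId
    FROM dbo.WorkQueue WITH (UPDLOCK, READPAST, ROWLOCK)
    ORDER BY QueueId
)
DELETE FROM batch;
```

With this shape, two sessions running the same statement drain disjoint batches of the queue concurrently rather than serializing on the same rows.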
Thanks, Satalaj.
November 17, 2010 at 8:06 am
Reply to rance.malcom: Why don't you use SQL Server Profiler to run a trace and check instantly?
November 17, 2010 at 8:12 am
If it's SQL 2005 or 2008, then DMVs like sys.dm_os_ring_buffers, sys.dm_os_wait_stats, and sys.dm_os_schedulers can help.
If interested, refer to the following two whitepapers.
Waits and Queues (see the CPU pressure section): http://msdn.microsoft.com/en-us/library/cc966413.aspx
Troubleshooting Performance Problems (see the CPU bottleneck section): http://msdn.microsoft.com/en-us/library/cc966540.aspx
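To illustrate one of those checks: a quick CPU-pressure test uses sys.dm_os_schedulers, where a consistently non-zero runnable_tasks_count means tasks are ready to run but waiting for CPU. A rough sketch:

```sql
-- CPU-pressure check on the visible (user) schedulers.
-- Sustained runnable_tasks_count > 0 suggests CPU pressure.
SELECT scheduler_id,
       current_tasks_count,
       runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;  -- exclude hidden/internal schedulers
```

Sample this periodically rather than once; a single non-zero reading is normal, a trend is the signal.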
Regards,
Raj
November 17, 2010 at 8:41 am
Seems like the idea of checking only SQL Server's CPU utilization instead of the whole box's CPU utilization is a somewhat flawed concept. Heaven forbid there be other apps running on the same box, but if there were, and an app did soak up the CPU from SQL Server, that would still adversely affect SQL Server in a way that would require DBA attention, don't you think?
To me, monitoring the whole box's CPU utilization and being alerted regardless of *what* is using the CPU, as Geoff's approach does, is the best way to go... is there a compelling reason to only care about SQL Server's CPU utilization?
November 17, 2010 at 10:02 am
Hi Satalaj,
Using UPDLOCK in conjunction with READPAST gives the best performance for processing queues. READPAST, however, needs an index on the ORDER BY column to work properly. Do you have a clustered index on the table?
It could be that without the index, a sort operation is needed to satisfy the ORDER BY clause. The engine might be escalating the lock to a less granular level (page / extent / table), increasing the probability of concurrency contention and locking resources needed by the second session. Just a guess.
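If that guess is right, an index on the ORDER BY column (ideally clustered, as suggested above) avoids the sort and keeps the locks at row granularity. The table and column names here are hypothetical:

```sql
-- Hypothetical queue table: cluster on the column the dequeue query
-- orders by, so READPAST can walk the index in order and skip only
-- individually locked rows rather than hitting escalated locks.
CREATE CLUSTERED INDEX CIX_WorkQueue_QueueId
    ON dbo.WorkQueue (QueueId);
```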
Rgds!!
November 17, 2010 at 10:03 am
Interesting article.
But CPU utilization is just one of many counters worth tracking. Virtual memory, page table entries, page faults, disk volume latency, network - not to mention scores of SQL Server-specific metrics that should be trended so you can see issues over time before they bite you (page splits, etc.).
Rather than scripting your own monitoring, it seems a far better use of a DBA's time to just use a system that does all the above for you, alerts you in the event of issues, and lets you see trends across your systems and metrics. IMO, DBAs should focus on where they add the most value - business logic, improving query execution, reports (and digging into what reports are supposed to do, from a business point of view, so they can present actionable data).
That is truly valuable stuff, that can't be automated.
Leave things that can be automated to tools that automate them, rather than spending expensive DBA time on them.
For a few hundred a month to cover tens of databases (or other systems), something like LogicMonitor will save you a ton of time in monitoring, alerting and trending. Not to mention better uptime and availability.
http://www.logicmonitor.com/monitoring/databases/sql-server-monitoring/
November 17, 2010 at 10:06 am
Satalaj, I think your problem is most probably indexing. Make sure you have the proper indexes on your table and that your queries always use them. Having no index, or the incorrect one, can make SQL Server take a page lock or a table lock, which invalidates the whole model.
November 17, 2010 at 10:10 am
sfrancis: Does it add load to the system? How often does it run?
November 17, 2010 at 10:27 am
sfrancis 18850 (11/17/2010)
Interesting article. But CPU utilization is just one of very many counters to be tracking. Virtual memory, page table entries, page faults, disk volume latency, network - not to mention scores of SQL server specific metrics that should be trended so you can see issues over time before they bite you (page splits, etc).
Rather than scripting your own monitoring, it seems a far better use of a DBAs time to just use a system that does all the above for you, and alerts you in the event of issues, and allows you to see trends of your systems and metrics. IMO, DBAs should focus on where they add the most value - business logic, improving query executions, reports (and digging into what reports are supposed to do, from a business point of view, so they can present actionable data.)
That is truly valuable stuff, that can't be automated.
Leave things that can be automated to tools that automate them, rather than spending expensive DBA time on them.
For a few hundred a month to cover tens of databases (or other systems), something like LogicMonitor will save you a ton of time in monitoring, alerting and trending. Not to mention better uptime and availability.
http://www.logicmonitor.com/monitoring/databases/sql-server-monitoring/
Third-party tools are obviously the best choice. Idera, Confio, and Red Gate all have tools far superior to my little VBScript. But at the beginning of the article, I stated this is for places where $$$ for third-party tools is not available.
For example, where I am currently contracted, I use this while I wait for the PO to be approved for SQL Monitor 2.
Once I have SQL Monitor 2 up and running, I will no longer use this method.
November 17, 2010 at 10:34 am
Agree with Geoff 🙂