April 16, 2022 at 6:09 pm
This is going to depend a whole lot on what you mean by a "batch". What do you mean by that here?
--Jeff Moden
Change is inevitable... Change for the better is not.
April 16, 2022 at 6:36 pm
By "batch", I mean one or more statements. I want to know if I can add any SQL statements to the batch, send that new batch to the SQL Server, and find out how many milliseconds of CPU utilization the whole batch consumed.
April 16, 2022 at 7:03 pm
One way is to use the following dynamic management views...
https://docs.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views
The "best" way would be to sample the view for the current SPID (@@SPID) that will run the "batch" for the current amount of CPU time, sample it again at the end of the "batch", and take the difference between the two values.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 18, 2022 at 5:27 am
Hi Jeff. Thanks, but unfortunately, the CPU usage in sys.dm_exec_requests and sys.dm_exec_query_stats is only accurate to the millisecond, and I need more precision; otherwise the result will almost always be zero.
April 18, 2022 at 3:48 pm
This is the third post related to CPU usage you have made.
What are you trying to determine? What issue are you trying to solve?
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
April 18, 2022 at 4:00 pm
Hi Michael. Without going into my reasons, I need to know the sum of CPU usage for the ad-hoc batches of SQL statements I execute, on a per-database and per-time-window basis. I'm currently doing this using the Query Store, which gives me the CPU usage, per database, in 1- or 5-minute windows.

That's great and meets my needs, but the Query Store also stores plans, which I don't need, and plans get discarded by its automatic cleanup, which is based on age and importance (so a little vague, especially when dealing with ad-hoc batches). I'm concerned that I'll lose the CPU usage of those discarded plans. I'd much rather know the CPU usage per batch of statements executed, but it appears that the options for doing that are limited to millisecond resolution, so the results are often zero, which is no help. I have, indeed, asked multiple questions related to the same issue, to find out if there are approaches I wasn't aware of.
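For reference, the per-window Query Store approach described above can be sketched roughly like this (a sketch, not the poster's actual code; it assumes Query Store is enabled on the database with INTERVAL_LENGTH_MINUTES set to 1 or 5, and relies on avg_cpu_time being reported in microseconds):

```sql
-- Total CPU time per Query Store collection interval,
-- for the current database only.
SELECT  i.start_time,
        i.end_time,
        SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_us
FROM    sys.query_store_runtime_stats          AS rs
JOIN    sys.query_store_runtime_stats_interval AS i
          ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
GROUP BY i.start_time, i.end_time
ORDER BY i.start_time;
```

Because Query Store runtime statistics are in microseconds, this sidesteps the millisecond-rounding problem at the window level, though not at the individual-batch level.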
April 18, 2022 at 4:10 pm
You have told us what you want to do, but what kind of decisions are you trying to make based upon this data? Why is something that executes in less than a millisecond a concern?
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
April 18, 2022 at 5:27 pm
Then, instead of reinventing the wheel, you should identify the monitoring software that fits your needs and buy it. It'll be cheaper in the long run and it will be virtually error-free.
Even then, keeping every execution plan "forever" probably isn't an option anywhere and it doesn't make sense to keep "the good ones" because they're likely not a part of the problem you're trying to solve, which you haven't actually told us about. On that note...
I'll also tell you that us knowing the reason for your request is like so many other requests... it's usually the key to providing the correct answer. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
April 18, 2022 at 8:59 pm
My goal is to know how much CPU time is used on a per-database, per-time-window basis. I'm not comparing anything, although I may decide to move a database to a new server if I find that a database is a resource hog. Currently, I'm using the Query Store: my software periodically retrieves from it the CPU usage values for any 5-minute time windows it hasn't previously retrieved, and stores those values in a database. That works great. It would be better if I could store the CPU usage for each ad-hoc batch of statements sent to each database, keyed by the date/time each batch started and completed, which is why I'm asking this particular question.
April 18, 2022 at 9:10 pm
To explain...
Each database is used for a different customer. My software processes stuff for each customer. If there's excessive CPU usage, that bumps up their invoice, based (among other things) on the total CPU utilization each month. I'm storing that usage in a database by time, which makes it easy to sum up all the time windows for that month when invoicing, but can also serve as justification if a customer asks for proof (e.g. showing spikes). Since my software is sending the queries (which are ad-hoc, so single-use execution plans), it would be ideal to tie CPU usage to each individual query by date/time. Hence my original question.
April 19, 2022 at 4:56 am
Ah... got it. Now I know why the concern over milliseconds. Thanks for the feedback.
One of the issues is, of course, not bumping up the CPU time with a utility that itself runs quite a bit to measure such a thing, OR being able to filter those measurements out so that you don't bill the customer for them. And that's one of the things it appears you're also trying to accomplish.
The only way that I've seen folks do this accurately is to stand up an instance per customer rather than a database per customer. I don't know of a DMV that accumulates CPU time used by database.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 19, 2022 at 5:52 am
The DMVs I've looked into report CPU usage with only millisecond accuracy, which is fine for a large enough time window, but not at the batch level. It looks like I'll be sticking with the Query Store, and maybe switching from a 5-minute to a 1-minute window. I think that's as close as I'm going to be able to get.
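One possible alternative worth testing, sketched here as an assumption rather than a confirmed fix: the sqlserver.sql_batch_completed Extended Event reports cpu_time in microseconds, which is finer-grained than the DMVs discussed above. A minimal session (the session name, file name, and database filter are illustrative):

```sql
-- Capture CPU time per completed ad-hoc batch, per database,
-- to an event file for later aggregation.
CREATE EVENT SESSION [batch_cpu] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.database_name)
     WHERE sqlserver.database_id > 4)   -- skip system databases
ADD TARGET package0.event_file
    (SET filename = N'batch_cpu.xel')
WITH (EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS);

ALTER EVENT SESSION [batch_cpu] ON SERVER STATE = START;
```

The trade-off is that someone has to read and aggregate the .xel files, and the session itself consumes some CPU, which ties back to Jeff's point about not billing customers for the measurement overhead.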
April 19, 2022 at 1:34 pm
Having faced something similar a few years ago when it came to billing clients, I kind of threw up my hands in desperation at some of the minuscule calculations that were required. The system was designed such that a unit price was required. So, when there was something like a box of 100,000 screws that cost 12.50, the unit prices had lots of decimal places.
I made a suggestion that we needed a minimum billing amount. I did some calculations based upon a couple of different minimum amounts, and provided the bean counters with the figures. I showed them that we would make more money, reduce costs, etc. The code got a lot easier when I flashed the dollar signs at them.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/