May 23, 2022 at 11:09 am
Hi,
Recently, we got high memory alerts from our monitoring tool. I have lowered max server memory for the time being.
Using SQL Server 2016 Enterprise Edition.
From Task Manager I can see sqlservr.exe is the one taking up 85% of memory (i.e. the working set value).
Now I want to know, within SQL Server, which queries are taking up the most memory.
I used sp_WhoIsActive but I am seeing "used_memory" column values like 2, 3, 9, 79,243, and 4,221. I am not sure what the value indicates. Is it a number of 8KB pages?
Is there a proper way to tell which statements are taking the most memory in MB/GB? I want to store them in a table so the information can be shared with the developers.
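Something like the sketch below is what I am hoping for. I am assuming used_memory is a count of 8KB pages, and I am using sp_WhoIsActive's @return_schema / @destination_table parameters to do the capture; please correct me if either assumption is wrong. dbo.WhoIsActiveLog is just a name I made up.
-- Sketch only: assumes used_memory is a count of 8KB pages.
-- @format_output = 0 keeps the numbers numeric (no comma formatting).
DECLARE @schema nvarchar(max);
EXEC dbo.sp_WhoIsActive @format_output = 0, @return_schema = 1, @schema = @schema OUTPUT;
SET @schema = REPLACE(@schema, '<table_name>', 'dbo.WhoIsActiveLog'); -- hypothetical table name
EXEC (@schema);

EXEC dbo.sp_WhoIsActive @format_output = 0, @destination_table = 'dbo.WhoIsActiveLog';

SELECT collection_time
     , sql_text
     , used_memory_MB = used_memory * 8 / 1024.0 -- 8KB pages -> MB
FROM dbo.WhoIsActiveLog
ORDER BY used_memory DESC;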
Thank You!
May 23, 2022 at 11:39 am
Try taking a look at these 2 management views, starting with sys.dm_exec_query_stats. Change the ORDER BY in your queries to get the highest use per I/O, CPU, or memory.
May 23, 2022 at 1:47 pm
Memory used is not a particularly useful metric in SQL Server. It grabs as much memory as it is allowed and never releases it, so that it can keep as much data as possible cached in memory.
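If you want to see that in action, a minimal sketch like this compares what the process is actually holding against the configured cap:
-- What the SQL Server process is actually using right now
SELECT physical_memory_in_use_kb / 1024 AS physical_memory_in_use_MB
     , memory_utilization_percentage
FROM sys.dm_os_process_memory;

-- The configured ceiling it will grow toward (and then sit at)
SELECT name, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';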
May 23, 2022 at 6:09 pm
DesNorton wrote: Try taking a look at these 2 management views, starting with sys.dm_exec_query_stats. Change the ORDER BY in your queries to get the highest use per I/O, CPU, or memory.
Just curious... I don't see how either of those relate to actual memory used. Logical READs <> Memory used. I can write a query against a Tally Table (for example) that will consume millions of reads even though the table itself is only 19 pages long. If it's already in memory, the physical reads will be 0.
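If actual pages held in memory is the goal, something like this rough sketch against sys.dm_os_buffer_descriptors gets closer. Be aware it can be slow on servers with a lot of RAM, so treat it as a one-off diagnostic rather than something to run on a schedule:
-- Pages currently held in the buffer pool, by database.
-- This is memory actually occupied by data, not logical read counts.
SELECT DB_NAME(database_id) AS database_name
     , COUNT(*) AS buffer_pages
     , COUNT(*) * 8 / 1024 AS buffer_MB -- 8KB pages -> MB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_pages DESC;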
--Jeff Moden
Change is inevitable... Change for the better is not.
May 24, 2022 at 7:19 am
Jeff Moden wrote: DesNorton wrote: Try taking a look at these 2 management views, starting with sys.dm_exec_query_stats. Change the ORDER BY in your queries to get the highest use per I/O, CPU, or memory.
Just curious... I don't see how either of those relate to actual memory used. Logical READs <> Memory used. I can write a query against a Tally Table (for example) that will consume millions of reads even though the table itself is only 19 pages long. If it's already in memory, the physical reads will be 0.
So, if I had to troubleshoot high memory, then what queries should I be tuning, or what things should I be looking at?
May 24, 2022 at 11:12 am
Jeff Moden wrote: DesNorton wrote: Try taking a look at these 2 management views, starting with sys.dm_exec_query_stats. Change the ORDER BY in your queries to get the highest use per I/O, CPU, or memory.
Just curious... I don't see how either of those relate to actual memory used. Logical READs <> Memory used. I can write a query against a Tally Table (for example) that will consume millions of reads even though the table itself is only 19 pages long. If it's already in memory, the physical reads will be 0.
I am not a DBA, so I never get to look at this stuff.
That said, if I were looking for high memory usage, I would look at the *_grant_kb and *_used_grant_kb columns in sys.dm_exec_query_stats.
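For queries that are running right now (rather than plan-cache history), there is also sys.dm_exec_query_memory_grants. A minimal sketch:
-- Memory grants for currently executing queries (live view, not history)
SELECT mg.session_id
     , mg.requested_memory_kb / 1024 AS requested_MB
     , mg.granted_memory_kb / 1024 AS granted_MB
     , mg.used_memory_kb / 1024 AS used_MB
     , t.text AS query_text
FROM sys.dm_exec_query_memory_grants AS mg
CROSS APPLY sys.dm_exec_sql_text( mg.sql_handle ) AS t
ORDER BY mg.granted_memory_kb DESC;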
May 24, 2022 at 12:02 pm
The queries would look something like this
SELECT TOP (10)
       dbname = DB_NAME( qt.dbid )
     , qt.objectid
     , qs.execution_count
     , query_text = SUBSTRING(
           qt.text
         , qs.statement_start_offset / 2 + 1
         , ( CASE
                 WHEN qs.statement_end_offset = -1 THEN DATALENGTH( qt.text )
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset ) / 2 + 1 ) -- +1 so the final character is not truncated
     -- CPU TIME - in microseconds (but only accurate to milliseconds)
     , avg_worker_time = qs.total_worker_time / qs.execution_count
     , qs.total_worker_time, qs.last_worker_time, qs.min_worker_time, qs.max_worker_time
     -- DURATION - in microseconds (but only accurate to milliseconds)
     , avg_elapsed_time = qs.total_elapsed_time / qs.execution_count
     , qs.total_elapsed_time, qs.last_elapsed_time, qs.min_elapsed_time, qs.max_elapsed_time
     -- MEMORY GRANTS - in KB
     , qs.total_grant_kb, qs.last_grant_kb, qs.min_grant_kb, qs.max_grant_kb
     , qs.total_used_grant_kb, qs.last_used_grant_kb, qs.min_used_grant_kb, qs.max_used_grant_kb
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS qt
ORDER BY qs.total_grant_kb DESC -- Total GRANT memory
--ORDER BY qs.total_worker_time DESC -- Total CPU time
--ORDER BY qs.total_elapsed_time DESC -- Total query duration
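Since you mentioned wanting to store the results in a table for the developers, a sketch along these lines would do it. dbo.TopMemoryQueries is a made-up name; adjust the columns to taste:
-- Hypothetical logging table; capture_date records when each snapshot was taken
IF OBJECT_ID('dbo.TopMemoryQueries') IS NULL
    CREATE TABLE dbo.TopMemoryQueries
    (
        capture_date    datetime2     NOT NULL DEFAULT SYSDATETIME()
      , dbname          sysname       NULL
      , query_text      nvarchar(max) NULL
      , execution_count bigint        NULL
      , total_grant_kb  bigint        NULL
      , max_grant_kb    bigint        NULL
    );

INSERT INTO dbo.TopMemoryQueries ( dbname, query_text, execution_count, total_grant_kb, max_grant_kb )
SELECT TOP (10)
       DB_NAME( qt.dbid )
     , qt.text
     , qs.execution_count
     , qs.total_grant_kb
     , qs.max_grant_kb
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text( qs.sql_handle ) AS qt
ORDER BY qs.total_grant_kb DESC;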
May 24, 2022 at 1:34 pm
Jeff Moden wrote: DesNorton wrote: Try taking a look at these 2 management views, starting with sys.dm_exec_query_stats. Change the ORDER BY in your queries to get the highest use per I/O, CPU, or memory.
Just curious... I don't see how either of those relate to actual memory used. Logical READs <> Memory used. I can write a query against a Tally Table (for example) that will consume millions of reads even though the table itself is only 19 pages long. If it's already in memory, the physical reads will be 0.
So, if I had to troubleshoot high memory, then what queries should I be tuning, or what things should I be looking at?
Once again, why have you determined that this is a problem that needs troubleshooting?
May 25, 2022 at 3:01 am
I've not had to do such a thing because I've never had the same problem. If I did, I'd probably start with the following search...
--Jeff Moden
Change is inevitable... Change for the better is not.
May 25, 2022 at 3:03 am
Maybe a bit more directly....
https://www.google.com/search?q=how+to+find+queries+taking+the+most+memory+in+sql+server
--Jeff Moden
Change is inevitable... Change for the better is not.
May 25, 2022 at 6:32 pm
I think you are wasting your time. SQL Server should use all the memory made available to it. You WANT it to use the available memory. I would be more concerned if usage dropped well below 100% of the configured memory on a database server.
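If you want a signal that memory is genuinely under pressure (as opposed to just fully used, which is healthy), one common check is page life expectancy. A minimal sketch:
-- Page life expectancy: roughly how long, in seconds, a data page survives in the buffer pool.
-- A fully used buffer pool with a steady PLE is normal; consistently low or
-- steadily falling values suggest real memory pressure.
SELECT object_name, counter_name, cntr_value AS seconds
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';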
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/