Memory Management

  • Hello DB Gurus,

    We have about 700 databases on a production SQL Server instance. Some are small, but they all consume memory. The host has 96 GB,

    SQL Server is set to 75 GB, and I see overall memory utilization at 90%. What are the steps to reduce / watch SQL Server memory?

    I know that once memory utilization reaches 98%, I can reduce Max Server Memory to 70 GB and overall memory utilization will drop, but I need to know more. This is a virtual machine on a physical host, and there is no more memory that can be given to this VM.


    Thanks in Advance

  • What problem are you attempting to solve here?

    What you're observing is normal SQL Server memory behavior. SQL Server will, if it deems necessary, use all the memory it's configured to use (via the Maximum Server Memory setting). If you reduce this setting, it will remove cached objects from memory.

    Going deeper: SQL Server can only operate on data that's cached in memory. Which means that the more memory you give it, the more it can cache, and therefore the faster it can (potentially) operate on that data. With less memory, you have to perform more disk I/O to read and operate upon data, which may translate to higher wait stats and slower response times for your users.

    The mere presence of a database on a SQL Server instance doesn't require much memory. It's using the database that starts to ask for memory in larger amounts. Once you start using the database, data will be read from disk and cached in memory so it can be used. If SQL Server fills that memory, the least-recently-used data will be evicted first to make room.

    With 700 databases, your instance would probably happily use twice the memory you have now and just let data live in memory longer before it has to be flushed out. If your VM host is memory constrained and you're being asked to reduce your footprint, you'll need to test memory size vs. acceptable response time to find out how low you can go before your systems start feeling the effects of memory pressure in SQL Server.
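    If you want to see where that memory is actually going, a quick sketch like the one below (against the buffer pool DMV) shows roughly how much cache each database is holding, plus the configured Max Server Memory. The math assumes 8 KB pages; treat it as a starting point, not gospel.

    -- Rough sketch: buffer pool usage per database (8 KB pages -> MB)
    SELECT
        CASE WHEN database_id = 32767 THEN N'ResourceDb'
             ELSE DB_NAME(database_id) END AS database_name,
        COUNT(*) * 8 / 1024 AS buffer_pool_mb
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY buffer_pool_mb DESC;

    -- And the instance's configured memory cap
    SELECT name, value_in_use
    FROM sys.configurations
    WHERE name = N'max server memory (MB)';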

  • Are there a lot of OS-level page swaps happening? If not, there may not be any problem. Here's an article about monitoring that:

    https://docs.microsoft.com/en-us/azure/monitoring/infrastructure-health/vmhealth-windows/winserver-memory-pagespersec

    Typically you want most of your RAM to be used by SQL Server. If you go in and lower Max Server Memory, you are forcing data out of RAM that SQL Server will just have to read in again, causing more disk I/O. If the server has 96 GB of RAM in it, setting Max Server Memory to 70 GB feels a bit low to me. What else were you going to do with that memory?

    Here's some advice about Max Server Memory:

    https://www.brentozar.com/blitz/max-memory/
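
    If you'd rather watch this from inside SQL Server, something along these lines (just a sketch) pulls a couple of the counters people usually look at for memory pressure; a steadily low Page life expectancy or a non-zero Memory Grants Pending is worth investigating.

    -- Sketch: memory-pressure counters exposed by SQL Server itself
    SELECT object_name, counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy', N'Memory Grants Pending');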


  • Thank you for the responses. Also, is there a connection between CPU pressure and memory? Say I notice constant high CPU (90%) on a SQL Server instance and cannot add more CPUs; will increasing the memory help? I read about this somewhere but wanted to understand the internals and the connection.


    Thanks,

  • Also, the SQL Server error log is consuming memory and space. I noticed it was a 2 GB file consuming 1-2% of memory; after I cleared it, memory utilization dropped to 80%. How do I prevent that from happening again? I mean, how do I have 10 small error log files instead of 1 giant file?

    sp_cycle_errorlog

  • sqlguy80 wrote:

    Also, is there a connection between CPU pressure and memory? Say I notice constant high CPU (90%) on a SQL Server instance and cannot add more CPUs; will increasing the memory help?

    If it's just a short spike to 90% CPU usage, that isn't bad, but sustained 90% usage is a problem. How often does this happen, and for how long?

    If you think it's SQL Server, look at the queries and their execution plans that are causing the most CPU usage:

    -- Top 10 statements in the plan cache by total CPU time, with their plans
    SELECT TOP 10
        qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds,
        qs.total_elapsed_time / 1000000.0 AS total_seconds,
        qs.total_worker_time / qs.execution_count / 1000000.0 AS avg_cpu_seconds,
        qs.total_worker_time / 1000000.0 AS total_cpu_seconds,
        qs.total_logical_reads, qs.total_logical_writes,
        qs.total_logical_reads / qs.execution_count AS average_logical_reads,
        qs.total_logical_writes / qs.execution_count AS average_logical_writes,
        qs.execution_count, qs.last_execution_time, qs.creation_time,
        OBJECT_SCHEMA_NAME(qt.objectid, qt.dbid) AS schema_name,
        OBJECT_NAME(qt.objectid, qt.dbid) AS object_name,
        o.modify_date,
        qp.query_plan, qs.sql_handle, qs.plan_handle,
        DB_NAME(qt.dbid) AS database_name,
        SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
                  ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(qt.text)
                        ELSE qs.statement_end_offset
                    END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats qs
    OUTER APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
    OUTER APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    ORDER BY total_cpu_seconds DESC;

    There is a good chance it's things like hash joins and sorts that are causing the CPU usage.
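
    If you want to confirm the hash/sort theory, a sketch like this lists cached plans that contain Hash Match or Sort operators. Shredding plan XML isn't cheap, so run it off-hours and adjust the TOP to taste.

    -- Sketch: cached plans that contain Hash Match or Sort operators
    SELECT TOP 10
        cp.usecounts,
        cp.objtype,
        qp.query_plan
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
    WHERE qp.query_plan.exist(
        'declare default element namespace "http://schemas.microsoft.com/sqlserver/2004/07/showplan";
         //RelOp[@PhysicalOp="Hash Match" or @PhysicalOp="Sort"]') = 1
    ORDER BY cp.usecounts DESC;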

  • Well, having to read data from disk (unless you're on SQL 2019 BDC) introduces the need for CPU cycles to do so (I would, for argument's sake, assume the PLE is close to non-existent). Highly concurrent DB file access (as in: one DB file for a large / heavily utilized DB) would also increase CPU usage, even if some threads will only be waiting for their slot to access that file.

    SQL Server is able to handle around 200 MB/s of sequential DB file access per core, so if your storage is capable of delivering 1 GB/s of throughput, going beyond 5 cores for pure "file access speed" will not help either. It's a bit different if there is other work happening concurrently during execution (say, a sort on the result set); just don't expect that adding a metric ton of CPU cores will mitigate memory pressure very well. It might work to a certain extent, but consider that an edge case.

    And that is before any join or sort happens (which doesn't mean those couldn't be the root cause of the CPU usage).
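
    One way to see which side of that you're on - just a rough sketch - is to check the top waits since the last restart: lots of PAGEIOLATCH_* time points at reading from disk (more memory would help), while a high signal-wait share or SOS_SCHEDULER_YIELD points at genuine CPU pressure.

    -- Sketch: top waits since last restart (filtering out a few idle/background waits)
    SELECT TOP 10
        wait_type,
        waiting_tasks_count,
        wait_time_ms / 1000.0 AS wait_time_seconds,
        signal_wait_time_ms / 1000.0 AS signal_wait_seconds
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP',
                            N'XE_TIMER_EVENT', N'REQUEST_FOR_DEADLOCK_SEARCH',
                            N'LOGMGR_QUEUE', N'CHECKPOINT_QUEUE', N'WAITFOR')
    ORDER BY wait_time_ms DESC;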

  • sqlguy80 wrote:

    Also, the SQL Server error log is consuming memory and space. I noticed it was a 2 GB file consuming 1-2% of memory; after I cleared it, memory utilization dropped to 80%. How do I prevent that from happening again? I mean, how do I have 10 small error log files instead of 1 giant file?

    sp_cycle_errorlog


    On a regular basis, cycling the log is a good thing.  On some servers, I do it nightly.  On others, I do it weekly.  How often you do it will depend upon what works for you.
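
    For the "10 small files instead of one giant one" part: the number of error logs SQL Server keeps is an instance setting (in SSMS, right-click SQL Server Logs > Configure). Below is a sketch of doing the same thing in T-SQL, assuming you're comfortable writing the instance registry key, with the cycle call meant to go in a scheduled Agent job.

    -- Sketch: keep 10 error log files instead of the default 6
    USE master;
    GO
    EXEC xp_instance_regwrite
        N'HKEY_LOCAL_MACHINE',
        N'Software\Microsoft\MSSQLServer\MSSQLServer',
        N'NumErrorLogs',
        REG_DWORD,
        10;
    GO
    -- Then schedule this in an Agent job (nightly or weekly) so no single log grows huge
    EXEC sp_cycle_errorlog;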

    Michael L John
    If you assassinate a DBA, would you pull a trigger?
    To properly post on a forum:
    http://www.sqlservercentral.com/articles/61537/
