tempdb memory

  • I can understand tempdb getting bigger if certain queries are running, etc. Usually I see that in terms of disk space, not memory consumption. However, given the sizes of the databases we have on this server, it seems out of place for tempdb to be that large.

    I came from an environment whose main database was over 100GB in size, with a couple of ancillary databases adding probably another 100GB. These ran on a server with 32GB of memory. The tempdb would grow significantly in file size, but I don't ever recall it consuming more than 1GB of memory in the buffer pool.

    Needless to say, my main goal is still to determine what tempdb is doing with the memory it is consuming. If it's because one of the databases is processing something heavy, that's fine, but I want to know which one so I can isolate that database and keep it from impacting performance. Perhaps 3-4 GB isn't unusual for whatever it's processing, but I need to know why it's using that much and which database is making the biggest demands on it, so that I can isolate it.
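
    One way I'm thinking of checking which database is holding buffer pool memory is a rough sketch like the one below (assuming VIEW SERVER STATE permission and SQL Server 2005 or later): group the buffer descriptors by database to see how much of the cache each database, including tempdb, currently holds.

    -- Buffer pool usage per database (sketch)
    SELECT CASE WHEN database_id = 32767 THEN 'ResourceDb'
                ELSE DB_NAME(database_id)
           END AS [Database],
           COUNT(*) / 128 AS [Buffer Pool MB],
           COUNT(*) AS [Buffer Pages]
    FROM sys.dm_os_buffer_descriptors
    GROUP BY database_id
    ORDER BY COUNT(*) DESC;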

  • Ok, so I'll go back to looking at expensive queries, looking at waits, and possibly running a server-side trace if need be.
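
    For the wait check, a rough starting sketch (assuming VIEW SERVER STATE permission; the excluded wait types are just common background/idle waits):

    -- Top accumulated waits since the last service restart (sketch)
    SELECT TOP (10)
           wait_type,
           wait_time_ms / 1000.0 AS wait_time_sec,
           signal_wait_time_ms / 1000.0 AS signal_wait_sec,
           waiting_tasks_count
    FROM sys.dm_os_wait_stats
    WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH',
                            'REQUEST_FOR_DEADLOCK_SEARCH', 'CHECKPOINT_QUEUE', 'WAITFOR',
                            'BROKER_TO_FLUSH', 'BROKER_TASK_STOP', 'BROKER_EVENTHANDLER',
                            'XE_DISPATCHER_WAIT', 'XE_TIMER_EVENT', 'CLR_AUTO_EVENT')
    ORDER BY wait_time_ms DESC;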

    Jared

    Jared
    CE - Microsoft

  • Remember, TempDB is used by all the databases, not just for temporary tables and table variables, but also for queries that require sorting (ORDER BY, DISTINCT, GROUP BY). I have to agree with Jared: start looking at the queries being run by the applications; they may be the culprits behind the excessive memory use by TempDB.
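
    A hedged sketch of one way to start that hunt (assuming VIEW SERVER STATE): check which sessions have allocated the most tempdb space for user objects (temp tables, table variables) and internal objects (sorts, hashes, spools).

    -- tempdb allocations per session: user objects vs. internal objects (sketch)
    SELECT s.session_id,
           s.login_name,
           s.host_name,
           s.program_name,
           su.user_objects_alloc_page_count / 128 AS [user objects MB],
           su.internal_objects_alloc_page_count / 128 AS [internal objects MB]
    FROM sys.dm_db_session_space_usage AS su
    JOIN sys.dm_exec_sessions AS s
        ON s.session_id = su.session_id
    ORDER BY su.user_objects_alloc_page_count + su.internal_objects_alloc_page_count DESC;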

  • Lynn Pettis (11/3/2011)


    Remember, TempDB is used by all the databases, not just for temporary tables and table variables, but also for queries that require sorting (ORDER BY, DISTINCT, GROUP BY). I have to agree with Jared: start looking at the queries being run by the applications; they may be the culprits behind the excessive memory use by TempDB.

    I'm learning! 🙂

    Jared

    Jared
    CE - Microsoft

  • Lynn Pettis (11/3/2011)


    Remember, TempDB is used by all the databases, not just for temporary tables and table variables, but also for queries that require sorting (ORDER BY, DISTINCT, GROUP BY). I have to agree with Jared: start looking at the queries being run by the applications; they may be the culprits behind the excessive memory use by TempDB.

    Lynn,

    But are you sure that when the server does a hash or sort in tempdb you can see those objects in the cache? I tested the following scenario:

    1. Run a script with very bad row-count estimates. The estimates were around 300K rows while the actual count was 20 million.

    2. While the script was running, I queried sys.dm_os_buffer_descriptors many times.

    I would expect to see at least one big object in tempdb that SQL Server uses internally (so allocation_unit_id might reference something odd), but I found nothing. It looks like when SQL Server uses tempdb for these internal processes, it doesn't use memory to cache those hidden objects. That makes sense: if it could use the memory, why would it go to tempdb at all? Memory is much faster than tempdb anyway.

    So I would assume that the issue the OP has raised has nothing to do with sorting/grouping in tempdb. It is only related to temp tables/variables explicitly created in stored procedures or ad hoc queries.
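
    While such a query is running, the workspace memory its sorts and hashes are actually using can also be watched directly; a minimal sketch (assuming VIEW SERVER STATE and SQL Server 2005 or later):

    -- Active query memory grants (workspace memory for sorts/hashes) - sketch
    SELECT mg.session_id,
           mg.requested_memory_kb,
           mg.granted_memory_kb,
           mg.used_memory_kb,
           mg.max_used_memory_kb,
           mg.query_cost,
           t.text AS query_text
    FROM sys.dm_exec_query_memory_grants AS mg
    CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t
    ORDER BY mg.granted_memory_kb DESC;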

    If amrinea wants to be sure that there are no sort/hash issues, that can be done by profiling for Hash Warning/Sort Warning events or by reading the default trace file with something like this:

    DECLARE @filename VARCHAR(500)

    SELECT @filename = CAST(value AS VARCHAR(500))
    FROM fn_trace_getinfo(DEFAULT)
    WHERE property = 2
      AND value IS NOT NULL

    SELECT @filename

    -- Check if you have any Sort/Hash Warnings
    SELECT gt.HostName,
           gt.ApplicationName,
           gt.NTUserName,
           gt.NTDomainName,
           gt.LoginName,
           gt.SPID,
           gt.EventClass,
           te.Name AS EventName,
           gt.EventSubClass,
           gt.TEXTData,
           gt.StartTime,
           gt.EndTime,
           gt.ObjectName,
           gt.DatabaseName,
           gt.FileName
    FROM fn_trace_gettable(@filename, DEFAULT) gt
    JOIN sys.trace_events te ON gt.EventClass = te.trace_event_id
    WHERE gt.EventClass IN (55, 69) -- Hash/Sort warnings
    ORDER BY gt.StartTime DESC;

    Thanks,

    Alex


    Alex Suprun

  • amrinea (11/3/2011)


    I can understand tempdb getting bigger if certain queries are running, etc. Usually I see that in terms of disk space, not memory consumption. However, given the sizes of the databases we have on this server, it seems out of place for tempdb to be that large.

    I came from an environment whose main database was over 100GB in size, with a couple of ancillary databases adding probably another 100GB. These ran on a server with 32GB of memory. The tempdb would grow significantly in file size, but I don't ever recall it consuming more than 1GB of memory in the buffer pool.

    Needless to say, my main goal is still to determine what tempdb is doing with the memory it is consuming. If it's because one of the databases is processing something heavy, that's fine, but I want to know which one so I can isolate that database and keep it from impacting performance. Perhaps 3-4 GB isn't unusual for whatever it's processing, but I need to know why it's using that much and which database is making the biggest demands on it, so that I can isolate it.

    Why are you so sure that all your performance problems are because tempdb is taking 3GB of cache? Maybe the other databases don't need those extra 3GB of memory at all? Have you checked PLE (Page Life Expectancy)?

    SELECT OBJECT_NAME, cntr_value AS [Page Life Expectancy]
    FROM sys.dm_os_performance_counters
    WHERE OBJECT_NAME LIKE '%:Buffer Manager%' -- Modify this if you have named instances
      AND counter_name = 'Page life expectancy';

    If it's less than 900 for your server, then we are talking about some memory pressure. In that case you may want to look at the buffer cache at a more detailed level and see which objects are consuming so much memory. Maybe it will give you some clues:

    USE tempdb
    GO
    SELECT ISNULL(OBJECT_SCHEMA_NAME(p.[object_id]) + '.' + OBJECT_NAME(p.[object_id]), '<<<Unknown>>>') AS [ObjectName],
           p.[object_id], p.index_id, i.name AS [IndexName],
           COUNT(*) / 128 AS [buffer size MB],
           SUM(CAST([free_space_in_bytes] AS BIGINT)) / (1024 * 1024) AS [Empty size MB],
           COUNT(*) AS [buffer_count]
    FROM sys.dm_os_buffer_descriptors AS b (NOLOCK)
    LEFT JOIN sys.allocation_units AS a (NOLOCK) ON a.allocation_unit_id = b.allocation_unit_id
    LEFT JOIN sys.partitions AS p (NOLOCK) ON a.container_id = p.hobt_id AND p.[object_id] > 100
    LEFT JOIN sys.indexes AS i (NOLOCK) ON i.[object_id] = p.[object_id] AND i.index_id = p.index_id
    WHERE b.database_id = DB_ID()
    GROUP BY p.[object_id], p.index_id, i.name
    ORDER BY buffer_count DESC;

    Note that in tempdb some tables may already have been dropped, yet memory may still be allocated for them.
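
    As a complementary check on the space side, a small sketch of how tempdb's allocated pages split between user objects, internal objects and the version store (assuming SQL Server 2005 or later):

    -- How tempdb's allocated space is split (sketch)
    SELECT SUM(user_object_reserved_page_count) / 128 AS [user objects MB],
           SUM(internal_object_reserved_page_count) / 128 AS [internal objects MB],
           SUM(version_store_reserved_page_count) / 128 AS [version store MB],
           SUM(unallocated_extent_page_count) / 128 AS [free MB]
    FROM tempdb.sys.dm_db_file_space_usage;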


    Alex Suprun

  • You've given me a lot to look at. I'll do some more digging on Monday. The reason I considered memory pressure is that processes that used to finish quickly, or that run in less than a second in SSMS, have started taking longer than usual or timing out randomly in the application. I've been monitoring with SQL Profiler to make sure it's the database and not the app.
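
    One thing I plan to check while the slowdowns are happening is what the active requests are actually waiting on; a rough sketch of what I have in mind (assuming VIEW SERVER STATE):

    -- What active requests are waiting on right now (sketch)
    SELECT r.session_id,
           r.status,
           r.wait_type,
           r.wait_time AS wait_time_ms,
           r.blocking_session_id,
           r.total_elapsed_time,
           DB_NAME(r.database_id) AS database_name,
           t.text AS query_text
    FROM sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE r.session_id <> @@SPID
    ORDER BY r.total_elapsed_time DESC;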

    Thanks,

    Adam
