May 9, 2013 at 6:46 am
Hi,
My shop is running 2008R2 SP2. Our monitoring tool indicated large tempdb data file growth over a short period, then later showed over 90% free space. We are trying to identify who or what caused the growth, to determine whether it is likely to be regular (job-based) or one-time (bulk load).
I've tried using dynamic management views/functions (dm_xe_*) as well as a bit of extended events (event databases_data_file_size_changed), but can't seem to come up with exactly what I want.
If anyone knows a method to do this, or can help me with one of the methods I've tried, I'd appreciate it.
~ Jeff
May 9, 2013 at 3:03 pm
jhager (5/9/2013)
First, tempdb is like a spring: it expands and contracts at will, and usage depends on the current workload. So do not be surprised if you see spikes on some days and very low utilization on others.
Having said that, if you have a monitoring system or tool in place, you can use this T-SQL code, which I call "tempdbWatcher":
-- Total size and used space (in MB) per data file for the current database.
-- Note: FILEPROPERTY(..., 'SpaceUsed') only resolves in the current database
-- context, so run this while connected to tempdb.
SELECT
    SD.name,
    MF.database_id,
    SUM(CONVERT(decimal(10,2), DF.size / 128.0)) AS SizeMB,
    SUM(CONVERT(decimal(10,2), CAST(FILEPROPERTY(DF.name, 'SpaceUsed') AS int) / 128.0)) AS UsedSpaceMB
FROM sys.master_files MF
JOIN sys.databases SD
    ON SD.database_id = MF.database_id
JOIN sys.database_files DF
    ON DF.physical_name COLLATE DATABASE_DEFAULT = MF.physical_name COLLATE DATABASE_DEFAULT
WHERE MF.type = 0  -- data files only (ROWS)
GROUP BY SD.name, MF.database_id;
Run it in the tempdb context and save the results to a table. Create a job and schedule it to run every minute or so. After a while you will see any spikes, and you can correlate them with the specific queries or jobs that usually run in your shop.
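A minimal sketch of the logging side, assuming you wrap the query above in a table plus a scheduled INSERT; the database name (AdminDB), table, and column names here are my own placeholders, not anything from your environment:

```sql
-- Hypothetical logging table for the tempdbWatcher output; names are illustrative.
CREATE TABLE AdminDB.dbo.TempdbWatcherLog
(
    CaptureTime  datetime      NOT NULL DEFAULT GETDATE(),
    DatabaseName sysname       NOT NULL,
    SizeMB       decimal(10,2) NOT NULL,
    UsedSpaceMB  decimal(10,2) NOT NULL
);
GO
-- Job step body: run in the tempdb context so FILEPROPERTY resolves correctly,
-- writing to the logging table with a three-part name.
INSERT INTO AdminDB.dbo.TempdbWatcherLog (DatabaseName, SizeMB, UsedSpaceMB)
SELECT
    SD.name,
    SUM(CONVERT(decimal(10,2), DF.size / 128.0)),
    SUM(CONVERT(decimal(10,2), CAST(FILEPROPERTY(DF.name, 'SpaceUsed') AS int) / 128.0))
FROM sys.master_files MF
JOIN sys.databases SD
    ON SD.database_id = MF.database_id
JOIN sys.database_files DF
    ON DF.physical_name COLLATE DATABASE_DEFAULT = MF.physical_name COLLATE DATABASE_DEFAULT
WHERE MF.type = 0
GROUP BY SD.name;
```

Charting CaptureTime against UsedSpaceMB will show exactly when the growth happens, which you can then line up against your job schedule.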
This one is also nice, taken from SQL Server DMVs in Action, a fantastic book by Ian W. Stirk:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT es.session_id
, ec.connection_id
, es.login_name
, es.host_name
, st.text
, su.user_objects_alloc_page_count
, su.user_objects_dealloc_page_count
, su.internal_objects_alloc_page_count
, su.internal_objects_dealloc_page_count
, ec.last_read
, ec.last_write
, es.program_name
FROM sys.dm_db_session_space_usage su
INNER JOIN sys.dm_exec_sessions es
ON su.session_id = es.session_id
LEFT OUTER JOIN sys.dm_exec_connections ec
ON su.session_id = ec.most_recent_session_id
OUTER APPLY sys.dm_exec_sql_text(ec.most_recent_sql_handle) st
WHERE su.session_id > 50
Very cool for identifying the job or person that is using the most tempdb.
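The *_alloc_page_count columns in that DMV are counts of 8 KB pages; if you prefer MB, a small variation (the column aliases are my own) converts them and sorts the heaviest sessions to the top:

```sql
-- Same DMV, with allocation counts converted from 8 KB pages to MB
-- (divide by 128). Column aliases are illustrative.
SELECT
    su.session_id,
    es.login_name,
    (su.user_objects_alloc_page_count     / 128.0) AS user_objects_alloc_MB,
    (su.internal_objects_alloc_page_count / 128.0) AS internal_objects_alloc_MB
FROM sys.dm_db_session_space_usage su
INNER JOIN sys.dm_exec_sessions es
    ON su.session_id = es.session_id
WHERE su.session_id > 50  -- skip system sessions
ORDER BY su.internal_objects_alloc_page_count DESC;
```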
But I would say: worry less about tempdb space and more about tempdb contention. If you have allocated tempdb space properly and have no contention, you should not worry about it. Focus instead on tuning your queries so they are more efficient.
May 10, 2013 at 5:04 am
Thanks for replying, and the monitoring queries as well.
As our "production" system is typically fairly low-use, I don't spend much time checking tempdb, and hadn't had any noticeable issues until this one. In fact, I had run Brent Ozar's Blitz procedure, which identified that the eight tempdb data files were not all allocated the same size, so I set them all to 5500 MB. Four hours later they were at 8500 MB each (but the Standard Disk Usage report didn't show any autogrowth events).
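For reference, equalizing the files can be sketched like this; the logical file names below are placeholders, since the real names come from sys.master_files on the instance:

```sql
-- Hypothetical example: resize each tempdb data file to the same fixed size.
-- The logical names (tempdev, tempdev2, ...) are placeholders; check
-- sys.master_files for the actual names on your instance.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 5500MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 5500MB);
-- ...repeat for each of the eight data files.
```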
Thanks again,
Jeff