December 12, 2012 at 11:00 pm
Comments posted to this topic are about the item Find SQL Statements Consuming tempdb Space
December 13, 2012 at 8:10 am
Luckily, I'm getting a bunch of zero counts with this script.
Might be nicer to filter out queries that don't have any pages allocated/deallocated?
SELECT
    er.session_id,
    er.request_id,
    er.sql_handle,
    er.statement_start_offset,
    er.statement_end_offset,
    er.plan_handle,
    counts.task_alloc,
    counts.task_dealloc
FROM sys.dm_exec_requests AS er
INNER JOIN
    (SELECT
        session_id,
        request_id,
        SUM(internal_objects_alloc_page_count) AS task_alloc,
        SUM(internal_objects_dealloc_page_count) AS task_dealloc
     FROM sys.dm_db_task_space_usage
     WHERE internal_objects_alloc_page_count <> 0
        OR internal_objects_dealloc_page_count <> 0
     GROUP BY session_id, request_id
    ) AS counts
    ON counts.session_id = er.session_id
   AND counts.request_id = er.request_id
ORDER BY counts.task_alloc DESC
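As a side note, if you'd rather see the actual statement text than just the handles, something like this should work (an untested sketch on my part; it cross-applies sys.dm_exec_sql_text and carves the statement out of the batch using the start/end offsets):

```sql
-- Sketch: resolve sql_handle + offsets to the running statement's text
SELECT
    er.session_id,
    er.request_id,
    counts.task_alloc,
    counts.task_dealloc,
    SUBSTRING(st.text,
              er.statement_start_offset / 2 + 1,
              (CASE WHEN er.statement_end_offset = -1
                    THEN DATALENGTH(st.text)        -- -1 means "to end of batch"
                    ELSE er.statement_end_offset
               END - er.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_requests AS er
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS st
INNER JOIN
    (SELECT session_id, request_id,
            SUM(internal_objects_alloc_page_count)   AS task_alloc,
            SUM(internal_objects_dealloc_page_count) AS task_dealloc
     FROM sys.dm_db_task_space_usage
     GROUP BY session_id, request_id
    ) AS counts
    ON counts.session_id = er.session_id
   AND counts.request_id = er.request_id
WHERE counts.task_alloc > 0
   OR counts.task_dealloc > 0
ORDER BY counts.task_alloc DESC
```

The offsets are in bytes of an nvarchar, hence the divide-by-2.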
December 13, 2012 at 8:39 am
I recently had a number of problems with tempdb (it suddenly filled up completely). Previously we could use BACKUP LOG WITH TRUNCATE_ONLY, but in SQL Server 2008 that option no longer exists. Is there a way to force tempdb to be cleaned up?
December 13, 2012 at 9:14 am
This seems like it would be really great, but I don't understand the results. I am having frequent problems with TempDB growing out of control, consuming an entire 200GB drive, even though the source db of the majority of queries on the server is < 100GB. Can you provide a brief explanation?
-- RTW
Be curious!
December 13, 2012 at 9:31 am
It might be that your stored procedures aren't cleaning up the temp tables they use properly.
It's a good idea to use
IF object_id('tempdb..#yourTempTableName') IS NOT NULL DROP TABLE #yourTempTableName
at the beginning and end of any procedures/DTS/jobs that use temp tables.
Check this article (or search Google) for more info: http://stackoverflow.com/questions/6623846/why-are-temporary-tables-not-removed-from-tempdb-in-sql-server
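For example, a procedure that uses a temp table might guard both ends like this (just a minimal sketch; the procedure and table names are made up):

```sql
CREATE PROCEDURE dbo.usp_Example  -- hypothetical name
AS
BEGIN
    -- Guard against a leftover table from a previous run in the same session
    IF OBJECT_ID('tempdb..#WorkTable') IS NOT NULL
        DROP TABLE #WorkTable;

    CREATE TABLE #WorkTable (SomeID int NOT NULL);

    -- ... do the work ...

    -- Clean up explicitly when done
    IF OBJECT_ID('tempdb..#WorkTable') IS NOT NULL
        DROP TABLE #WorkTable;
END;
```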
December 13, 2012 at 9:37 am
Of course - makes total sense! Thanks for the tip.
-- RTW
Be curious!
April 14, 2013 at 5:16 pm
The Learner (12/13/2012)
Of course - makes total sense! Thanks for the tip.
I know it's an old post but you don't usually need to drop Temp Tables in a stored procedure. They usually clean themselves up.
If your Temp DB is consuming a 200GB drive, you have a larger problem with some seriously bad code. Look for code with DISTINCT in it, which is often a "cover" for bad code that has accidental cross joins in it. Some call these "many-to-many" joins, and they're usually the result of a poorly designed database or someone writing criteria for code without a full understanding of what the data actually contains.
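To illustrate (a contrived sketch; the table and column names are made up), joining on a non-unique column multiplies rows, and DISTINCT papers over the symptom while tempdb pays for the sort:

```sql
-- Accidental many-to-many: City is NOT unique in Customers, so every order
-- matches every customer in the same city. DISTINCT hides the duplicate rows
-- but the row explosion (and the spill to tempdb) still happens first.
SELECT DISTINCT o.OrderID, c.City
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
  ON c.City = o.ShipCity;        -- wrong join column

-- The fix is to join on the real key, which makes DISTINCT unnecessary:
SELECT o.OrderID, c.City
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
  ON c.CustomerID = o.CustomerID;
```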
--Jeff Moden
Change is inevitable... Change for the better is not.
April 15, 2013 at 3:38 am
Ah yes, good point Jeff!
I've seen plenty of DISTINCT code that had some dodgy JOINs, or bad assumptions about the underlying data relationships.
I was only hazarding a guess with the "delete your temp tables after processing"...
(as a side note, your articles are amazing! Keep up the writing and the propagation of set-based thinking! 😀 )
April 15, 2013 at 7:40 am
Thanks for the follow-up. Having analysts drop temp tables at the end of their stored procedures did seem to help, but I would still see sporadic out-of-control growth. I just reviewed some of the more often-run stored procedures and several of them do have DISTINCT statements, which I will have the analysts review. Thanks again for the input.
-- RTW
Be curious!
April 15, 2013 at 4:06 pm
The Learner (4/15/2013)
Thanks for the follow-up. Having analysts drop temp tables at the end of their stored procedures did seem to help, but I would still see sporadic out of control growth. I just reviewed some of the more often run stored procedures and several of them do have Distinct statements, which I will have the analysts review. Thanks again for the input.
The problem with dropping Temp Tables at the end is that if the proc is executed again in the near future while its plan is still cached, the code could actually run a bit slower. SQL Server keeps the "skeleton" of the Temp Tables in some form of cache unless you do an explicit drop.
Shifting gears, the sporadic out-of-control growth you speak of might be easy to find. Using a server-side Profiler trace, set it up to look for anything that has more than, say, 10 million reads for both RPC and Batch completed events. It may take a while to rear its ugly head, but the bugger will show up.
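If you'd rather avoid Profiler, an Extended Events session can do the same filtering (a sketch only, using SQL Server 2012 event/field names; the session name, file name, and 10-million threshold are just illustrative):

```sql
-- Catch RPC and batch completions with very high logical reads
CREATE EVENT SESSION HighReadCatcher ON SERVER
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id)
     WHERE logical_reads >= 10000000),
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.sql_text, sqlserver.session_id)
     WHERE logical_reads >= 10000000)
ADD TARGET package0.event_file (SET filename = N'HighReadCatcher');
GO
ALTER EVENT SESSION HighReadCatcher ON SERVER STATE = START;
```

Leave it running and check the event file after the next growth episode.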
--Jeff Moden
Change is inevitable... Change for the better is not.
April 15, 2013 at 4:08 pm
The Wizard Of Oz (4/15/2013)
Ah yes, good point Jeff!
I've seen plenty of DISTINCT code that had some dodgy JOINs, or bad assumptions about the underlying data relationships.
I was only hazarding a guess with the "delete your temp tables after processing"...
(as a side note, your articles are amazing! Keep up the writing and the propagation of set-based thinking! 😀 )
Not a problem. I was just adding to that. And thank you very much for the very kind words about the articles. I'm humbled. :blush: Like I just told someone else, I aim to please... I sometimes miss but I'm always aiming. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.