July 3, 2013 at 2:00 am
Hello all,
We have an issue in our environment where tempdb grows very large within 6-8 hours. The settings are fine and follow the usual standards: tempdb is in the simple recovery model with unrestricted autogrowth, and there are no open transactions. We have tempdb split into 7 data files to increase performance, but once those files grow they never get trimmed back automatically; the only option we're left with is restarting SQL Server, which I don't think is the right choice. During my analysis I found that we use lots of temp tables in stored procedures, but we also drop them at the end of each procedure. Is there anything else that could be causing this issue?
Your help is appreciated ...
Thanks...
July 3, 2013 at 3:53 pm
sqlpanther (7/3/2013)
Hello all, we have an issue in our environment where tempdb grows very large within 6-8 hours. The settings are fine and follow the usual standards: tempdb is in the simple recovery model with unrestricted autogrowth, and there are no open transactions. We have tempdb split into 7 data files to increase performance, but once those files grow they never get trimmed back automatically; the only option we're left with is restarting SQL Server, which I don't think is the right choice. During my analysis I found that we use lots of temp tables in stored procedures, but we also drop them at the end of each procedure. Is there anything else that could be causing this issue?
Your help is appreciated ...
Thanks...
What do you consider to be "very large"?
--Jeff Moden
Change is inevitable... Change for the better is not.
July 3, 2013 at 10:00 pm
It consumes around 60 GB, and the thing is, we don't have any transactions that are 60 GB. My analysis says our tempdb is not flushing the committed transactions, even though the recovery model is set to simple. Does it have anything to do with the temp tables? Because we use a lot of temp tables.
July 3, 2013 at 10:55 pm
Run this to see which sessions have been using TempDB and how much space is allocated to each:
-- How much TempDB space was allocated, and by which session
SELECT  es.session_id                                  AS [Session ID],
        DB_NAME(su.database_id)                        AS [Database Name],
        es.host_name                                   AS [System Name],
        es.program_name                                AS [Program Name],
        es.login_name                                  AS [User Name],
        es.status,
        es.cpu_time                                    AS [CPU Time (ms)],
        es.total_scheduled_time                        AS [Total Scheduled Time (ms)],
        es.total_elapsed_time                          AS [Elapsed Time (ms)],
        es.memory_usage * 8                            AS [Memory Usage (KB)],
        su.user_objects_alloc_page_count * 8           AS [Space Allocated for User Objects (KB)],
        su.user_objects_dealloc_page_count * 8         AS [Space Deallocated for User Objects (KB)],
        su.internal_objects_alloc_page_count * 8       AS [Space Allocated for Internal Objects (KB)],
        su.internal_objects_dealloc_page_count * 8     AS [Space Deallocated for Internal Objects (KB)],
        CASE es.is_user_process
             WHEN 1 THEN 'user session'
             WHEN 0 THEN 'system session'
        END                                            AS [Session Type],
        es.row_count                                   AS [Row Count]
FROM    sys.dm_db_session_space_usage AS su
        INNER JOIN sys.dm_exec_sessions AS es
            ON su.session_id = es.session_id;
Then match the SPID against this query and check the query_text and query_plan; you'll find out what's eating all your space:
SELECT TOP 10
       t1.session_id,
       t1.request_id,
       t1.task_alloc,
       t1.task_dealloc,
       (SELECT SUBSTRING(st.text, t2.statement_start_offset / 2 + 1,
                         (CASE WHEN t2.statement_end_offset = -1
                               THEN LEN(CONVERT(nvarchar(max), st.text)) * 2
                               ELSE t2.statement_end_offset
                          END - t2.statement_start_offset) / 2)
        FROM   sys.dm_exec_sql_text(t2.sql_handle) AS st)    AS query_text,
       (SELECT qp.query_plan
        FROM   sys.dm_exec_query_plan(t2.plan_handle) AS qp) AS query_plan
FROM   (SELECT session_id,
               request_id,
               SUM(internal_objects_alloc_page_count + user_objects_alloc_page_count)     AS task_alloc,
               SUM(internal_objects_dealloc_page_count + user_objects_dealloc_page_count) AS task_dealloc
        FROM   sys.dm_db_task_space_usage
        GROUP BY session_id, request_id) AS t1
       INNER JOIN sys.dm_exec_requests AS t2
           ON  t1.session_id = t2.session_id
           AND t1.request_id = t2.request_id
WHERE  t1.session_id > 50   -- user sessions only
ORDER BY t1.task_alloc DESC;
July 4, 2013 at 11:10 am
sqlpanther (7/3/2013)
It consumes around 60 GB, and the thing is, we don't have any transactions that are 60 GB. My analysis says our tempdb is not flushing the committed transactions, even though the recovery model is set to simple. Does it have anything to do with the temp tables? Because we use a lot of temp tables.
Yes. Temp Tables, Table Variables, "work" tables created by SQL Server for things like "Hash Joins" and recursive CTEs, Cursors, certain forms of index maintenance, and a thousand other things all use TempDB.
If you have any "accidental cross-joins" (accidental many-to-many joins, in this case), those can use a LOT of TempDB. People try to overcome the dupes returned by such joins by using (usually) DISTINCT or (sometimes) GROUP BY.
If your system is mostly for "batch processing" of huge amounts of data, 60GB might not be so bad.
Tools like what Andrew provided above can help you find these things out.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 5, 2013 at 12:37 am
If you have the default trace enabled (it should be, as that is the SQL Server default unless your installation dates back to pre-RTM), you can find out what grew your (temp)db.
/*
* Import Default Trace
*/
/*
* read SQL default trace file name
*/
SET nocount ON
declare @wrkTraceFilename nvarchar(500)
-- read trace info
SELECT @wrkTraceFilename = convert(nvarchar(500), [value])
FROM fn_trace_getinfo(1) -- 1 = default trace
where [property] = 2
-- Property
-- 1 = Trace options. For more information, see @options in sp_trace_create (Transact-SQL).
-- 2 = File name
-- 3 = Max size
-- 4 = Stop time
-- 5 = Current trace status
print @wrkTraceFilename
if object_id('tempdb..#tmpTrace') is not null
begin
drop table #tmpTrace
end
select *
-- used this to be sure I have ALL available columns ( never mind edition, servicepack, CU, HF, ...)
into #tmpTrace
from fn_trace_gettable(@wrkTraceFilename, default)
;
/* Show unfiltered trace data
Select TE.name as EventName
, db_name(T.DatabaseID) as DatabaseName0
, T.HostName as TheHostName
, T.ApplicationName as TheApplicationName
, T.LoginName as TheLoginName
, T.StartTime as TheStartTime
, T.EndTime as TheEndTime
, Object_schema_name(T.ObjectID, T.DatabaseID) as TheObjectSchema
, Object_name(T.ObjectID, T.DatabaseID) as TheObjectName
, T.ServerName as TheServername
, T.*
from #tmpTrace T
inner join sys.trace_events TE
on TE.trace_event_id = T.EventClass
order by TheStartTime
, DatabaseName
, LoginName
, HostName
, ApplicationName ;
*/
/* Show Auto Grow events */
Select TE.name as EventName
, T.StartTime
, T.EndTime
, T.Duration/* in microseconds ! */
, DatabaseName
, FileName
, T.LoginName
, T.HostName
, T.ApplicationName
, T.ServerName
, T.SessionLoginName
from #tmpTrace T
inner join sys.trace_events TE
on TE.trace_event_id = T.EventClass
Where TE.name in ('Data File Auto Grow',
'Log File Auto Grow')
order by StartTime desc
;
If @@rowcount = 0
begin
Select 'No Auto Grow events occurred in the loaded default trace file. (hence empty result set in previous query)' as Remark
end
/* Show only where objects involved */
Select TE.name as EventName
, Object_schema_name(T.ObjectID, T.DatabaseID) as TheObjectSchema
, Object_name(T.ObjectID, T.DatabaseID) as TheObjectName
, T.*
from #tmpTrace T
inner join sys.trace_events TE
on TE.trace_event_id = T.EventClass
Where Object_name(ObjectID, DatabaseID) is not null
order by T.StartTime
, TheObjectSchema
, TheObjectName
;
/*
drop table #tmpTrace
*/
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
July 5, 2013 at 3:52 am
Thanks ALZDBA, what exactly does this script do and what do I have to figure out from it?
July 5, 2013 at 5:09 am
It loads the default trace data into a temp table and queries it: the first query shows only the Auto Grow events, and the second shows all events related to an object_id that can still be resolved to a name.
Have a look at the script content and you'll see each step is documented with what it does.
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
July 5, 2013 at 6:17 am
sqlpanther (7/3/2013)
my analysis says our tempdb is not flushing the committed transactions
Won't be the cause.
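You can confirm that for yourself with a quick check (a generic sketch, nothing specific to your system): see whether tempdb's log is actually waiting on anything before reuse, and whether it's the data files or the log file that have grown.
-- Is tempdb's log waiting on anything before it can be reused?
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = 'tempdb';
-- Which tempdb files have actually grown? (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM   tempdb.sys.database_files;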
If TempDB is growing to 60GB, then you have stuff (temp tables, table variables, internal query work tables, spills, etc.) that needs 60GB of space. Use the task_space_usage and session_space_usage DMVs to track down what's using TempDB space and tune it to behave better.
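For a quick breakdown of what kind of objects are holding that space, a sketch along these lines (run in the context of tempdb) splits it into user objects, internal objects and the version store:
USE tempdb;
-- Per data file: space held by user objects (temp tables, table variables),
-- internal objects (work tables, sort/hash spills) and the version store.
SELECT file_id,
       user_object_reserved_page_count * 8 / 1024     AS user_objects_mb,
       internal_object_reserved_page_count * 8 / 1024 AS internal_objects_mb,
       version_store_reserved_page_count * 8 / 1024   AS version_store_mb,
       unallocated_extent_page_count * 8 / 1024       AS free_mb
FROM   sys.dm_db_file_space_usage;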
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability