May 3, 2006 at 7:28 pm
I have a database where tempdb grows pretty large, several GB, and once it even filled up all the available disk space. How can I monitor this (using Profiler?) and then identify the stored procedure code (the applications here mostly use stored procedures for their SQL transactions) that caused it?
Thank you in advance for any advice offered.
May 3, 2006 at 8:10 pm
On the server you are monitoring, you can use the DBCC SQLPERF(LOGSPACE) command to return the log file size (in MB) and percentage used for every database, and take action as appropriate:
create table #logspace (dbname sysname, logsize decimal(12,2), percentused decimal(6,2), status int)
insert into #logspace
exec ('dbcc sqlperf(logspace)')
select * from #logspace
You could set up a job to perform the monitoring!
May 3, 2006 at 8:14 pm
Yes, I have been monitoring using this method in a SQL Agent job, and that is how I realized tempdb is growing. My objective is to find out which stored procedure is causing this so that I can ask the developers to alter their code.
Thank you.
May 3, 2006 at 8:31 pm
Tempdb growth is often due to poorly performing queries, so you can use SQL Profiler and filter on Duration to determine whether any stored procedures are taking more than x seconds to execute. You only need the stored procedure event classes.
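If you prefer a query over the Profiler GUI (SQL Server 2005 and later), the execution-statistics DMV can surface the longest-running statements. A minimal sketch; the TOP value and ordering column are just examples:

select top 20
    qs.execution_count,
    qs.total_elapsed_time / 1000 as total_elapsed_ms,
    st.text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
order by qs.total_elapsed_time desc

Note this only covers plans still in cache, so it complements rather than replaces a trace.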
May 4, 2006 at 6:05 am
It is also used for large-scale joins and other large set-based operations. Tempdb is the system scratchpad for many actions.
July 18, 2007 at 7:27 am
Pardon the interruption.
I have the same issue as the original poster. My question is: will tempdb ever release the disk space back? If tempdb grows forever and does not release disk space until a restart, wouldn't that be a problem for databases that try to stay up for long periods of time?
Thank you
March 20, 2008 at 4:26 pm
I cannot tell you exactly how it is working for your tempdb, but I can say that when you are doing extensive join and grouping work in tempdb, once the query completes or the result set is returned, that space inside the data file is freed; however, the free space will not be released to the OS (unless some type of auto-shrink or agent job is set up). I believe that is why it is key to monitor tempdb growth, so you are aware there is activity occurring and can jump in and capture it. I have also seen people create #temp tables, and unless they drop the table afterwards, data may remain at that point; I am just not sure how long it stays there if the code does not drop the table after the work is performed.
November 24, 2008 at 10:07 am
Hi,
I had an issue where the tempdb data file size kept increasing and occupying space on the drive (growing up to 95 GB; drive capacity is 99 GB). Other user databases reside on the same drive, and because of this the application goes down. There is not enough free space in the data file to shrink it and release the space to the OS.
There is only one active transaction on the tempdb database:
create table #tmpDBCCinputbuffer ([Event Type] nvarchar(512), [Parameters] int, [Event Info] nvarchar(512))
insert into #tmpDBCCinputbuffer exec ('DBCC INPUTBUFFER(57)')
select [Event Info] from #tmpDBCCinputbuffer
Can anyone help me understand why the data file is filling up? Is this a bug? Do we need to apply the latest service pack (SP2)? How can I monitor tempdb growth, and how can I avoid it?
Details of the server:
SQL Server 2005 Enterprise Edition, RTM version
Installed on an Active/Passive cluster.
thanks in adv,
Uma
June 3, 2009 at 8:21 am
I have almost the same problem: the tempdb .mdf file grows beyond the data drive's capacity, 120 GB, and then the job (Project Server 2007 cube building) fails. The tempdb data file then stays at that size, and I have to restart SQL Server or shrink it.
We are running SQL 2008 std. SP1.
I should mention that I first had the recovery model set to full, but then I changed it to see if it made any difference to the job.
I have also changed autogrow to a fixed number of MB rather than a percentage.
Hope to hear from someone
Regards Janus.
PS: If I find a solution I will post it ASAP.
December 14, 2009 at 12:50 pm
I have the same issue. After migrating to 2008, tempdb is growing very fast. It grew to 38 GB. It shows 37 GB of available space but is not releasing that space to the OS.
February 3, 2010 at 8:24 am
Tempdb is a database, like the other ones on your system. If it grows, it needed that space to grow. It does not shrink back down once a process is done, and if it needs that space for your processing, why would it? You can shrink it, but it will just grow again. If it needs 120 GB or more for some operation, you need to size tempdb for that operation.
If you want to determine why it's growing, you'll need to determine which process causes the abnormal growth. You can use Profiler and try to make guesses based on the large transactions you have, but an easier (though more disruptive) way is to turn off autogrow for tempdb. See what processes fail and rework them, or resize tempdb to meet your needs.
Don't think of this as memory where there is garbage collection. It's more like a pagefile where you set aside the space you need and SQL Server then uses it. If it needs that space, don't try to keep freeing it up for the OS. Leave it there for SQL Server.
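A sketch of the disable-autogrow approach described above. The logical file name tempdev is the default for the first tempdb data file; check yours first with sys.database_files:

use tempdb
-- size is stored in 8 KB pages; convert to MB for readability
select name, type_desc, size * 8 / 1024 as size_mb, growth from sys.database_files

-- setting FILEGROWTH to 0 disables autogrow for that file
alter database tempdb modify file (name = tempdev, filegrowth = 0)

Remember to re-enable a sensible fixed-MB growth once the offending process is found.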
February 4, 2010 at 2:34 am
Is there a certain time of day when your tempdb grows?
My guess would be that you are (possibly) using the sort-in-tempdb option when rebuilding your indexes, which would put a huge amount of data in tempdb while your maintenance plan runs.
Also, have a look at some of the Microsoft best practices for SQL Server. One of these is:
Don't put your tempdb on the same drive as your data and log files!
MVDBA
February 4, 2010 at 7:59 am
A) Set up a job that records tempDB size every minute
B) Set up SQL Profiler to record SQL:BatchCompleted and RPC:Completed
The two together will give you some answers you asked for.
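A minimal sketch of (A), assuming a monitoring table in a utility database (the table and column names here are placeholders); schedule the INSERT as a SQL Agent job step running every minute:

create table dbo.tempdb_size_history
    (capture_time datetime, file_name sysname, size_mb int)

-- one row per tempdb file; size is in 8 KB pages, converted to MB
insert into dbo.tempdb_size_history (capture_time, file_name, size_mb)
select getdate(), name, size * 8 / 1024
from tempdb.sys.database_files

Correlating spikes in this history with the trace timestamps points you at the offending batches.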
Another answer you need is that autogrow in small increments generates filesystem-level (external) fragmentation, resulting in massive head movement, and thus very slow performance (use a tiny piece, move the head, use a tiny piece, move the head).
Any shrinking option that leads to another autogrow tends to generate additional external fragmentation.
OS-level compression also leads to incredible fragmentation.
Log files grown in small increments also generate excessive VLFs (check with DBCC LOGINFO; hundreds is bad, I aim for fewer than 40).
Set database sizes to what you think they'll need over the next year/few years, and leave them.
Check to see if you have database files in many fragments; a 300GB file in 5 fragments is just fine. A 50GB file in 100,000 fragments is insane (see the OS-level compression comment).
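To count VLFs with DBCC LOGINFO, you can use the same insert...exec pattern shown earlier in this thread. A sketch; the column list matches the SQL 2005/2008 output, but may differ on other versions:

create table #loginfo (FileId int, FileSize bigint, StartOffset bigint,
    FSeqNo int, Status int, Parity tinyint, CreateLSN numeric(25, 0))
insert into #loginfo exec ('dbcc loginfo')
select count(*) as vlf_count from #loginfo
drop table #loginfo

Run it in the context of each database whose log you want to check.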
February 5, 2010 at 12:59 pm
There are several DMVs specifically created for monitoring tempdb space AND the queries that are using said space.
See here in Books Online: ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/tsqlref9/html/148a5276-a8d5-49d2-8146-3c63d24c2144.htm
note the See Also Reference section for 2 additional DMVs. You can find queries online to tie these together to help you keep up with what is going on.
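For reference, the tempdb space DMVs can be tied together roughly like this (a sketch; run it in the context of tempdb, whose database_id is 2):

-- sessions currently holding the most tempdb pages, with their last statement
select
    su.session_id,
    su.user_objects_alloc_page_count - su.user_objects_dealloc_page_count
        as user_object_pages,
    su.internal_objects_alloc_page_count - su.internal_objects_dealloc_page_count
        as internal_object_pages,
    st.text
from sys.dm_db_session_space_usage su
join sys.dm_exec_connections c on c.session_id = su.session_id
cross apply sys.dm_exec_sql_text(c.most_recent_sql_handle) st
where su.database_id = 2
order by user_object_pages + internal_object_pages desc

User object pages come from #temp tables and table variables; internal object pages come from sorts, hashes, and spools.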
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service