August 3, 2010 at 9:59 am
We recently upgraded a server to Windows Server 2008 and SQL Server 2008 Standard (SP1). The tempdb database continues to grow and grow until we recycle SQL Server. It has grown to over 300 GB before recycling. I have put in scripts to try to capture the processes that are running during this time, but they capture quite a few and that is making it difficult to troubleshoot. We cannot afford to recycle SQL Server daily just to maintain the size of the tempdb database. Checkpoints are occurring regularly. Does anyone have any suggestions on what to check?
August 3, 2010 at 11:58 am
There are quite a few DMVs that you can use to monitor and troubleshoot tempdb growth.
Usually tempdb is used heavily when you have row-level versioning enabled, or when your queries have big joins, sorts, use temp tables, etc.
Here is an interesting article that might help you:
http://technet.microsoft.com/en-us/library/cc966545.aspx#EDAA
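As a starting point, a query like this against `sys.dm_db_file_space_usage` (one of the DMVs mentioned above) gives a rough breakdown of what kind of allocation is eating tempdb, which tells you where to look next:

```sql
-- Run in the context of tempdb. Page counts are 8 KB pages,
-- so * 8 / 1024 converts to MB.
SELECT
    SUM(user_object_reserved_page_count)     * 8 / 1024 AS user_objects_mb,     -- temp tables, table variables
    SUM(internal_object_reserved_page_count) * 8 / 1024 AS internal_objects_mb, -- sorts, hashes, spools
    SUM(version_store_reserved_page_count)   * 8 / 1024 AS version_store_mb,    -- row versioning
    SUM(unallocated_extent_page_count)       * 8 / 1024 AS free_space_mb
FROM sys.dm_db_file_space_usage;
```

If `version_store_mb` dominates, look at snapshot isolation and long-running transactions; if `internal_objects_mb` dominates, look at big sorts and hash joins.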
-Roy
August 3, 2010 at 2:54 pm
Thanks for the quick reply. But with tempdb set to SIMPLE recovery shouldn't it maintain the log file rather than it continue to grow? The way it is behaving you would think it is set to FULL recovery with no backups of the t-logs being taken.
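One quick way to tell the two scenarios apart is to check what, if anything, is holding up log truncation (note that tempdb is always in SIMPLE recovery and cannot be changed; also worth confirming whether it is the data files or the log file that is actually growing):

```sql
-- log_reuse_wait_desc shows why the log cannot be truncated
-- (e.g. ACTIVE_TRANSACTION for a long-running open transaction).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'tempdb';
```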
August 4, 2010 at 3:02 am
Tempdb growth... then it's definitely going to be poorly written queries.
As you said, you have a capturing mechanism in place. You should use that to pinpoint the particular query.
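To narrow the capture down, something like this sketch can rank sessions by how much tempdb space they have allocated and show the statement text for any that are still executing:

```sql
-- Top tempdb consumers by session. Allocation counts are cumulative
-- for the session; subtract dealloc counts for a net figure if needed.
SELECT TOP (10)
    s.session_id,
    s.user_objects_alloc_page_count     * 8 / 1024 AS user_objects_mb,
    s.internal_objects_alloc_page_count * 8 / 1024 AS internal_objects_mb,
    t.text AS current_statement          -- NULL if the session is idle
FROM sys.dm_db_session_space_usage AS s
LEFT JOIN sys.dm_exec_requests AS r
    ON r.session_id = s.session_id
OUTER APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
ORDER BY s.user_objects_alloc_page_count
       + s.internal_objects_alloc_page_count DESC;
```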
August 4, 2010 at 9:59 am
Yesterday, we recompiled most of our stored procedures to explicitly drop any #tables, since a number of stored procedures didn't do this and the temporary tables were apparently remaining in tempdb even after the process finished. Now tempdb is growing at a much slower, and more manageable, rate. We recently upgraded from SQL Server 2000 to 2008, and 2000 would automatically delete the temporary tables. Is there a setting in SQL Server 2008 to do this by default as well? Not just for temporary tables but table variables?
August 4, 2010 at 10:26 am
2008 is supposed to delete the temp tables on completion of the SP as well, that didn't change...
August 4, 2010 at 12:57 pm
Local temporary tables are discarded after the connection to the server is closed. If the connection stays open, the tables will stay in tempdb. Additionally, if you are using global temp tables, anything still referencing them can keep them in tempdb until an explicit drop.
It could be really bad queries, as another poster mentioned, but it could also be that your queries genuinely need all that extra space to process. In that case you should probably pre-allocate that space up front.
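For anyone unsure of the difference, a minimal sketch of the two scopes (table names here are just for illustration):

```sql
-- Local temp table (#): visible only to this session; dropped when the
-- session ends, or when the stored procedure that created it returns.
CREATE TABLE #local_demo (id INT);

-- Global temp table (##): visible to every session; dropped only when
-- the creating session ends AND no other session is still referencing it.
CREATE TABLE ##global_demo (id INT);

-- Both live in tempdb; the local one gets a system-generated suffix.
SELECT name
FROM tempdb.sys.tables
WHERE name LIKE '#local_demo%' OR name = '##global_demo';

DROP TABLE #local_demo;
DROP TABLE ##global_demo;
```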
August 5, 2010 at 7:57 am
"If the connection stays open, the tables will stay in tempdb"
Does connection pooling count here as the connection staying open?
The connections do a sp_reset_connection, but does this imply that #temp tables stay around until the next connection comes in & does the reset?
August 5, 2010 at 8:00 am
Steve T (8/4/2010)
Local temporary tables are discarded after the connection to the server is closed. If the connection stays open, the tables will stay in tempdb.
Only in ad hoc SQL queries or in dynamic SQL. In theory, if you are using a stored procedure, in all versions of SQL Server I know of, all temp tables are supposed to be dropped as soon as the stored procedure that created the temp table is completed, even if the connection stays open.
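This is easy to verify with a throwaway procedure (the proc name below is just for the demo):

```sql
CREATE PROCEDURE dbo.temp_scope_demo
AS
BEGIN
    -- Created inside the procedure, so it is dropped automatically
    -- when the procedure returns, even though the connection stays open.
    CREATE TABLE #inside (id INT);
    INSERT INTO #inside VALUES (1);
    SELECT COUNT(*) AS rows_inside FROM #inside;  -- works here
END;
GO

EXEC dbo.temp_scope_demo;   -- succeeds, returns 1
SELECT * FROM #inside;      -- fails: Invalid object name '#inside'
```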
August 5, 2010 at 12:01 pm
jeff.mason (8/5/2010)
Steve T (8/4/2010)
Local temporary tables are discarded after the connection to the server is closed. If the connection stays open, the tables will stay in tempdb.
Only in ad hoc SQL queries or in dynamic SQL. In theory, if you are using a stored procedure, in all versions of SQL Server I know of, all temp tables are supposed to be dropped as soon as the stored procedure that created the temp table is completed, even if the connection stays open.
Ah, I stand corrected. Just tested this. Thank you.
August 5, 2010 at 12:07 pm
No problem. Lord knows how any of us keeps up with all of the complexity available.
I would guess that the reason for such functionality is that if you execute multiple stored procedures in sequence, each proc would have no idea which prior temp tables had been created, and you'd have errors all over the place (i.e. can't create the temp table because it exists already, or the column names are different, or whatever). By cleaning up after each procedure, you get consistency for the lifetime of the connection. I have seen such naming messes in some poorly designed DTS packages that used global temp tables and ran in parallel.
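The scenario described above can be sketched like this: two procedures (hypothetical names) each create a #temp table with the same name but different schemas, and running them back to back on one connection works precisely because each table is dropped when its creating procedure returns:

```sql
CREATE PROCEDURE dbo.proc_a AS
BEGIN
    CREATE TABLE #work (a INT);
    INSERT INTO #work VALUES (1);
END;
GO

CREATE PROCEDURE dbo.proc_b AS
BEGIN
    CREATE TABLE #work (b VARCHAR(10));  -- same name, different columns
    INSERT INTO #work VALUES ('ok');
END;
GO

EXEC dbo.proc_a;
EXEC dbo.proc_b;   -- no "table already exists" or schema-mismatch error
```

With a global ##temp table instead, the second procedure would collide with the first if both ran in the same window, which is the DTS-package mess described above.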