June 8, 2010 at 6:51 am
Hi,
I am facing a problem on SQL 2000: the data file (.mdf) of tempdb keeps growing at random intervals (it occurs at different times) and resets back to its original size after restarting SQL Server. I have already set the recovery model to "simple" and enabled auto-growth. The same database environment was built on several servers, and the problem was seen on only one of them.
So I have not been able to determine the cause of this behaviour, since the same maintenance plans, design, and database implementations are applied on all servers.
Would a continuous backup of transaction logs cause growth from 8 MB up to tens of GB, given that the backup runs periodically at short intervals and the databases do not usually handle a huge number of transactions?
I need to know how to monitor tempdb status and narrow down the cause of my problem.
Nadia
June 10, 2010 at 1:32 am
I would imagine there are some transactions running on your system that are making tempdb grow. These could be creating a lot of temporary tables or running lots of cursors, as I've seen on our system. Tempdb always resets back to its original size (the size of the model db) after a restart of SQL Server, unless you set it to a different size using ALTER DATABASE.
It should also be set to SIMPLE recovery mode as you have done and the autogrow value should be set to a sensible amount.
I would be tempted to run Profiler on your system to see which transactions are running and are potentially IO intensive. You can also run DBCC OPENTRAN ('tempdb') when you spot the growth, to see which open transactions have been running in tempdb for a long duration.
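To put that together, here is a minimal sketch (SQL 2000 compatible) combining the open-transaction check with a quick look at tempdb's file sizes via the per-database sysfiles table:

```sql
-- Report the oldest active transaction in tempdb, if any
DBCC OPENTRAN ('tempdb');

-- tempdb file sizes; sysfiles reports size in 8 KB pages,
-- so size * 8 / 1024 converts to MB
SELECT name,
       size * 8 / 1024 AS size_mb
FROM tempdb.dbo.sysfiles;
```

Running this periodically (or when the growth is spotted) shows whether the data file or the log file is the one ballooning.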
June 11, 2010 at 4:33 am
Hi,
Thanks for your reply..
Regarding the recovery model, it is already set to Simple. Concerning open transactions, I have tried monitoring them with DBCC OPENTRAN ('tempdb') and nothing was reported as an old transaction. Concerning Profiler, what are the optimal events to monitor? The more events I monitor, the more CPU the server consumes, and I would rather not risk the server's stability.
Nadia
June 11, 2010 at 5:27 am
As for Profiler, to trace any long-running transactions I tend to remove the Security Audit and Sessions event classes and just trace:
Stored Procedures - RPC:Completed
TSQL - SQL:BatchCompleted
Then on the Filter tab, set Duration greater than 5000 ms. This will capture any transactions running longer than 5 seconds.
You could also set up an alert in SQL Server to email you when the tempdb database is growing; you may see something with DBCC OPENTRAN ('tempdb') at that point.
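An alert like that can be scripted with SQL Agent's sp_add_alert on a performance condition. A sketch, assuming an existing operator named 'DBA' and an example threshold of roughly 10 GB (adjust both for your environment):

```sql
-- Fire when tempdb's data file size exceeds ~10 GB (value is in KB)
EXEC msdb.dbo.sp_add_alert
    @name = N'tempdb growth alert',
    @performance_condition = N'SQLServer:Databases|Data File(s) Size (KB)|tempdb|>|10000000';

-- Notify the (assumed) 'DBA' operator by email when the alert fires
EXEC msdb.dbo.sp_add_notification
    @alert_name = N'tempdb growth alert',
    @operator_name = N'DBA',
    @notification_method = 1;  -- 1 = email
```

When the email arrives you can run DBCC OPENTRAN ('tempdb') straight away, while the offending transaction is still open.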
Hope this helps
June 11, 2010 at 7:13 am
nadia-720162 (6/8/2010)
Would a continuous backup of transaction logs cause growth from 8 MB up to tens of GB, given that the backup runs periodically at short intervals and the databases do not usually handle a huge number of transactions?
I believe you'll find the problem is a query with an "accidental cross join". Now, don't look for the words "CROSS JOIN" in your queries, because you won't find it that way. An "accidental cross join" is a join where someone doesn't really understand the data behind the scenes and has accidentally created a "many-to-many" join, usually caused by insufficient join criteria. You'll need to use SQL Server Profiler to find it. I can almost guarantee it will be the query with the most Reads. Set up SQL Profiler to look for any batch completion with more than a million reads as a start.
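To illustrate the idea (table and column names here are hypothetical, not from the original post): if a join condition is missing a column, every matching row on one side pairs with every matching row on the other, and the intermediate result that spills into tempdb explodes.

```sql
-- Hypothetical example of an accidental many-to-many join.
-- If a customer has 1,000 Orders rows and 1,000 Payments rows,
-- joining on CustomerID alone yields 1,000,000 rows for that
-- customer instead of one row per order.
SELECT o.CustomerID, o.OrderID, p.PaymentID
FROM Orders o
JOIN Payments p
  ON p.CustomerID = o.CustomerID;
  -- missing: AND p.OrderID = o.OrderID
```

Queries like this are exactly the ones that show up in Profiler with an enormous Reads count.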
--Jeff Moden
Change is inevitable... Change for the better is not.