May 29, 2008 at 2:33 pm
How would you figure out what caused the tempdb file to grow? Do I just set up an ongoing trace and hope that the cause resurfaces? Or is there a more systematic way to figure out the problem?
I just restarted my SQL server 24 hours ago with a default tempdb size of 10GB and 10% autogrowth. Currently, it's sitting at 260GB. Unfortunately, I wasn't running traces at the time, so I don't have a real starting point for the cause.
Thanks in advance for your help,
Jon
May 29, 2008 at 4:57 pm
Wow!
What version of SQL Server are you running? I only ask because you might be able to find something useful in the 2005 dynamic management views. This site has a LOT of good monitoring scripts, and you could probably narrow down who the culprit might be.
http://www.microsoft.com/technet/scriptcenter/scripts/default.mspx?mfr=true
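For instance, on 2005 you can ask the session space usage DMV directly which sessions have allocated the most tempdb space. A minimal sketch (the counts are 8KB pages, so dividing by 128 gives MB):

```sql
-- Sessions ranked by tempdb space allocated (SQL Server 2005 and up).
-- Page counts are 8KB pages; /128 converts to MB.
SELECT  session_id,
        (user_objects_alloc_page_count
         + internal_objects_alloc_page_count) / 128 AS allocated_mb,
        (user_objects_dealloc_page_count
         + internal_objects_dealloc_page_count) / 128 AS deallocated_mb
FROM    sys.dm_db_session_space_usage
WHERE   database_id = 2          -- tempdb
ORDER BY allocated_mb DESC;
```

A session with a big gap between allocated and deallocated pages is a good first suspect.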
After that, if you are on 2000 or lower, you can set up an alert to test the size of tempdb every 5 minutes or so. Once it starts going over a certain size, have it send you an email, and you can go in right then and run sp_who2 to get your list of suspect spids and users. If they're growing tempdb THAT much, they've got a long-running query. You'll probably be able to tell which one it is just by looking. If you want to know what code a spid is running, use DBCC INPUTBUFFER (spid number).
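Something like this, for example (the spid here is just a placeholder; plug in whatever looks suspicious in your own sp_who2 output):

```sql
EXEC sp_who2;              -- look for long-running or high-IO spids
DBCC INPUTBUFFER (53);     -- 53 = placeholder spid from the list above
```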
Good luck!
Deb
😎
May 30, 2008 at 6:08 am
I would definitely start with the trace to see which procedures are running long, because I'll bet they're the same ones causing this problem. There are some great ways to monitor tempdb in this Microsoft white paper. I'd use them.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 30, 2008 at 6:21 am
I'm running SQL Server 2005 SP2.
The tempdb stopped growing at 260GB. However, I'm planning to shrink down the tempdb and hopefully catch the culprits.
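The shrink I have in mind would look something like this (assuming the default logical file name 'tempdev'; I'll check sys.database_files in tempdb in case ours is different):

```sql
USE tempdb;
-- Target size is given in MB, so 10240 = 10GB.
DBCC SHRINKFILE (tempdev, 10240);
```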
May 30, 2008 at 6:31 am
You're in Full Recovery mode and have log backups in place, right?
May 30, 2008 at 10:57 am
Grant,
Why would it be necessary to be in full recovery mode and doing a log backup before shrinking? Wouldn't SQL Server just throw an error if processes are currently using tempdb? Or is it just good practice to do backups before you do anything?
Thanks,
Jon
May 30, 2008 at 11:14 am
Not as such, no. I was just trying to get an understanding of how things were configured. If you're truncating on checkpoint, uh, I mean, in simple recovery mode, then you need to look for only a few transactions, or even one, as the source of the problem. If you're in full recovery mode, then it could be a lot more transactions causing the problem, building things up between log backups, however frequently you're taking them.
Is that at all clear, or am I starting to babble?
May 30, 2008 at 12:26 pm
Grant,
I guess I'm unclear. To detect the causes of tempdb growth, I was going to shrink tempdb back to 10GB and start monitoring from that point (using the suggestions provided earlier in the thread).
I'm not sure I understand your last reply. Whether I'm in simple mode or not, what transactions would I be able to troubleshoot to solve the tempdb problem?
Thanks,
Jon
May 30, 2008 at 12:34 pm
Transactions on databases are generally short; actually, they should be as short as possible most of the time. That means a small entry in the log, and the log gets truncated at each checkpoint, which should keep it very small. With a log in Full recovery mode, the transactions, even if small, build up until a log backup is run. So you might have a half hour (or whatever schedule you're backing up the logs on) of log data that you're storing. That means you'll generally be looking at a much bigger log file to start with, and more variation in what the log size needs to be, taking into account surges, the occasional long transaction, what have you.
In your case, you're in simple mode. For the log to go to 300GB, something, probably a single transaction, is going nuts and filling it up. You'll still need to run Profiler and/or look at the DMVs outlined in the MS article above, but unlike the FULL model, you don't need to worry about all the other transactions being kept, because nothing is.
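To catch it in the act, something along these lines against the task-level DMV should show you the currently running statement doing the allocating (a rough sketch, 2005 and up):

```sql
-- Running requests with their tempdb allocations and statement text.
SELECT  t.session_id,
        t.internal_objects_alloc_page_count / 128 AS internal_mb,
        t.user_objects_alloc_page_count / 128     AS user_mb,
        st.text                                   AS running_sql
FROM    sys.dm_db_task_space_usage AS t
JOIN    sys.dm_exec_requests AS r
  ON    r.session_id = t.session_id
 AND    r.request_id = t.request_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE   t.database_id = 2        -- tempdb
ORDER BY internal_mb DESC;
```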
It's Friday afternoon; I'm definitely babbling now.
May 30, 2008 at 12:54 pm
Grant,
That makes sense. However, I think I was unclear in my initial problem description. Here are the current file sizes for tempdb:
tempdb.mdf = 261GB
tempdb.ldf = 512MB
The log file isn't the culprit for the huge file size. Something caused the tempdb.mdf file to autogrow out of control. I was planning to shrink the mdf file to 10GB and start profiling, etc.
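For the record, this is how I'm checking the file sizes (size is reported in 8KB pages, hence the /128):

```sql
SELECT  name, physical_name, size / 128 AS size_mb
FROM    tempdb.sys.database_files;
```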
Please let me know if something is still unclear.
Jon