June 3, 2019 at 4:07 am
Hello,
Currently, we have 8 tempdb files on our server. We size them equally using the available storage on the drive. What is the best way to know when we should extend the drive and the tempdb file sizes?
Sincerely,
m
June 3, 2019 at 9:36 am
Keep an eye on the free space in the files, and extend the files when it approaches zero. Likewise, keep an eye on the free space on disk, and extend the disk when that approaches zero.
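If it helps, something along these lines will show both numbers at once. It's only a sketch; the DMVs are standard, but treat the output as a starting point.

-- Free space inside the tempdb data files, in MB
SELECT SUM(unallocated_extent_page_count) * 8 / 1024 AS tempdb_free_mb
FROM tempdb.sys.dm_db_file_space_usage;

-- Free space on the volume(s) holding the tempdb files, in MB
SELECT DISTINCT vs.volume_mount_point,
       vs.available_bytes / 1048576 AS volume_free_mb
FROM tempdb.sys.database_files AS f
CROSS APPLY sys.dm_os_volume_stats(DB_ID('tempdb'), f.file_id) AS vs;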
If you're experiencing growth in your tempdb files, you may wish to look at your workload to see what is causing that. Perhaps stale statistics are leading to inappropriate memory grants and hence spills to tempdb. That's just one possibility.
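If you want to check that theory, a rough sketch like this will surface candidates. Run it in the suspect user database; the 1000-row and 30-day thresholds are only illustrative, and the table name in the comment is a placeholder.

SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.name                   AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
  AND (sp.modification_counter > 1000
       OR sp.last_updated < DATEADD(DAY, -30, SYSDATETIME()))
ORDER BY sp.modification_counter DESC;

-- Then, for anything that looks stale:
-- UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;  -- table name is a placeholder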
John
June 3, 2019 at 4:39 pm
Thank you John.
June 4, 2019 at 5:05 pm
I wouldn't allocate all of the disk up front. It gives you zero time to react.
Here's what I recommend...
For example... if you have a tempdb drive that has 100GB and let's say you know that your normal workload only needs, say, 12GB, then set each of the 8 files to 2GB for 16GB total. After that, create a couple of SQL Agent alerts that will fire at 20GB (20000000 KB) and at other levels. Each alert can be set up to send an email and execute a job to do other things that may be necessary, such as capturing what's running and who or what is currently using tempdb.
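Roughly along these lines. This is a sketch only; the logical file name, job name, and operator name below are placeholders you'd swap for your own.

-- Presize one tempdb data file to 2GB (check sys.database_files for your logical names)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2048MB);

-- Alert when total tempdb data file size crosses 20GB (20000000 KB).
-- The counter path below is for a default instance; named instances use MSSQL$InstanceName:Databases.
EXEC msdb.dbo.sp_add_alert
     @name                  = N'tempdb data files over 20GB',
     @performance_condition = N'SQLServer:Databases|Data File(s) Size (KB)|tempdb|>|20000000',
     @job_name              = N'Capture tempdb usage';  -- placeholder job that logs who/what is using tempdb

EXEC msdb.dbo.sp_add_notification
     @alert_name           = N'tempdb data files over 20GB',
     @operator_name        = N'DBA Team',               -- placeholder operator
     @notification_method  = 1;                         -- 1 = email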
That should give you time to check out what's going on with your server and, possibly, kill a session, add more space, or whatever.
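For the "who or what is using tempdb" part, one possible starting point (just a sketch; pages are 8KB, so the math below converts to MB) is:

SELECT su.session_id,
       es.login_name,
       es.program_name,
       (su.user_objects_alloc_page_count
        + su.internal_objects_alloc_page_count) * 8 / 1024 AS tempdb_alloc_mb
FROM tempdb.sys.dm_db_session_space_usage AS su
JOIN sys.dm_exec_sessions AS es
  ON es.session_id = su.session_id
ORDER BY tempdb_alloc_mb DESC;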
As a bit of a sidebar, our server handles a pretty decent OLTP load plus some monster batch jobs. Even with all that, the only time we go over the original 16GB (total) is when some ad hoc special thing needs to happen and, even then, we normally don't go over.
If you're going over all the time, something is wrong with the code on the box and you need to spend some serious time fixing the code.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 5, 2019 at 8:34 pm
Thanks for the replies.
Yes, we are working on alerts now.
In our case it's a DW. Since we have multiple files that are currently set with limited growth, would it affect performance if a process straddles two of the files?
Thank you,
June 6, 2019 at 3:10 am
All else being equal, no. It shouldn't hurt at all. If the files are actually on different physical spindles, you can get a performance boost from it. If your "disks" are actually SSDs, then probably not.
I'll also say that you "can" get a performance boost from different physical spindles but, as with all else in SQL Server, "It Depends", mostly on how the code is written and how the data is distributed across the files.
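If you want to verify how the data is actually spread, a quick sanity check like this (just a sketch) lists size and allocations per tempdb data file:

SELECT f.file_id,
       f.name                                       AS logical_name,
       fsu.total_page_count * 8 / 1024              AS size_mb,
       fsu.allocated_extent_page_count * 8 / 1024   AS allocated_mb
FROM tempdb.sys.dm_db_file_space_usage AS fsu
JOIN tempdb.sys.database_files AS f
  ON f.file_id = fsu.file_id
ORDER BY f.file_id;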
--Jeff Moden
Change is inevitable... Change for the better is not.