December 29, 2016 at 6:54 am
I need a recommendation on how many tempdb files to create. Can anyone suggest how many files there should be, or is there a query that can recommend how many files to create?
December 29, 2016 at 6:59 am
ramyours2003 (12/29/2016)
I need a recommendation on how many tempdb files to create. Can anyone suggest how many files there should be, or is there a query that can recommend how many files to create?
Microsoft themselves have posted recommendations. Did you even try to find this info for yourself?
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
December 29, 2016 at 2:43 pm
ramyours2003 (12/29/2016)
I need a recommendation on how many tempdb files to create. Can anyone suggest how many files there should be, or is there a query that can recommend how many files to create?
I think that is one that can fall into the "it depends" category.
Microsoft has more than one set of recommendations on the number of files, but one per core is the one you will see most often. However, that can also be a problem. Bob Ward came up with a different set of recommendations, which you can find in this Microsoft KB:
https://support.microsoft.com/en-us/kb/2154845
The reasons for the different recommendations, and some of the ways too many files can hurt performance, are covered in one of Paul Randal's blog posts from when he was doing the DBA myth a day posts (a good series worth reading, by the way).
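On the original question of whether a query can suggest a file count: here is a minimal sketch based on that KB's guideline (one data file per logical processor, capped at eight, then adjust from there). sys.dm_os_sys_info is a standard DMV; treat the output as a starting point, not a rule.

-- Suggested starting number of tempdb data files per KB 2154845:
-- one per logical processor up to 8, then 8 and tune as needed.
SELECT cpu_count AS logical_processors,
       CASE WHEN cpu_count <= 8 THEN cpu_count ELSE 8 END AS suggested_tempdb_files
FROM sys.dm_os_sys_info;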
Sue
December 31, 2016 at 4:51 am
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of.
Experience has shown me that an initial installation with 5 data files of 1GB each is a good start. After installation, I check for I/O contention and, if I find any, add two data files at a time until the contention stops.
It is also important to activate trace flag 1117 (on versions earlier than 2016) so that the data files grow equally.
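For anyone who wants to check for that contention directly rather than via the logs, here is a rough sketch using sys.dm_os_waiting_tasks; run it while the system is under load. The page numbers in the comments are the standard tempdb allocation pages (database_id 2).

-- Look for allocation contention in tempdb while the system is busy.
-- PAGELATCH waits on '2:1:1' (PFS), '2:1:2' (GAM) or '2:1:3' (SGAM)
-- suggest contention that additional data files can relieve.
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';  -- database_id 2 = tempdb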
December 31, 2016 at 7:30 am
kevaburg (12/31/2016)
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of....
Here is a direct quote from the Microsoft recommendations:
Create as many files as needed to maximize disk bandwidth. Using multiple files reduces tempdb storage contention and yields significantly better scalability. However, do not create too many files because this can reduce performance and increase management overhead. As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.
Not really a 'do it this way' directive, as you appear to be suggesting, but more of a guideline, in my opinion.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
December 31, 2016 at 8:16 am
Phil Parkin (12/31/2016)
kevaburg (12/31/2016)
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of.
The reason I said this is because of this statement:
As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.
If I have a server with 64 cores, do I really want (or need) 64 data files? How many accidental DBAs out there would simply think more is better? They don't often take the time to check whether they can reduce the number of files, but they will have to check the logs and adjust the files accordingly if there is a performance issue.
Maybe it is a question of interpretation, but I really do believe that starting small and working up is much better than guessing at a middle point.
But then it isn't an exact science, and what works for one environment might be less than optimal for another....
December 31, 2016 at 8:39 am
kevaburg (12/31/2016)
Phil Parkin (12/31/2016)
kevaburg (12/31/2016)
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of. The reason I said this is because of this statement:
As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.
If I have a server with 64 cores, do I really want (or need) 64 data files? How many accidental DBAs out there would simply think more is better? They don't often take the time to check whether they can reduce the number of files, but they will have to check the logs and adjust the files accordingly if there is a performance issue.
Maybe it is a question of interpretation, but I really do believe that starting small and working up is much better than guessing at a middle point.
But then it isn't an exact science, and what works for one environment might be less than optimal for another....
Good point, thanks for posting back.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
December 31, 2016 at 8:42 am
Phil Parkin (12/31/2016)
kevaburg (12/31/2016)
Phil Parkin (12/31/2016)
kevaburg (12/31/2016)
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of. The reason I said this is because of this statement:
As a general guideline, create one data file for each CPU on the server (accounting for any affinity mask settings) and then adjust the number of files up or down as necessary.
If I have a server with 64 cores, do I really want (or need) 64 data files? How many accidental DBAs out there would simply think more is better? They don't often take the time to check whether they can reduce the number of files, but they will have to check the logs and adjust the files accordingly if there is a performance issue.
Maybe it is a question of interpretation, but I really do believe that starting small and working up is much better than guessing at a middle point.
But then it isn't an exact science, and what works for one environment might be less than optimal for another....
Good point, thanks for posting back.
It's always good to have a sparring partner for such subjects! 🙂
December 31, 2016 at 9:12 am
And this is what we do in our production environment:
1. Four tempdb data files in the primary filegroup, all with exactly the same size (1024 MB) and the same autogrowth value (1 mdf and 3 ndf). With this setup, SQL Server uses its proportional fill algorithm to spread writes across the data files (a configuration sketch follows after this list).
2. It is good to keep a dedicated drive for tempdb, that is, separate from the user database drives.
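A minimal sketch of that layout, assuming the default first file is named tempdev; the extra file names (temp2 to temp4) and the T: drive path are placeholders, so adjust them to your environment:

-- Resize the default first file and add three more, all identical.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = temp2, FILENAME = 'T:\tempdb\temp2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = temp3, FILENAME = 'T:\tempdb\temp3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = temp4, FILENAME = 'T:\tempdb\temp4.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);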
Regards
VG
December 31, 2016 at 9:14 am
kevaburg (12/31/2016)
Microsoft's recommendations are a "one-shoe-fits-all" solution that I am not a huge fan of. Experience has shown me that an initial installation with 5 data files of 1GB each is a good start. After installation, I check for I/O contention and, if I find any, add two data files at a time until the contention stops.
It is also important to activate trace flag 1117 (on versions earlier than 2016) so that the data files grow equally.
Try that when you have multiple filegroups in other databases and witness the havoc it causes, because TF 1117 is a server-wide setting.
A better thing to do pre-2016 is to right-size the TempDB files and leave TF 1117 turned off. Of course, you should definitely turn TF 1118 on for pre-2016 installations.
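For reference, a minimal sketch of turning TF 1118 on for a pre-2016 instance; DBCC TRACEON with -1 applies the flag globally but does not survive a restart, so the startup parameter is the usual way to make it stick.

DBCC TRACEON (1118, -1);   -- enable globally on the running instance
DBCC TRACESTATUS (-1);     -- verify which global trace flags are active
-- To persist across restarts, add -T1118 as a startup parameter
-- in SQL Server Configuration Manager.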
--Jeff Moden
Change is inevitable... Change for the better is not.
December 31, 2016 at 9:20 am
SQL Learner - VKG (12/31/2016)
And this is what we do in our production environment: 1. Four tempdb data files in the primary filegroup, all with exactly the same size (1024 MB) and the same autogrowth value (1 mdf and 3 ndf). With this setup, SQL Server uses its proportional fill algorithm to spread writes across the data files.
2. It is good to keep a dedicated drive for tempdb, that is, separate from the user database drives.
Just to clarify: although you have the same autogrowth settings for each of the files, that doesn't guarantee they will grow at the same rate. The problem is that the proportional fill algorithm will favour larger files over smaller ones and create hot spots.
To get around this, activate trace flag 1117 to ensure equal growth across all of the files.
In SQL Server 2016 this behaviour is already built in for tempdb, but it is not in earlier versions....
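A quick way to verify that the files really are identical is to inspect tempdb's file metadata; a sketch (size is stored in 8KB pages, and growth is too unless it is a percentage):

SELECT name, type_desc,
       size * 8 / 1024 AS size_mb,
       growth, is_percent_growth   -- growth is in pages unless is_percent_growth = 1
FROM tempdb.sys.database_files;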
December 31, 2016 at 9:33 am
In the past, a lot of people have recommended multiple files for TempDB only if TempDB has a problem. And, in the past, I've ignored that advice because systems grow and there will eventually be some contention.
I can't speak for anyone else, but here's how I set up my systems at work.
1. If the system has 8 or fewer processor cores, I set TempDB up with one file per core. I don't use TF 1117 because of what I said in my post above, so I generally allocate 2GB per file and call it a day.
2. If the system has more than 8 processor cores, I set TempDB up with 8 files of 2GB each. Because I also do 100% peer reviews, we don't have any code that goes nuts on TempDB and, despite having several databases on my production instance that are approaching 1TB each, the files have never grown beyond their 2GB initial allocation in the last 5 years. To be sure, we do have a lot of large ETL and make pretty good use of TempDB in code, but the files have still never grown.
YMMV. We also have 256GB of RAM on the system so we don't get a whole lot of TempDB spills to disk even though the larger databases don't fit into RAM.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 31, 2016 at 9:42 am
Jeff Moden (12/31/2016)
In the past, a lot of people have recommended multiple files for TempDB only if TempDB has a problem. And, in the past, I've ignored that advice because systems grow and there will eventually be some contention. I can't speak for anyone else, but here's how I set up my systems at work.
1. If the system has 8 or fewer processor cores, I set TempDB up with one file per core. I don't use TF 1117 because of what I said in my post above, so I generally allocate 2GB per file and call it a day.
2. If the system has more than 8 processor cores, I set TempDB up with 8 files of 2GB each. Because I also do 100% peer reviews, we don't have any code that goes nuts on TempDB and, despite having several databases on my production instance that are approaching 1TB each, the files have never grown beyond their 2GB initial allocation in the last 5 years. To be sure, we do have a lot of large ETL and make pretty good use of TempDB in code, but the files have still never grown.
YMMV. We also have 256GB of RAM on the system so we don't get a whole lot of TempDB spills to disk even though the larger databases don't fit into RAM.
Point taken with the trace flag.....
January 9, 2017 at 7:08 am
SQL Learner - VKG (12/31/2016)
And this is what we do in our production environment: 1. Four tempdb data files in the primary filegroup, all with exactly the same size (1024 MB) and the same autogrowth value (1 mdf and 3 ndf). With this setup, SQL Server uses its proportional fill algorithm to spread writes across the data files.
2. It is good to keep a dedicated drive for tempdb, that is, separate from the user database drives.
Fully agree with the above. Check for contention and add more files if needed. Check I/O speed and put tempdb on fast storage. If you are using a SAN, work with the SAN admin to confirm that this area is quick and, if possible, under-utilized.
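For the I/O-speed check, here is a sketch using sys.dm_io_virtual_file_stats; the average stall per read and per write is a reasonable first proxy for how fast the tempdb storage actually is.

SELECT f.name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.num_of_writes,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs   -- database_id 2 = tempdb
JOIN tempdb.sys.database_files AS f ON f.file_id = vfs.file_id;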
I usually set trace flags T1118 and T1117 on.
If I remember correctly, one of them is no longer needed, but switching it on poses no issues; I can't remember which.