June 30, 2017 at 8:04 am
Hi, I noticed yesterday that I have the following setup for one of my production virtual database servers:
1 socket, 8 vCPUs
32 GB RAM
SQL Server 2014 Std
1 tempdb file - Initial size 2262 MB, Autogrowth 10% , Max Size Unlimited
I want to split tempdb into 8 files with an initial size of 8 MB each (total 64 MB), autogrowth of 64 MB, and Max Size Unlimited. It seems this would be more appropriate for the server specs and more likely to prevent future tempdb contention.
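For reference, the change I'm picturing would be something along these lines (the file path is just a placeholder, and I realize the existing file can't be shrunk below its current size this way):

```sql
-- Shrinking the existing tempdev file down to 8MB would need DBCC SHRINKFILE first;
-- ALTER DATABASE ... MODIFY FILE cannot reduce a file below its current size.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILEGROWTH = 64MB);

-- Placeholder path; repeat for tempdev3 through tempdev8, all equally sized
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 8MB, FILEGROWTH = 64MB);
```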
However, I'm by no means an expert when it comes to TempDB. Is there a good reason not to split tempdb here?
Thanks in advance,
Mike
Mike Scalise, PMP
https://www.michaelscalise.com
June 30, 2017 at 10:37 am
Mike Scalise - Friday, June 30, 2017 8:04 AM
There is a VERY good reason not to split tempdb into 8 files: you can CREATE IO slowness by doing so. Too many files on too few spindles forces more head movement and disk spin to access the files. If you have a bunch of disks, great.
Much worse than that, though, is your extremely small initial size and very small growth increment. You currently have a 2.2GB tempdb file but you want to start with 8MB?!? Your tempdb will likely grow right back, in roughly 34 (2200/64) small increments, fragmenting the files along the way, which just doesn't make sense. Make the total size as big as it needs to be and be done with it.
Assuming you have Instant File Initialization enabled (you should), I would go with something like 512MB for file growth.
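Something like this, assuming the default logical file names (adjust the sizes to what your workload actually uses):

```sql
-- Pre-size tempdb once and use a fixed growth increment rather than 10%
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 2560MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 512MB,  FILEGROWTH = 256MB);
```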
Have you ever detected any tempdb object allocation issues on this server? On ANY server? I have been consulting on SQL Server for over 20 years and have seen that just a bit more times than I have fingers and toes. And most of those were resolved by a) trace flags (1117/1118), and/or b) fixing some suboptimal code, and/or c) moving tempdb off an extremely slow IO subsystem.
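If you want to see whether those trace flags are already active, a quick check is:

```sql
-- Reports whether 1117/1118 are enabled globally (-1 = global scope)
DBCC TRACESTATUS (1117, 1118, -1);
-- DBCC TRACEON (1117, -1) lasts only until restart; to make the flags permanent,
-- add -T1117 and -T1118 as startup parameters in Configuration Manager.
```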
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
June 30, 2017 at 6:49 pm
Kevin,
Thank you for the response and information. I do have a few follow-up comments and questions.
There is a VERY good reason to not split tempdb into 8 files: you can CREATE IO slowness in doing so. Too many files on too few spindles forced more head movement and disk spin to access the files. If you have a bunch of disks, great.
Perhaps I'm way off here, but I thought a good rule of thumb was to make the number of tempdb files = the number of cores up to 8. After 8, add sets of four, equally sized.
https://www.sqlskills.com/blogs/paul/correctly-adding-data-files-tempdb/
Also, as I'm sure you know, the SQL Server 2016 setup will attempt to split tempdb into 4 files by default. Keeping it at one file seems like it's becoming more the exception than the rule, but maybe I'm missing something.
Much worse than that though is your extremely small initial size, and very small growth. You currently have a 2.2GB tempdb file but you want to start with 8MB?!? Your tempdb will likely quickly grow by 2200/64 fragments, which just doesn't make sense. Make the total size as big as it needs to be and be done with it.
Fair enough. My rationale behind the numbers here over the existing numbers was simply the defaults from the SQL Server 2016 setup. I mean, yes, I know that my current tempdb is 2.2GB, but if I were doing a new install of SQL Server that will house three applications' databases, how would I possibly know what size tempdb would grow to and what autogrowth chunk size makes the most sense? So if I didn't mention what the current size was, what would be best practice?
Now, since I did mention the current size, your point makes a lot of sense. And if I'm understanding you right, you would make the initial size something like 2.5GB?
Assuming you have Instant File Initialization enabled (you should) I would go with something like 512MB for file growth.
I don't think I have Instant File Initialization enabled for this 2014 instance or any of my 2016 instances. I believe it's not enabled by default in 2014 and 2016 setups, but if you think there's a lot more to gain in terms of performance by avoiding the zeroing out of data pages, then I'll definitely look into it. What's the risk in using it?
Also, if I don't enable that option, what would you suggest the autogrowth be? 64MB? 128MB?
Regarding TF 1117 and 1118, they've been on my list to research and potentially add to the 2014 instances, but I have an overarching project to upgrade to 2016, at which point I won't need to specify those trace flags anymore.
----------
Related to all of that, is there a good, accurate way to get performance numbers behind tempdb so that I can get a baseline, then run before and after tests to see the impact each individual change has on tempdb performance? I would love to be able to prove with numbers that enabling Instant File Initialization or increasing the autogrowth has had a positive effect. Apologies for what's probably a very basic question, but tempdb tuning is a new area for me.
Thank you,
Mike
Mike Scalise, PMP
https://www.michaelscalise.com
July 1, 2017 at 3:20 pm
Mike Scalise - Friday, June 30, 2017 6:49 PM
A) IFI should be on for all but the absolute most secure SQL Servers. The reason it is off by default is "security": the disk space SQL Server is given is not wiped, so someone with a hex editor can open the files and see what is in them if the pages haven't been overwritten yet. I submit that if I am opening a hex editor on your SQL Server, you are already p0wned.
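As for checking it: on 2016 SP1 and later builds there is a DMV column for it; on 2014 you have to enable trace flags 3004 and 3605 and watch the errorlog for zeroing messages instead. Something like:

```sql
-- instant_file_initialization_enabled exists from SQL Server 2016 SP1
-- (and was backported to 2012 SP4); it is not available on 2014.
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services;
```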
B) I am giving you my experience. Tempdb allocation issues are a rarity. Having too many files on too few spindles is not. And if you do identify tempdb allocation issues, the solution is straightforward to implement.
C) If you have no idea what size to make tempdb because it is a new server with new databases, then I would probably pick some fraction of the largest database, with the fraction depending on whether it is a reporting- or OLTP-centric instance.
D) There are a variety of resources to be found on monitoring tempdb performance. There is a DMV performance tuning book (a Redgate freebie), Glenn Berry's SQL Server diagnostic queries have some material on it, and there are some SQL Saturday sessions on it too.
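For a simple baseline, snapshot the cumulative file stats before and after each change and compare the deltas. A sketch:

```sql
-- Cumulative reads/writes and IO stalls per tempdb file since instance start
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS file_name,
       vfs.num_of_reads,  vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```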
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
July 4, 2017 at 7:16 am
TheSQLGuru - Saturday, July 1, 2017 3:20 PM
Thanks, Kevin. You make some compelling arguments. I'm going to do some more research and testing. Thanks for your help and enjoy the holiday.
Mike
Mike Scalise, PMP
https://www.michaelscalise.com
July 4, 2017 at 10:26 am
TheSQLGuru - Friday, June 30, 2017 10:37 AM
There is a VERY good reason to not split tempdb into 8 files: you can CREATE IO slowness in doing so. Too many files on too few spindles forced more head movement and disk spin to access the files. If you have a bunch of disks, great.
Not to mention it's a virtual machine with virtual disks anyway, so all this is moot.
Mike Scalise - Friday, June 30, 2017 8:04 AM
Hi, I noticed yesterday that I have the following setup for one of my production virtual database servers:
1 socket, 8 vCPUs
Use virtual sockets rather than virtual cores; check the VMware virtualisation guide for SQL Server for more info.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
July 4, 2017 at 1:14 pm
Perry Whittle - Tuesday, July 4, 2017 10:26 AM
Use virtual sockets rather than virtual cores; check the VMware virtualisation guide for SQL Server for more info.
However, watch licensing restrictions if this isn't Enterprise edition.
Standard edition, for example, is limited to the lesser of 4 sockets or 16 cores.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 5, 2017 at 9:23 am
" I submit that if I am opening a hex editor on your SQL Server you are already p0wned. "
Bingo 🙂
------------------------------------------------------------------------------------------------
Standing in the gap between Consultant and Contractor
Kevin3NF
DallasDBAs.com/Blog
Why is my SQL Log File HUGE?!?
The future of the DBA role...
SQL Security Model in Plain English
July 6, 2017 at 5:28 am
I guess I'm not understanding the comment(s) about why to use sockets instead of cores. Is it something like this:
1 socket, 8 vCPUs is the same as 8 sockets, 1 vCPU each in a virtual world, and therefore go with the latter option because it's less $$ in licensing costs? Or am I missing something?
If so, Gail, you're saying that if I'm running Standard edition, 8 sockets and 1 vCPU wouldn't jibe with that? If that's what you're saying, then should I use something like 4 sockets with 4 vCPUs?
And if that's the route I should take, does that mean, if I've licensed 8 cores and have a 4-socket, 4-vCPU configuration, I can install a second instance of SQL Server on a different server and utilize the remaining 4 cores of my license?
I'm not sure if my thought process makes sense at all, so please let me know if I'm thinking the right way or I'm way off.
Thanks,
Mike
Mike Scalise, PMP
https://www.michaelscalise.com
July 6, 2017 at 5:59 am
Mike Scalise - Thursday, July 6, 2017 5:28 AM
1 socket 8 vCPUs is the same as 8 sockets 1 vCPU in a virtual world, and therefore go with the latter option because it's less $$ in licensing costs, or am I missing something.....?
Licensing of SQL Server and Windows is per-core these days, not per socket. Have 8 cores, license 8 cores.
If so, Gail, you're saying that if I'm running the standard edition, 8 sockets and 1 vCPU wouldn't jive with that?
Correct. Standard edition is limited to the lesser of 4 sockets or 16 cores. So if you have 8 sockets, each with a single core, Standard edition uses just 4 of them (but you still have to pay for all 8 cores, AFAIK).
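You can see whether the edition is leaving schedulers unused with something like:

```sql
-- Schedulers SQL Server can see but will not use (e.g. edition core limits)
SELECT parent_node_id, scheduler_id, status
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE OFFLINE';
```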
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 6, 2017 at 6:34 am
Thank you for clarifying. That makes more sense to me.
Sometimes this stuff can get very confusing.....
Mike
Mike Scalise, PMP
https://www.michaelscalise.com
July 6, 2017 at 3:04 pm
Mike Scalise - Thursday, July 6, 2017 5:28 AM
I guess I'm not understanding the comment(s) about why to use sockets instead of cores. Is it something like this: 1 socket 8 vCPUs is the same as 8 sockets 1 vCPU in a virtual world, ...
Thanks,
Mike
That's not a true statement. NUMA interworkings would certainly be different, and I am pretty certain (though I don't know enough of the internals) that there are other differences too, such as L1/L2/L3 cache access and usage, memory access and usage, etc.
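You can at least see how the two layouts present to SQL Server by comparing the memory node view on each:

```sql
-- NUMA node layout as SQL Server sees it; a 1x8 and an 8x1 vCPU layout
-- can present different node counts here (the DAC node is excluded)
SELECT node_id, node_state_desc, online_scheduler_count, active_worker_count
FROM sys.dm_os_nodes
WHERE node_state_desc NOT LIKE '%DAC%';
```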
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service