March 7, 2017 at 10:42 pm
Comments posted to this topic are about the item TempDB Database On a Local Disk for an FCI
Sadan Kumar Budde
March 8, 2017 at 1:24 am
Nice to know how to install / configure it, but I still do not know if and how this will work when a server crashes and has to fail over from Node1 to Node2.
Will all tempdb writes be replicated (synchronously or asynchronously?), or will all open transactions be rolled back? What if I created a #temp table or a global ##temp table, e.g. in SSMS? Will they still be available (and populated) after the failover?
The main reason for putting tempdb on local SSDs is performance, but any benefit would be lost if there were some sort of synchronisation between the nodes; on the other hand, I would need a sync if I depend on tempdb content.
God is real, unless declared integer.
March 8, 2017 at 7:13 am
The installation steps are clear, but the reasons for doing this type of installation are not quantified. You mentioned that the performance "may" be better. I think some examples comparing performance with tempdb files on an SSD SAN versus tempdb files on local disk would be helpful. For example, did you see a 50% improvement on certain queries that make heavy use of tempdb, or is this more like a 1% improvement? I expect this is related to database size, since it is clear to me that moving large amounts of data over the SAN would be slower than local disk - the question is how large the data needs to be before the difference is noticeable.
March 8, 2017 at 7:28 am
Performance tests would not be helpful / significant, since it depends very much on your database / application / environment. In an ideal world, you would run only simple selects that ALL use perfect indexes, so that no lookups / sorts / spills whatsoever are needed and your tempdb would never be touched. In that case you would not find any benefit from placing tempdb on a local SSD.
For every other database in the world it should be clear that reads and writes to a local RAID 10 with two or more SSDs will be MUCH faster than to a SAN. A single SSD can deliver around 500 MB per second with very low latency, while access to a SAN (particularly when it is busy) will usually be much slower, depending on the network connection and distance between the DB server and the SAN, the number and type of disks (HDD / SSD), the SAN cache, SAN load, encryption settings, and how your application uses tempdb....
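If you do want numbers for your own environment rather than a generic benchmark, the file-stats DMV gives cumulative tempdb I/O latency since instance start; a minimal sketch (run it before and after a representative workload, on SAN and on local disk, and compare):

```sql
-- Cumulative reads/writes and average latency per tempdb file.
SELECT mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```

The counters reset at every instance restart, so capture them close to the workload you are measuring.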
God is real, unless declared integer.
March 8, 2017 at 7:41 am
Tempdb is recreated with each restart.
There is no need to sync between nodes.
I have been doing this same thing with tempdb on my clusters for years.
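Because tempdb is rebuilt at every startup, the usual setup is simply to repoint its files at a local path; the drive letter below is an assumption, and the same path must exist (with permissions for the SQL Server service account) on every node:

```sql
-- Repoint tempdb at a hypothetical local drive T:.
-- tempdev / templog are the default logical file names.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
-- Takes effect at the next instance start; tempdb is then recreated
-- at the new location, so nothing needs to be synced between nodes.
```

After a failover, the instance starting on the other node recreates tempdb at the same local path there.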
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
March 8, 2017 at 7:51 am
I've found that putting tempdb on SSDs normally doesn't help much. Good code doesn't need it, and bad code might see a 2X-5X improvement, which is still a pitiful improvement compared to finding the performance-challenged code that uses too much tempdb and fixing it. Fixing such code can easily realize a 30-60X improvement, with a 1000X improvement not being uncommon, and it doesn't take much to accomplish such fixes.
Even MPP systems (which usually require a tweak of the code to take advantage of them) only advertise a 33X improvement.
If you want performance, performance is in the code. Bite the bullet and fix it before it bites you.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 8, 2017 at 8:04 am
Good post Sadan Kumar. Keep going. 🙂
Regards, Ganapathi Varma, MCSA. Email: gana20m@gmail.com
March 8, 2017 at 8:07 am
Michael L John - Wednesday, March 8, 2017 7:41 AM: Tempdb is recreated with each restart. There is no need to sync between nodes. I have been doing this same thing with tempdb on my clusters for years.
Yes, it is recreated with each server start. But I thought that failover servers allow me to fail over without interrupting every connection. When I restart my database server, I also have to stop and start the applications that are using it, because most applications need some init work and do not really like it when the server connection is lost. I may be too optimistic (we are not using clusters, so I have no experience), but a failover should be transparent to the apps (they should not even notice that anything happened).
If you want performance, performance is in the code. Bite the bullet and fix it before it bites you.
@jeff Moden: of course, fixing the code would usually yield much bigger improvements. But going to Amazon, ordering 3 consumer SSDs (hey, it's only tempdb), putting them into the server, and creating a RAID 10 would cost maybe 1000 EUR / USD and increases the speed immediately. Fixing the code in a big application could cost several man-years of work - if you can change it at all (not possible for most vendor apps, and of course you need somebody who is able to do it).
God is real, unless declared integer.
March 8, 2017 at 9:14 am
t.franz - Wednesday, March 8, 2017 8:07 AM (replying to Michael L John): @jeff Moden: of course, fixing the code would usually yield much bigger improvements. But ordering 3 consumer SSDs, putting them into the server, and creating a RAID 10 would cost maybe 1000 EUR / USD and increases the speed immediately. Fixing the code in a big application could cost several man-years of work - if you can change it at all.
Typically, it's only the top 10 things you need to fix and, typically, they're small things that certainly won't take "several man-years of work", yet can eliminate (personal experience) TRILLIONs of bytes of memory I/O every 8 hours and deliver a significant (easily 30-60X) reduction in CPU and duration. Yep... they need to be retested after being rewritten, but it's worth it. And, yes, putting tempdb on SSDs will help, but don't expect it to work a miracle for you. Further, if you don't actually fix the root cause and no one changes the way they write code, you'll need to buy more SSDs, because the abuse of tempdb will continue to scale with the data.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 27, 2020 at 10:47 am
Okay....
But now think of this: the local disk that holds tempdb fails.... how will the cluster or SQL detect this and initiate a failover?
November 27, 2020 at 11:52 am
TempDB is a critical resource; without it the system cannot operate, so any failure in TempDB will cause a service stop and thus a failover.
January 5, 2021 at 11:54 am
Hi Ant....
There is a caveat here.... if the local disk with the tempdb fails, SQL will *not* initiate a failover. My colleague Twan and I have written an article detailing that on https://www.sqlservercentral.com/articles/do-not-place-tempdb-on-a-local-disk-in-a-sql-failover-cluster
TL;DR: you need to check within SQL for the loss of the local disk and initiate the failover yourself.
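Such a check can be sketched as a scheduled canary write to tempdb; the response in the CATCH block is an assumption (the real action would be whatever stops the SQL Server resource so the cluster notices and fails over):

```sql
-- Canary check: if the local tempdb disk is gone, this write fails.
BEGIN TRY
    CREATE TABLE #tempdb_canary (id int);
    INSERT INTO #tempdb_canary VALUES (1);
    DROP TABLE #tempdb_canary;
END TRY
BEGIN CATCH
    -- tempdb is unusable: log a high-severity error so monitoring can
    -- trigger the failover (e.g. by stopping the service, which the
    -- cluster will detect).
    RAISERROR ('tempdb canary write failed - local tempdb disk may be lost',
               16, 1) WITH LOG;
END CATCH;
```

Running this from an Agent job every minute or so gives you the detection that the cluster itself does not provide for a non-clustered local disk.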