November 10, 2012 at 11:45 pm
I have noticed contention on our tempdb, so I want to split the data file into multiple files. We have 24 cores, and I know I shouldn't split into more than 8 files. Our drive capacity is 100 GB; how much space should I assign to each of these files? Let's say I assign 2 GB to each file, when will a transaction switch to the next file?
November 11, 2012 at 8:00 am
Allocation in the files belonging to tempdb is done using proportional fill. This means that when the files are the same size, allocations are distributed across all of them. It is very important to set all files to the same initial size and the same growth, because if the sizes differ, the load on each file will differ as well. As for the size you should use – that is something no one here can tell you. It depends on how heavily you use tempdb, and we simply don't know that.
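If it helps, here is a minimal sketch of how you might bring the existing data files to the same size and growth. The logical names (tempdev, tempdev2) and the 2 GB / 512 MB values are only placeholders – check sys.master_files for the names actually in use on your instance:
-- Confirm the logical names first:
-- SELECT name, size, growth FROM sys.master_files WHERE database_id = 2;
-- Set every tempdb data file to the same initial size and the same fixed growth
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev,  SIZE = 2048MB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev2, SIZE = 2048MB, FILEGROWTH = 512MB);
-- ...repeat for the remaining data files with identical values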
Adi
--------------------------------------------------------------
To know how to ask questions and increase the chances of getting answers:
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
November 11, 2012 at 9:10 am
sqldba_newbie (11/10/2012)
I have noticed contention on our tempdb, so I want to split the data file into multiple files. We have 24 cores, and I know I shouldn't split into more than 8 files. Our drive capacity is 100 GB; how much space should I assign to each of these files? Let's say I assign 2 GB to each file, when will a transaction switch to the next file?
While there is no one right answer to the question of how many tempdb data files to use, there are some best practices to follow. First, I would recommend reading Paul Randal's blog post on tempdb file allocation myths.
Creating one file per processor core is a myth perpetuated by many who do not fully understand how allocations in tempdb work. We typically allocate files based on the number of NUMA nodes in the server and either adjust it up or down depending on performance outcomes. Our primary SQL Server has 32 cores, but only 4 NUMA nodes. We therefore started with 4 data files and increased that number to 8 after tests showed we got better performance.
Make sure you have a tempdb allocation problem first by confirming that you are seeing file I/O latch waits in tempdb (database = 2).
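As a rough starting point for the file count (not a rule), you can read the number of NUMA nodes SQL Server sees from the scheduler DMV:
-- Number of NUMA nodes visible to SQL Server
SELECT COUNT(DISTINCT parent_node_id) AS numa_nodes
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';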
November 12, 2012 at 10:46 am
How big is TempDB now? Unless you have a good reason, you probably want the total size of all the files to be about the same as the total size it is now with one file. So if TempDB is about 100GB now and you're splitting into 8 files, you want something around 12.5 GB per file. They should all be the same size.
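For example, something along these lines would add the extra files at equal sizes. The file names, path, growth value and the 12800 MB figure are only placeholders based on the 100 GB / 8 files arithmetic above – adjust them for your environment:
-- Add data files 2 through 8, each the same size and growth
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 12800MB, FILEGROWTH = 1024MB);
ALTER DATABASE tempdb ADD FILE
    (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 12800MB, FILEGROWTH = 1024MB);
-- ...repeat through tempdev8, and resize the original file to 12800MB as well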
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 12, 2012 at 10:55 am
George M Parker (11/11/2012)
Make sure you have a tempdb allocation problem first by confirming that you are seeing file I/O latch waits in tempdb (database = 2).
No.....
IO latch contention is not why one would typically split TempDB. IO latch contention (PageIOLatch) suggests that the IO subsystem is being driven beyond what it can handle, maybe because there's too much demand, maybe because there's a poor IO subsystem.
If there's excessive IO latch contention in TempDB that would typically suggest that TempDB needs to be moved to different storage array, one that can handle more throughput (potentially SSD) or that the app needs to be tuned to read/write less to TempDB.
The indicator that TempDB has the allocation problem commonly solved by adding files is latch contention (PageLatch) on the first set of allocation pages in the file. These typically appear as PageLatch waits on 2:1:3 (most common), 2:1:2 or 2:1:1.
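If you want to see this while it's happening, a quick point-in-time check of waiting tasks works. Note that sys.dm_os_waiting_tasks only shows what is waiting right now, so you may need to sample it repeatedly under load:
-- PAGELATCH (not PAGEIOLATCH) waits on tempdb pages; resource_description is db:file:page
SELECT session_id, wait_type, wait_duration_ms, resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';
-- Waits on 2:1:1 (PFS), 2:1:2 (GAM) or 2:1:3 (SGAM) indicate allocation contention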
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 13, 2012 at 9:31 am
GilaMonster (11/12/2012)
George M Parker (11/11/2012)
Make sure you have a tempdb allocation problem first by confirming that you are seeing file I/O latch waits in tempdb (database = 2).
No.....
IO latch contention is not why one would typically split TempDB. IO latch contention (PageIOLatch) suggests that the IO subsystem is being driven beyond what it can handle, maybe because there's too much demand, maybe because there's a poor IO subsystem.
If there's excessive IO latch contention in TempDB that would typically suggest that TempDB needs to be moved to different storage array, one that can handle more throughput (potentially SSD) or that the app needs to be tuned to read/write less to TempDB.
The indicator that TempDB has the allocation problem commonly solved by adding files is latch contention (PageLatch) on the first set of allocation pages in the file. These typically appear as PageLatch waits on 2:1:3 (most common), 2:1:2 or 2:1:1.
We saw a lot of PAGELATCH_UP and PAGELATCH_SH waits. We split tempdb into 8 equally sized files and now see very few waits on tempdb. I would like to know where the round-robin and proportional fill algorithms are applied?
November 29, 2012 at 2:48 pm
Still trying to understand when and where tempdb would use the proportional fill algorithm versus round-robin.