March 15, 2013 at 12:46 pm
I have a Teradata environment with a database around 3 TB. This server has around 40 processor cores. Microsoft suggested using only 32 processors instead of 40 because of a bug.
So, in order to build the tempdb database, I am starting with 8 tempdb data files and 1 log file. I have two solid-state drives, each 290 GB in size, for a total of 580 GB across the two drives. I am dividing my files in the following way, and I need your suggestions on the layout. My environment uses tempdb heavily. Please comment on my plan:
T1: 70 GB
T2: 70 GB
T3: 70 GB
T4: 70 GB
T5: 70 GB
T6: 70 GB
T7: 70 GB
T8: 70 GB
Tlog: 10 GB
By dividing the files this way, will I run into any problems?
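For reference, the layout described above could be scripted roughly as follows. This is a sketch only: the logical file names (tempdev, temp2, ...) and the drive paths (E:\, F:\) are assumptions, not taken from the post.

```sql
-- Sketch: 8 equal 70 GB tempdb data files split across two SSDs, fixed size.
-- Logical names and paths are assumptions; verify with tempdb.sys.database_files.
USE master;

-- Resize the two default files to the planned fixed sizes.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10GB);

-- Add the remaining data files, 4 per drive (some changes only take full
-- effect after an instance restart).
ALTER DATABASE tempdb ADD FILE (NAME = temp2, FILENAME = 'E:\tempdb\temp2.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp3, FILENAME = 'E:\tempdb\temp3.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp4, FILENAME = 'E:\tempdb\temp4.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp5, FILENAME = 'F:\tempdb\temp5.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp6, FILENAME = 'F:\tempdb\temp6.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp7, FILENAME = 'F:\tempdb\temp7.ndf', SIZE = 70GB, FILEGROWTH = 0);
ALTER DATABASE tempdb ADD FILE (NAME = temp8, FILENAME = 'F:\tempdb\temp8.ndf', SIZE = 70GB, FILEGROWTH = 0);
```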
March 15, 2013 at 1:03 pm
I'd recommend you place your TEMPDB Log file on a different drive...
Seems like nice hardware for the server regarding CPUs, but what about RAM and the disk subsystem? I am going to assume there's nothing else on these 2 SSDs except tempdb, and would suggest purchasing two additional drives so you can put them into a RAID 10 configuration (ideal for tempdb performance and reliability).
There's really no need to have 8 tempdb data files until you're experiencing tempdb contention, and even then I would suggest adding only 1 new file at a time to see if things improve.
If you split all your space across smaller data files, you may run into issues with DBCC operations like CHECKDB not having enough space available in your tempdb data files to perform CHECKTABLE on your larger tables (unlikely, but still possible).
If you're stuck with 2 disks, I would recommend:
Disk 1
T1: 140 GB (set size, NO AUTOGROWTH)
T2: 140 GB (set size, NO AUTOGROWTH)
Disk 2
Tlog: 260 GB, leaving a few GB to grow
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 15, 2013 at 1:09 pm
MyDoggieJessie (3/15/2013)
There's really no need to have 8 tempdb data files until you're experiencing tempdb contention, and even then I would suggest adding only 1 new file at a time to see if things improve.
I would usually recommend doubling the files when there's contention identified (allocation contention that is), so 1->2->4->8 and so on
If you split all your space across smaller data files, you may run into issues with DBCC operations like CHECKDB not having enough space available in your tempdb data files to perform CHECKTABLE on your larger tables.
Why? TempDB files, like user database files, are used with a proportional fill algorithm, so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.
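Allocation contention of the kind described here shows up as PAGELATCH (not PAGEIOLATCH) waits on tempdb's allocation pages. A quick live check, sketched on the standard assumption that tempdb is database_id 2:

```sql
-- PAGELATCH_* waits on 2:1:1 (PFS), 2:1:2 (GAM) or 2:1:3 (SGAM) indicate
-- allocation contention in tempdb's first data file; run while the load
-- is active, since this DMV reflects only currently waiting tasks.
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH_%'
  AND resource_description LIKE '2:%';
```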
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 15, 2013 at 1:38 pm
Gila, thanks for the quick explanation. We had a Microsoft vendor come in in the past who recommended using 8 data files of equal size, as we have a lot of tempdb contention issues. Also, I am looking at the following article, "http://support.microsoft.com/kb/328551", and it recommends the same. We no longer have the budget to get new hardware. So the thought is to divide the data files across the two drives, place the .ldf on one of the drives at around 5 GB, make the data files around 70 GB each, and set no autogrowth.
March 15, 2013 at 1:57 pm
If it's allocation contention, doesn't matter how many drives the data files are split over, just the number of files. If you have IO contention that's when you need separate IO channels.
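One way to tell the two cases apart: IO contention shows up as high stall times per operation in the virtual file stats, whereas allocation contention shows up as PAGELATCH waits. A rough sketch:

```sql
-- Average IO stall per operation for each tempdb file; consistently high
-- figures (cumulative since instance start) point at the disk subsystem
-- rather than at the number of files.
SELECT mf.name AS file_name,
       mf.type_desc,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id;
```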
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 16, 2013 at 8:01 am
GilaMonster (3/15/2013)
MyDoggieJessie (3/15/2013)
so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.
To add to the above, remember that when adding multiple files to a single filegroup you are forcing SQL Server to stripe across the files. The overhead of maintaining the proportional fill and the constant file switching may actually have an adverse effect on the system.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
March 16, 2013 at 10:07 am
GilaMonster (3/15/2013)
Why? TempDB files, like user database files, are used with a proportional fill algorithm, so there's no difference between 2 files of 140GB and 4 files of 70GB, it's still 280GB total space.
I said this because I've encountered issues with CHECKDB on database servers that have multiple smaller tempdb data files for particularly large databases. Due to space restrictions on the disk subsystem, I couldn't just increase all 8 tempdb data files to a "happy" size; instead I opted for fewer files (4), spreading the space from the removed files evenly across the remaining ones, making them larger. The issue was fixed.
While I can't remember the exact error, it was something like:
"Could not allocate space for object 'dbo.SORT temporary run storage: 140846049001472' in database 'tempdb' because the 'PRIMARY' filegroup is full. Create disk space by..."
______________________________________________________________________________
Never argue with an idiot; they'll drag you down to their level and beat you with experience
March 17, 2013 at 9:12 am
Jessie,
Thanks for your detailed explanation. I was referring to the following article: "http://sqlserverpedia.com/wiki/TempDB". It says: "in SQL Server 2005/2008 it is now recommended to have 1/2 or 1/4 as many files as CPU cores. This is only a guide and TempDB should be monitored to see if PAGELATCH waits increase or decrease with each change." Microsoft recommends the same. I have 32 processors, but my drive space is only around 560 GB (2 drives). My RAM is 64 GB. Can you provide a recommendation for this? Thanks.
March 17, 2013 at 12:27 pm
TempDB files = 1/2-1/4 the number of cores, start with no more than 8, increase if you're still seeing allocation contention (and only split if you're seeing allocation contention to start with)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 17, 2013 at 1:03 pm
Yes Gila, I want to do the same. But per Jessie's experience, it seems we might run into CHECKDB space constraints, so I think I need to experiment with this too. So, Gila, this is my plan: for the 2 drives, each 290 GB in size, I will divide tempdb into 8 data files of equal size across the two drives and place the 1 log file separately.
March 21, 2013 at 10:50 am
Can someone look at this article and let me know whether it is really a good idea to leave tempdb autogrowth ENABLED? It has me wondering.
"http://msdn.microsoft.com/en-us/library/ms175527.aspx"
I am planning to allocate a separate drive for the tempdb data and log files with autogrowth disabled, sizing them to 85% of the drive.
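A middle ground many take here: pre-size the files to nearly fill the drive, but leave a modest fixed-increment autogrow as a safety net rather than disabling growth outright. A sketch, assuming the default logical names (verify against tempdb.sys.database_files):

```sql
-- Inspect the current tempdb file sizes and growth settings first
-- (size is stored in 8 KB pages, hence the conversion to MB).
SELECT name, type_desc, size * 8 / 1024 AS size_mb, growth, is_percent_growth
FROM tempdb.sys.database_files;

-- Pre-size to the planned values, keeping a small fixed autogrow increment
-- as a safety net instead of FILEGROWTH = 0.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 70GB, FILEGROWTH = 512MB);
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, SIZE = 10GB, FILEGROWTH = 512MB);
```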
March 22, 2013 at 5:04 am
DBA_Learner (3/15/2013)
I have a Teradata environment with a database around 3 TB. This server has around 40 processor cores. Microsoft suggested using only 32 processors instead of 40 because of a bug.
Can we restrict the machine to a limited number of cores (like 32 instead of 40)? This is new to me. Any reference link or related article?
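On restricting cores: yes, the SQL Server instance (not the machine itself) can be bound to a subset of CPUs via process affinity. A sketch for SQL Server 2008 R2 and later; the exact CPU numbering below is an assumption and depends on the box's NUMA layout:

```sql
-- Bind the instance's schedulers to CPUs 0-31, leaving 8 cores unused.
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 31;

-- Verify which schedulers are online afterwards (VISIBLE ONLINE/OFFLINE).
SELECT scheduler_id, cpu_id, status
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255;
```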
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)