August 22, 2010 at 12:10 am
Hi,
I have a question about creating and placing TempDB data and log files.
We have 2 quad-core CPUs (8 cores total) and the storage is a SAN.
We have assigned one disk, say T:\, to keep the TempDB data and log files on the same disk.
If I want to have 8 data files (1 per core), do I need to create 1 mdf and 7 ndf files of the same size, plus 1 ldf file (sized to the sum of the 8 data files), all on disk T? Or
should the mdf file go on drive T and the 7 ndf files be created on 7 separate disks? Or
should all 8 data files (1 mdf and 7 ndf) go on disk T and the ldf file on another separate disk, say L?
Please clarify.
Note: The application uses TempDB heavily.
Thanks
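For reference, here is a rough sketch of that third layout (tempdev and templog are SQL Server's default logical names; the paths, folders, and sizes below are just placeholders, not a recommendation):
-- Sketch only: drive letters, folders, and sizes are placeholders
USE master;
GO
-- Resize the existing primary data file so all 8 data files end up the same size
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1024MB, FILEGROWTH = 256MB);
-- Add 7 more equally sized data files on the same T: drive
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);
-- ...repeat for tempdev4 through tempdev8...
-- Move the log file to the separate L: drive (the move takes effect at the next restart)
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf', SIZE = 4096MB);
GO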
August 22, 2010 at 7:38 pm
If TempDB is "used heavily" as you say, then a spindle per CPU would be really good but... the big thing is to get that log file off those disks (IMHO).
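If you want to see how much that shared drive is hurting, a quick check along these lines (just a sketch; database_id 2 is tempdb) shows the I/O stalls per tempdb file:
-- Read/write stalls per tempdb file; high write stalls on the log file point at the shared disk
SELECT mf.name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;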
--Jeff Moden
Change is inevitable... Change for the better is not.
August 22, 2010 at 8:24 pm
It's not always necessary to create tempdb files based on your processor count. You can check the info given by Paul Randal.
If you have a high-speed disk, you can create all the files on a single drive; just make sure that drive is dedicated to the tempdb database alone.
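As a quick sanity check (just a sketch), you can compare the visible CPU count against the number of data files tempdb already has:
-- Compare visible CPUs with the current number of tempdb data files
SELECT (SELECT cpu_count FROM sys.dm_os_sys_info) AS visible_cpus,
       (SELECT COUNT(*) FROM tempdb.sys.database_files WHERE type_desc = 'ROWS') AS tempdb_data_files;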
Regards..Vidhya Sagar
SQL-Articles
August 23, 2010 at 3:00 am
A separate disk for TempDB will be a good option in your case, since it is being used extensively by the application (as you mentioned).
August 23, 2010 at 4:59 am
vidhya sagar (8/22/2010)
It's not always necessary to create tempdb files based on your processor count. You can check the info given by Paul Randal. If you have a high-speed disk, you can create all the files on a single drive; just make sure that drive is dedicated to the tempdb database alone.
I've read that and agree. It doesn't hurt when you're talking about spindles, though. I've found that the more spindles you can get involved (up to a point... I quit when spindles = processors, or before... "It Depends"), the better off you are for parallel/asynchronous access by the read/write heads from multiple processes. If you can't do as many spindles as you'd like, then I'd seriously follow Paul's good recommendations. After all, he wrote a lot of this stuff. 😛
--Jeff Moden
Change is inevitable... Change for the better is not.
August 23, 2010 at 1:03 pm
One other trick is to put TempDB on, say, a pair of local SSDs in RAID 1; keep it off the SAN.
Given the price per SAN spindle, and the (often) small number of GB in use, SSDs can be worth it in this case.
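Relocating tempdb is just an ALTER DATABASE ... MODIFY FILE change pointed at the new volume (the S: drive and folder here are only examples); the files are recreated there at the next service restart. A sketch:
-- Point tempdb at the local SSD volume (example path); takes effect after the next restart
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\TempDB\tempdev.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\TempDB\templog.ldf');
-- Repeat MODIFY FILE for any additional data files, then restart the SQL Server service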
August 23, 2010 at 9:30 pm
Nadrek (8/23/2010)
One other trick is to put TempDB on, say, a pair of local SSDs in RAID 1; keep it off the SAN. Given the price per SAN spindle, and the (often) small number of GB in use, SSDs can be worth it in this case.
I agree. It's like having an old ram disk on steroids. We're setting up to do this at work as we speak.
Still, despite what you do, you have to obey the "Rule of Ron White"... "You can fix ugly... you can't fix stupid." 😀 What I mean by that is that you can, many times, add some absolutely super hardware and still have virtually no effect on the real problem of "stupid" code. Let's think about this... you double disk speed by going from, say, 7,500 RPM disks to 15,000 RPM disks, you boost CPU performance by 50% by going from 2 GHz to 3.1 GHz, and you double the number of cores. Did you really do a good thing by making that 10-hour run drop to "only" 5 or 6 hours? Does it really help when it jumps back up to 10 hours in less than 6 months because it's reached the next "tipping point"?
Performance is in the code. If you spend a little time at it, you can probably make that 10-hour job run in a couple of minutes or less. On a particularly complex dupe-check job that took 10 to 24 hours and sometimes failed outright, I increased the amount of data being checked by 50% and still knocked the time down to 11 minutes for that month-end run. The daily runs usually took 45 minutes and, after the rewrite, took only 7 seconds. And instead of almost daily failures with reruns required, the new code hasn't failed in the 3 years it's been in service.
It's important to note that the system went through a hardware change similar to the one I described above while the old code was still in service... the old code still used to take 20 to 36 hours to run or fail. Hardware doesn't "fix" crap code.
Performance is in the code.
--Jeff Moden
Change is inevitable... Change for the better is not.