February 18, 2004 at 1:19 pm
Our solution currently runs on an HP SuperDome with twelve 64-bit processors and 48 GB of memory. Our partitioned fact tables have several hundred MB of data per partition, and we have a well-formed constellation of star schemas. We have been advised to alter tempdb to use multiple data files, which should improve performance for parallel queries and when multiple queries run concurrently...or so we've been told, and as logic would dictate.
Is this guidance correct, and if so, how do we determine the appropriate number of files? We are leaning toward 12, one for each processor.
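For reference, the kind of change we've been advised to make looks roughly like this (file names, paths, and sizes below are placeholders, not our actual layout):

-- Hypothetical sketch: adding extra tempdb data files.
-- Names, paths, and sizes are placeholders only.
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2,
          FILENAME = 'T:\tempdb\tempdev2.ndf',
          SIZE = 4GB,
          FILEGROWTH = 512MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3,
          FILENAME = 'T:\tempdb\tempdev3.ndf',
          SIZE = 4GB,
          FILEGROWTH = 512MB);

-- ...and so on up to the chosen file count, keeping every file the same
-- size so proportional fill spreads allocations evenly across them.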
Please advise.
Kurt Allebach
February 20, 2004 at 6:57 am
Not an answer, just an opinion: I would think the key here is how many different disk drives you have available (probably obscured by the SAN in your case). I can't see how more files on the same drive would improve anything. Curious to see other replies.
June 25, 2009 at 2:33 pm
There are MS articles on this that I can't find at the moment. On 64-bit servers you should have one tempdb data file per CPU, yes. This will give you better performance overall, regardless of whether the workload is OLTP or OLAP.
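A rough way to see where you stand (this assumes SQL Server 2005 or later, where these DMVs exist):

-- Compare the number of visible schedulers (roughly, the CPUs SQL Server
-- can use) with the number of tempdb data files currently configured.
SELECT
    (SELECT COUNT(*) FROM sys.dm_os_schedulers
     WHERE status = 'VISIBLE ONLINE') AS visible_schedulers,
    (SELECT COUNT(*) FROM tempdb.sys.database_files
     WHERE type_desc = 'ROWS') AS tempdb_data_files;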
Steve.
June 25, 2009 at 2:49 pm
Do me a favor - can you open perfmon and compare the SQLServer:Databases / Transactions/sec counter for _Total against the same counter for tempdb? That shows you what percentage of the overall transactions/sec is actually tempdb. I'm curious - why add all of those files if you don't need them? Sure, the best practice is to do so, but I'm at a client right now looking at a prod server with less than 10% tempdb usage...at this point it ain't broke, so I ain't fixin' it. Word up?
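If T-SQL is easier for you than perfmon, something along these lines gives the same numbers (a rough sketch against sys.dm_os_performance_counters; the 30-second sample window is arbitrary):

-- Transactions/sec counters are cumulative, so take two samples and diff.
DECLARE @s TABLE (instance_name nvarchar(128), v1 bigint, v2 bigint);

INSERT INTO @s (instance_name, v1, v2)
SELECT RTRIM(instance_name), cntr_value, 0
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%:Databases%'
  AND counter_name = 'Transactions/sec'
  AND RTRIM(instance_name) IN ('tempdb', '_Total');

WAITFOR DELAY '00:00:30';  -- arbitrary sample window

UPDATE s
SET v2 = p.cntr_value
FROM @s AS s
JOIN sys.dm_os_performance_counters AS p
  ON RTRIM(p.instance_name) = s.instance_name
WHERE p.object_name LIKE '%:Databases%'
  AND p.counter_name = 'Transactions/sec';

-- tempdb's share of overall transaction activity over the sample window.
SELECT
    MAX(CASE WHEN instance_name = 'tempdb' THEN v2 - v1 END) AS tempdb_trans,
    MAX(CASE WHEN instance_name = '_Total' THEN v2 - v1 END) AS total_trans,
    100.0 * MAX(CASE WHEN instance_name = 'tempdb' THEN v2 - v1 END)
          / NULLIF(MAX(CASE WHEN instance_name = '_Total' THEN v2 - v1 END), 0) AS tempdb_pct
FROM @s;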
June 25, 2009 at 2:58 pm
Please note, this is a 5-year-old question.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 25, 2009 at 3:01 pm
Bet my answer ain't 5, cous 😀