September 1, 2011 at 7:19 pm
Hi,
For physical servers, the best practice for optimum performance is that the number of database data files should equal the number of cores per CPU socket.
But how does it work for a SQL Server instance on a VM?
Our physical ESX server has 2 CPUs with 6 cores. The VM for SQL Server has been provided with 2 vCPUs (virtual processors).
Given the above configuration, what is the best practice for the number of data files?
Please advise.
Thanks
September 1, 2011 at 8:15 pm
I would say go with the number of cores showing up in the VM. I do not think it would do you any good to go by the number of cores that ESX has if the VM can only access two of them - you would only be able to get two threads.
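If it helps, a quick sanity check is to ask SQL Server itself how many schedulers it can see inside the VM (just a sketch using the standard DMVs):

-- How many CPUs does SQL Server see from inside the VM?
SELECT cpu_count, hyperthread_ratio
FROM sys.dm_os_sys_info;

-- Or count the schedulers actually available for user work
SELECT COUNT(*) AS visible_online_schedulers
FROM sys.dm_os_schedulers
WHERE status = 'VISIBLE ONLINE';

With 2 vCPUs presented to the guest you should see 2 there, which is the number I would size against.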
Joie Andrew
"Since 1982"
September 1, 2011 at 11:02 pm
gmamata7 (9/1/2011)
For physical servers, the best practice for optimum performance is that the number of database data files should equal the number of cores per CPU socket.
Note: this best practice applies only to tempdb data files, not to other database data files.
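For reference, adding tempdb files looks like this; the logical names, path and sizes below are placeholders only, not a recommendation:

-- Placeholder example: add extra tempdb data files (adjust names, path, size and count for your system)
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);

ALTER DATABASE tempdb
ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);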
September 2, 2011 at 12:35 am
Note: this best practice applies only to tempdb data files, not to other database data files.
Can you provide a link that says this best practice applies only to tempdb data files?
September 2, 2011 at 12:42 am
September 2, 2011 at 2:40 am
Suresh B. (9/1/2011)
gmamata7 (9/1/2011)
For physical servers, the best practice for optimum performance is that the number of database data files should equal the number of cores per CPU socket.
Note: this best practice applies only to tempdb data files, not to other database data files.
And even for TempDB that's often overkill.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 2, 2011 at 2:58 am
It's really a case of "it just depends". The theory behind multiple files is that each CPU core can support a read/write thread, so in theory you can get better throughput with multiple files. For tempdb it's based upon how many objects are being created.
For user databases this depends upon a whole multitude of factors, and in some circumstances it can degrade performance quite badly, although that's not always the case.
You shouldn't apply this blindly; only make changes with testing so you know they make a difference (the per-file stats query below is one way to check).
We're currently looking at multiple files for one large database, but that's because the server has 6 Fusion-io cards and it's been suggested that multiple files may work better than using software RAID and one file.
With a VM it probably won't make any difference, because performance is so poor anyway.
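If you do test it, one simple way to see whether the load actually spreads across the files is the per-file I/O stats DMV (sketch only; 'MyBigDB' is a made-up database name):

-- Per-file reads, writes and stall times for one database (replace 'MyBigDB')
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS logical_file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(DB_ID('MyBigDB'), NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id;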
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
September 2, 2011 at 3:02 am
gmamata7 (9/1/2011)
For physical servers, the best practice for optimum performance is that the number of database data files should equal the number of cores per CPU socket.
You shouldn't blindly believe everything MS tells you. As PR states in his post linked above, various blogs and posts from MS employees hit the internet, and in some cases they're not all correct. Research and testing will tell you what's best 😉
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
September 2, 2011 at 3:30 am
colin.Leversuch-Roberts (9/2/2011)
It's really a case of "it just depends". The theory behind multiple files is that each CPU core can support a read/write thread, so in theory you can get better throughput with multiple files.
That would be a good reason if SQL had one read/write thread per file. However, it doesn't.
http://blogs.msdn.com/b/psssql/archive/2007/02/21/sql-server-urban-legends-discussed.aspx
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 2, 2011 at 5:18 am
OK, so I described it poorly. However, it's still possible to get better performance on a database by using multiple files, and you're more likely to benefit with lots of cores vs a single core (taking things to extremes). Perhaps it would be better to say that multiple files can help negate serialisation of object allocation, thus speeding up concurrent access.
I've certainly been able to prove through testing that this is true for tempdb in some of the applications I've supported. As for user databases it is much more difficult and usually depends upon your storage; as I say, we're looking into multiple files with Fusion-io cards because of the PCI bandwidth.
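If anyone wants to check the allocation-serialisation point on their own system, a rough sketch is to watch for PAGELATCH waits on tempdb pages while the workload runs (resource_description is dbid:fileid:pageid, so the leading 2: filters to tempdb):

-- Tasks currently waiting on page latches in tempdb (database_id 2)
SELECT session_id,
       wait_type,
       wait_duration_ms,
       resource_description
FROM sys.dm_os_waiting_tasks
WHERE wait_type LIKE 'PAGELATCH%'
  AND resource_description LIKE '2:%';

If those waits pile up on the allocation pages (PFS/GAM/SGAM), that's the contention extra tempdb files are meant to relieve.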
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/