August 16, 2007 at 7:52 am
Does anyone know off the top of their head some links to Microsoft pages showing recommended filegroup placement?
We are trying to convince our hardware guys that we should have separate drives for data / log / tempdb, etc., and need some ammunition.
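Part of that ammunition can be showing exactly where every file sits today. A minimal sketch, assuming SQL Server 2005 (sys.master_files; on 2000 you would query master..sysaltfiles instead):

-- List every database file with its drive and path, so a single-drive
-- layout is visible at a glance. sys.master_files is SQL Server 2005+.
SELECT  DB_NAME(database_id) AS database_name,
        name                 AS logical_name,
        type_desc,                -- ROWS (data) or LOG
        physical_name             -- full path, including drive letter
FROM    sys.master_files
ORDER BY LEFT(physical_name, 1), database_name;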
Thanks,
Jason
The Redneck DBA
August 16, 2007 at 8:03 am
This is a really good source... there's a brief paragraph on the one-filegroup-per-disk approach and such:
http://www.sql-server-performance.com/articles/per/optimize_filegroup_performance_p2.aspx
Lowell
August 16, 2007 at 8:59 am
Another thing to add to Alexander Chigrik's article is that SQL Server 2005 puts a larger load on tempdb than 2000 does.
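If you want to see that tempdb load for yourself, here is a minimal sketch, assuming SQL Server 2005 (the sys.dm_db_file_space_usage DMV reports only on tempdb there):

-- Break down tempdb page usage; the version store is the part that 2005's
-- row-versioning features add on top of what 2000 already used tempdb for.
-- Counts are in 8 KB pages, so multiply by 8 for KB.
SELECT  SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
        SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
        SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
        SUM(unallocated_extent_page_count)       * 8 AS free_kb
FROM    sys.dm_db_file_space_usage;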
Regards,
Andras
August 16, 2007 at 9:22 am
Thanks for the tips.
In looking at all sorts of resources, I get conflicting information all over the place (even within Microsoft). One of the big things I see is different places recommending RAID 1 and others recommending RAID 5.
I've always thought 5, but I can see the argument for 1. What do y'all think?
The Redneck DBA
August 16, 2007 at 9:33 am
From what I've learned, RAID 5 is slower than RAID 1 for writes because parity has to be calculated and written across the stripes to allow for recovery, so it can potentially degrade performance. On the other hand, the striping in RAID 5 allows a disk to go down and be replaced while the array is still hot.
If your application is light on reads and writes, RAID 5 is good, but if there is a lot of write activity, you'd want to trade a little failover capability for daily performance.
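To see which side of that trade-off a given volume is on, here is a minimal sketch, assuming SQL Server 2005 (sys.dm_io_virtual_file_stats joined to sys.master_files; nothing here is specific to any one setup):

-- Reads and writes per database file since the instance last started,
-- to judge whether a volume is read-heavy (RAID 5 is usually fine)
-- or write-heavy (RAID 1 / RAID 1+0 fares better).
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.num_of_bytes_read,
        vfs.num_of_bytes_written
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
          ON  mf.database_id = vfs.database_id
          AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_writes DESC;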
Lowell
August 16, 2007 at 9:35 am
As always the answer is: it depends
For your tempdb: if it is not write intensive, RAID 5; if it is write intensive, RAID 1+0 or RAID 0 (the latter obviously not a choice if system availability is critical).
For log files: RAID 1+0, or RAID 1 (better write performance).
For data: it depends on how write intensive it is.
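Once dedicated drives exist, pointing tempdb at them is straightforward. A minimal sketch (the T: and L: drive letters are hypothetical; tempdev and templog are tempdb's default logical file names):

-- Move tempdb's data and log files to dedicated volumes.
-- The new paths take effect the next time the SQL Server service restarts.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\templog.ldf');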
Andras
August 17, 2007 at 7:23 am
I must have better RAID controllers than most, as I consistently see better write and read performance with striping across 3 or more drives with at least one hot spare. For medium systems I have increased the number of drives being written to or read from, and it has always increased performance. For large, truly enterprise systems I use a SAN, fiber attached and through a switch when sharing it with other servers.
If we are not booting from a SAN or other network device, we mirror the boot drive, place the mdf files on one volume (one HBA or channel of a RAID controller), and place the ldf files on another volume (a separate HBA or channel of a RAID controller). In the case of a SAN, the boot drive goes on one volume and the mdf and ldf files on another.
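For a new database, that split can be declared up front. A minimal sketch (database name, paths, and sizes are hypothetical):

-- Put the .mdf on one volume/channel and the .ldf on another.
CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\MSSQL\Data\SalesDB.mdf', SIZE = 10GB)
LOG ON
    (NAME = SalesDB_log,  FILENAME = 'E:\MSSQL\Logs\SalesDB.ldf', SIZE = 2GB);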
[font="Arial"]Clifton G. Collins III[/font]