August 25, 2009 at 10:11 am
I'm not sure if this is the right place for this topic, but here goes. I'm buying a new server that we will use to install SQL Server 2008. We develop an app that puts a lot of strain on the disks, so high disk I/O write speed is crucial. I'm looking at an HP DL580 G5 with 16GB of RAM and capacity for 16 SAS drives. I've come up with three drive configuration scenarios, and am hoping someone has the same setup or can give some advice as to why these setups are or aren't acceptable. All drives will be SAS 15K.
Config 1:
OS/SQL Binaries: 4-72GB | RAID 10 | ~144GB Usable
SQL Data Files: 8-146GB | RAID 10 | ~584GB Usable
SQL Log Files: 4-72GB | RAID 10 | ~144GB Usable
Config 2: (giving the TempDB data file its own array)
OS/SQL Binaries: 4-72GB | RAID 10 | ~144GB Usable
SQL Data Files: 4-146GB | RAID 10 | ~292GB Usable
TempDB: 4-72GB | RAID 10 | ~144GB Usable
SQL Log Files: 4-146GB | RAID 10 | ~292GB Usable
Config 3: (separating system data files, including TempDB, from user data files)
OS/SQL Binaries: 4-72GB | RAID 10 | ~144GB Usable
Sys Data Files: 4-72GB | RAID 10 | ~144GB Usable
User Data Files: 4-146GB | RAID 10 | ~292GB Usable
SQL Log Files: 4-146GB | RAID 10 | ~292GB Usable
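(In each configuration, RAID 10 usable capacity is half the raw capacity, since every stripe is mirrored: e.g., 8 x 146GB = 1,168GB raw -> ~584GB usable.)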
Any and all insight is greatly appreciated!
August 25, 2009 at 12:00 pm
Config 2!!!!
As a rule, I like logs on their own drive and I like tempdb on its own drive.
It really depends on tempdb reads/writes, but you're doing well by placing the other data and log files on separate volumes for high-write OLTP systems. Config 3 would be OK too.
One thing to think about is filling up a drive and what you'd like to do for that situation.
It's probably more likely to happen than a full SQL Server recovery.
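For what it's worth, if you do give tempdb its own drive, relocating it is simple. A minimal sketch, assuming the default logical file names tempdev/templog and a hypothetical T: drive; the move takes effect after a service restart:

USE master;
GO
-- Point the tempdb data and log files at the new drive (paths are placeholders)
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
GO
-- Restart the SQL Server service; tempdb is recreated at the new location on startup.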
~BOT
Craig Outcalt
August 25, 2009 at 1:31 pm
Thanks for your input, BOT. I'm not too worried about filling up the drives. This is for development purposes, so no mission critical databases to worry about. I have other (slower) servers that I can use to shuffle databases around should it fill up. I didn't see any 300GB 15K SAS drives on the HP configuration page, but I know they make them. I would feel better with a little more space.
August 26, 2009 at 10:07 am
I would go with the second configuration. But one more thing I would like to add: generally we prefer keeping the index files on a separate drive from the data files. This will help improve performance. So you may consider putting indexes on the tempdb drive, or having one more drive only for indexes.
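For example (a sketch only; the database, path, and index names below are hypothetical), you would add a filegroup backed by a file on the index drive and build the indexes there:

-- Create a filegroup on the dedicated index drive (I: is a placeholder)
ALTER DATABASE MyAppDB ADD FILEGROUP Indexes;
ALTER DATABASE MyAppDB ADD FILE (
    NAME = MyAppDB_Indexes,
    FILENAME = 'I:\SQLIndexes\MyAppDB_Indexes.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP Indexes;
GO
-- Nonclustered indexes can then be placed on that filegroup explicitly
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    ON Indexes;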
August 27, 2009 at 4:44 am
Good point, Apurva. I do that with our Oracle servers, but for some reason I have never done it with our SQL Servers. Our main application has tons of indexes, so it would help to separate them.
August 27, 2009 at 6:57 am
If you are going with the configurations listed, I would pick 1 or 2, but use only 2 drives for the OS and throw the other two 72GB drives in with the log/tempdb arrays, respectively.
What "I" would do however is to benchmark the IO patterns of my existing server (if you have one online already) then TEST the IO throughput using SQLIO to verify which configuration was best for those IO patterns. If you don't know what you will be seeing with your app, I would do a shotgun test of many different read/write types against each configuration.
I will note you have left out one main configuration: 2 drives for the OS and all other drives together in RAID 10. Aggregate spindles can really make a difference, and let's face it - you really don't have many drives to work with. I routinely see people carve the hell out of their drives with lots of 4-drive RAID 10 sets, and performance is awful. That is why option 1 could well be the best for overall performance (admitting that occasionally you will hit a spike where it is worse than option 2).
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
August 28, 2009 at 5:35 am
I thought about just mirroring the drives the OS and SQL binaries are on, but was hoping to eke out slightly faster read speeds using RAID 10. If it won't make that much of a difference, then I would much rather have those drives used for SQL data files.
I'm a little confused as to why you would recommend aggregating all of the drives for the data and log files. This goes against what Microsoft recommends. I could understand doing it with a SAN, where you have 30 or more drives dedicated to SQL Server, but with only 14 it seems like a bad idea. I'm glad you brought it up; hopefully others will chime in with their opinions. It would give me the option to use 14 146GB drives, thus giving me more space. I do like that. Do you run this configuration yourself? If anyone else has done this type of setup, please let me know. I'm very curious now.
I do have the option of plugging the server into a Compellent SAN via iSCSI, however, that SAN will also be used as a file server, VM storage, and who knows what else.
September 6, 2009 at 12:02 pm
it depends ;-))))
I assume that you're going to buy 16x 73GB 15K rpm SAS drives and that you have only one production database.
First of all: OS/SQL binaries: 2 disks -> RAID 1 -> 73GB (enough even for Win 2008 x64).
Now we have 14 disks. I don't know your environment, but by default I would try something like this:
tempdb: 4 disks -> R10
log: 4 disks -> R10
data: 6 disks -> R10
That is good for an OLTP system under high stress. Maybe your database doesn't need it? Maybe better would be:
tempdb: 2 disks -> R1
log: 2 disks -> R1
data: 10 disks -> R10
Sometimes there is a need to use the S.A.M.E. (Stripe And Mirror Everything) strategy:
tempdb + log + data: 14 disks -> R10
Give us more information about your environment.
July 12, 2010 at 10:21 pm
oralinque (8/28/2009)
I'm a little confused as to why you would recommend aggregating all of the drives for the data and log files. This goes against what Microsoft recommends.
Sorry to drag this out of the deep dark depths of the system, but I am intrigued by this. I have recently been put in front of a SQL installation (OLTP) that has been built with one RAID 10 array servicing all the data files, log files, and tempdb on a single drive. I would traditionally have separated these, but the performance of RAID 10 has me wondering about the value of pulling this configuration apart and rebuilding.
Do people have any opinions and reasons on this? Keen to hear some discussion.
Cheers
Troy
July 12, 2010 at 11:41 pm
Hi Troy,
If performance is good enough, and you're not seeing much in the way of IO stall, you're probably OK. If the database has little TempDB usage, and little memory pressure (either the entire DB, or the most commonly used parts, sit in memory), and there isn't much write activity going on, then there's no real reason to rip the array apart and rebuild. If performance is good enough to get it to the next server upgrade, it would be easiest to fix it then.
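For reference, a quick way to check those stalls per file (SQL 2005 and up) is the virtual file stats DMV; a sketch:

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,  vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id     = vfs.file_id
-- Files with the worst cumulative stall (since the last restart) come first
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;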
If you are suffering performance issues, I'd suggest making sure the necessary indexes are available (to prevent table scans), and adding more RAM, to cache more data and reduce contention between the log and the data files. These are the cheapest and least intrusive ways of increasing performance.
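On the index side, the missing-index DMVs are a rough starting point for finding candidates (treat the output as hints to investigate, not a to-do list); a sketch:

SELECT mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
-- Rank by a rough "how much would this index have helped" score
ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;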
July 13, 2010 at 2:29 pm
Thanks for the feedback, Jim; that is exactly what I was wondering - the cost-benefit of rebuilding versus just living with it. My main concern (from a layman's perspective on I/O) was that the disk controllers could become the bottleneck going forward, with all the reads/writes going through the same pipe.
There are no immediate issues; everything looks fine, but it concerned me enough to consider a rebuild, though I was not looking forward to redoing it all from the ground up.
Will just leave it as is - unless others have contradicting views - until the next build and then sort it out properly
Thanks again
Cheers
Troy
July 13, 2010 at 2:33 pm
There's no mention of backup files - where do you intend to put those?
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉