November 9, 2001 at 8:55 am
If you could design and build the perfect SQL Server 7.0 workhorse, what would it be? I'm looking for disk configurations, disk space, RAM, # of CPUs, the ratio of data space to log space and backups, etc.
Thanks,
Jason
November 9, 2001 at 8:57 am
Without knowing the front-end application or the number of concurrent/max users, etc., what you ask can't be mapped out. In general, two servers in active/passive mode attached to a SAN, with as many CPUs and as much RAM as possible, is what you want.
Wes
November 9, 2001 at 9:01 am
Most of the front-end applications are ColdFusion applications. The remaining ones are VB and Access apps. The number of concurrent users is somewhere in the hundreds.
Jason
November 9, 2001 at 9:57 am
Having just purchased a new server for an MS SQL 2000 installation, I would highly recommend this configuration. It runs as quick as you like.
Compaq ProLiant ML570 with quad Xeon III processors
4 GB memory
2 × 9.1 GB 15K SCSI III in RAID 1 for system
5 × 72 GB 10K SCSI III in RAID 5 for data
4 × 72 GB 10K SCSI III in RAID 5 for logs
2 × 72 GB 10K SCSI III in RAID 1 for backup
3 × 72 GB active failover drives, one for each array
Add into that a SCSI controller with 128 MB cache,
4 network cards with load balancing/failover,
redundant hot-swap power supplies and fans,
and a Compaq Lights-Out remote board to allow remote power-down/up and recovery (recently used it via an iPaq from a night club; saved me a 2-hour drive!).
And add a Compaq 40/80 DLT for tape backup.
This works a treat for a transactionally heavy system with 224 users. I am currently trying to use a second one for replication to another part of our site; one of the problems of working for a port covering 70 sq miles, split in half by a 300 m river. Can't quite get that working.
However, the above server comes highly recommended as a 100-500 user workhorse.
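As a rough worked example, the usable space in those arrays (RAID 5 loses one drive's worth of capacity to parity; RAID 1 mirrors, so you get half) can be checked with a quick query; the figures come from the spec list above, not from measurement:

```sql
-- Usable capacity of the arrays listed above.
-- RAID 5: (drives - 1) * drive size; RAID 1: half the raw total.
SELECT (5 - 1) * 72  AS data_gb,    -- RAID 5 data array:  288 GB usable
       (4 - 1) * 72  AS log_gb,     -- RAID 5 log array:   216 GB usable
       (2 * 72) / 2  AS backup_gb,  -- RAID 1 backup pair:  72 GB usable
       (2 * 9.1) / 2 AS system_gb   -- RAID 1 system pair:  9.1 GB usable
```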
Matt Thompson
November 9, 2001 at 9:58 am
Some guidelines.
Use multiple CPUs from the get-go. Upgrading from single to multi-CPU has never worked well from MS. Leave yourself room to grow, so buy a 4-way, 8-way, or bigger box and start with 2 CPUs.
RAM - as much as you can afford. 1 GB is probably a good minimum these days with prices low; go for 2 GB if you can.
Disk
As many spindles as you can. A big array like an EMC is nice if you have the $$, but may be overkill. Personally, I recommend having at least 3 (preferably 4) physical arrays. I say arrays because you want to spread the load. Also, buy "smaller" drives and more of them: four 9 GB drives work better than one 36 GB drive in general.
Array 1 - Raid 1 - W2K OS, SQL System files, backups can go here.
Array 2 - RAID 5 - Data files, nice to have one of these per user database, but at least have one and as many spindles as you can.
Array 3 - RAID 1 - log files. Again, nice to have 1 per heavily loaded db. The reason: log writes are sequential, not random, so it's nice to have the drive heads relatively stable.
Array 4 - If you can, drop the windows pagefile here. Again, backups and misc can go here.
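A minimal sketch of how that layout translates into file placement, assuming the data array is mounted as E: and the log array as F: (drive letters and the Sales database name are hypothetical):

```sql
-- Data file goes on the RAID 5 array (E:), log on the RAID 1 array (F:),
-- so sequential log writes are not interleaved with random data I/O.
CREATE DATABASE Sales
ON PRIMARY
  ( NAME = Sales_data,
    FILENAME = 'E:\MSSQL7\Data\Sales_data.mdf',
    SIZE = 2000MB,
    FILEGROWTH = 100MB )
LOG ON
  ( NAME = Sales_log,
    FILENAME = 'F:\MSSQL7\Log\Sales_log.ldf',
    SIZE = 500MB,
    FILEGROWTH = 50MB )
```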
Network - at least 2 NICs. Keep one for fault tolerance. You can always have separate IPs for each and use one for backups, management, etc., and one for the workload. If a NIC fails (not likely, but it happens), you can switch the workload over to the other. BTW, use 2 different hubs for these if you can.
Misc:
At least 2 power supplies. Each should run to a different circuit (through a UPS).
Remote card is nice
Have enough backup capacity. Local tape backup is a bad idea, IMHO, but be sure you plan for backups somewhere.
Test system - leave enough $$ for a test system that can be restored from a copy of the live system.
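For the backup planning above, a minimal sketch, assuming the backup array is mounted as G: and a hypothetical Sales database:

```sql
-- Full backup to the backup array, overwriting the previous file,
-- then a log backup appended to its own file on the same array.
BACKUP DATABASE Sales
  TO DISK = 'G:\Backup\Sales_full.bak'
  WITH INIT
BACKUP LOG Sales
  TO DISK = 'G:\Backup\Sales_log.bak'
  WITH NOINIT
```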
BTW, if you are building a workhorse, build a SQL 2K workhorse and run it in 7.0 mode if you have to. Lots of enhancements to the query engine, backup engine, etc. in 2000.
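Running a SQL 2000 box in 7.0 mode is a per-database setting; a sketch using the stock system procedure (the Sales database name is hypothetical):

```sql
-- Make the database behave like SQL Server 7.0:
EXEC sp_dbcmptlevel 'Sales', 70
-- Report the current compatibility level:
EXEC sp_dbcmptlevel 'Sales'
```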
Steve Jones
November 9, 2001 at 2:11 pm
Steve, I disagree about the smaller drives. The larger the better - not because you want to fill them; just the opposite, you want to keep a lot of free space. I like Fibre solutions. In any case, I'd take a hard look at the 15K RPM drives out now. Seek time is EVERYTHING!
I'd also vote for more RAM, both initially and room to grow. I've got 8 GB in a box that will hold 32 GB; nice to have room.
One other point: I don't always agree with a separate RAID 1 for log files. I know the reasoning is sound, but you have to think about your usage. I've got a box with 240+ databases... that would be a LOT of drives. I put all the logs on one mirror set for stability, and figure I lose the advantage of sequential writes. In practice it has made NO difference.
I'd also opt for a big dog caching controller with battery backup. It's fast and you can sleep soundly at night!
Andy
November 9, 2001 at 2:14 pm
So what are your thoughts about two 0+1 RAID arrays, one for data and one for log files? Access times should scream if they are all 15K drives.
November 9, 2001 at 2:19 pm
Andy - I don't want to speak for Steve, but I have always understood that "the more spindles, the faster the throughput." One such statement was something to the effect of "every time you double the number of disks, you increase the throughput (trans/sec) by 50%." Have you seen this to be different? Curious.
David
@SQLTentmaker
"He is no fool who gives what he cannot keep to gain that which he cannot lose" - Jim Elliot
November 9, 2001 at 2:43 pm
At some point you are going to run out of bandwidth on the RAID controller with too many drives. That's why SAN is so attractive: you can fill the bus with data on one HBA connected to the SAN, or two HBAs for multipathing and redundancy. Most SAN devices are self-tuning these days, and with enough disk, the data ends up in a 0+1 config. From talking with EMC, Compaq, and HP, almost all of them end up with this kind of data structure for their drive arrays in a SAN. But I could be wrong. Lord knows it wouldn't be the first time.
Wes
November 9, 2001 at 2:47 pm
O.k., I was somewhat hesitant to start this conversation, but since Wes brought it up, I will. After seeing a presentation on some of the new (future) SAN technology, I was fairly impressed (doesn't take much these days). Throughput on some of these systems seems unbelievable. Are they really as good as they sound? Interested to find out what people have seen or are hearing.
David
November 9, 2001 at 2:50 pm
We have a T3 SAN device from Sun up and running now. It is impressive. The new stuff coming from HP and EMC looks REALLY nice. With the new switching technology and double the data pipe from the new fibre cards, it should make large databases fly. Also, most companies have serverless backups in the works too; that would be nice for all the guys that have fibre-attached tape units.
Wes
November 9, 2001 at 2:55 pm
I did see some information about the Compaq offerings and was highly impressed there as well - a serverless tape backup solution too. The speeds mentioned were very impressive.
Have you heard of any shortcomings?
David
November 9, 2001 at 3:09 pm
Only that Compaq is still SCSI at the backplane. They say it only makes like a 100th of a second difference; I don't know about that. I have set up a Compaq SAN as well, and it is nice to be able to pull drives from anywhere or add in a shelf just by changing two modules. Everybody else does tout fibre all the way to the drives.
November 9, 2001 at 3:16 pm
Thanks Wes! I appreciate the input and I will let you know if I hear anything different about their offering.
David
November 9, 2001 at 3:51 pm
I know some of the newer FC items work great, but I still think the more spindles the better. Let me qualify that with, more controllers and spindles.
With the support for remote databases, I think the SAN options are great; I was actually wondering if clustering would support this. However, even on a SAN, I still think you would want more spindles.
As far as 0+1, it works great if you can afford the $$ for extra drives. RAID 5 for data is a nice compromise.
For logs, I would separate them out ONLY for the heavily used dbs. For ones that are not being slammed in an OLTP environment, I agree with Andy. In my last job, we kept the trading db logs on a separate RAID 1 device because of the activity; all other db logs shared a RAID 1 array.
Keep in mind that the spindles principle helps by spreading the load. You will get more benefit here as the load increases (and the transfer size increases). However, the downside is managing more devices (files/filegroups/etc.). The nice thing about the larger CLARiiON, EMC, etc. arrays is that they can hide much of this complexity and still give you the benefits of more spindles.
Most of us will get by with R1, 0+1 or R5 solutions since we are not taxing the hardware.
Steve Jones
Viewing 15 posts - 1 through 15 (of 24 total)