June 9, 2009 at 6:05 am
Here's an open question for the group. If you were tasked with creating the perfect database server for a high volume OLTP database for SQL Server 2005, how would you put it together?
Alternatively, what would you need to have specified about the server to answer this question 🙂
I guess the question that underlies my question is: how do you perform server sizing when you need to set up a SQL Server database?
June 10, 2009 at 8:16 am
I guess the reason no one has responded is that there is no answer.
You design a solution to meet requirements. You have asked a pie-in-the-sky question about perfection; there is no such thing. Does a solution meet your requirements, yes or no? Even if the answer is yes, it may still be possible to improve it.
If you're meeting your requirements, does it matter whether it's perfect? No.
--
Andrew Hatfield
June 10, 2009 at 8:29 am
Fair enough. However, I'm still quite interested in how one would go about determining the correct hardware for a given set of requirements.
What sort of specifics would the requirements need to contain in order to determine what hardware to purchase? Would you need to know how many transactions per hour, etc.?
Yes, I know this seems a bit of a silly question, but the truth is I see a lot of people who aren't sure what sort of hardware to purchase when they buy a multi-tiered application from a software vendor. And most vendors just specify at random, so really they aren't a good guide at all.
July 3, 2009 at 3:17 pm
I think I would take this from the I/O perspective, as that is the usual bottleneck, especially in an OLTP system where concurrency is paramount.
I'm not sure what a number of transactions per hour or per second actually gives you. How many bytes of data will need to be pumped through seems more relevant.
Knowing as much as possible about the database design is paramount: accurate sizings of average row size per table, the expected number of rows per table, and expected growth rates, so you can build the database at the right size to handle a good few years' worth of data.
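To make that concrete, here's a rough back-of-envelope sketch (the row size and row count below are purely illustrative, not figures from this thread):

-- Hypothetical sizing estimate: SQL Server stores rows on 8 KB pages,
-- with roughly 8,060 bytes of each page usable for in-row data.
DECLARE @AvgRowBytes int, @ExpectedRows bigint
SET @AvgRowBytes  = 250         -- assumed average row size for one table
SET @ExpectedRows = 50000000    -- assumed rows after a few years of growth
SELECT CEILING(1.0 * @ExpectedRows / (8060 / @AvgRowBytes))                 AS est_pages,
       CEILING(1.0 * @ExpectedRows / (8060 / @AvgRowBytes)) * 8 / 1048576.0 AS est_gb

Repeat that per table, add index and free-space overhead, and you have a first cut at data file sizing.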
Then, so you can lay the database files out correctly, in the right number of filegroups/files and spread over the right number of disks, you need to know which tables are read-only and static, which are heavily updated, which are heavily read but also updated, what the indexing is like, and which tables are joined a lot in queries.
Please note you will NEVER have all this information up front. 🙂
After that, get the best you can afford. The minimum would seem to be four quad-core processors and 4 GB of memory. I say that because that's what the standard editions of the OS and SQL Server restrict you to. It's quite impressive what that setup can cope with.
---------------------------------------------------------------------
July 4, 2009 at 2:11 am
Well, the perfect database server has as many processors as possible and as much memory as possible.
If all your databases can be held in memory, that is the best option.
When your database is 28 GB, have 64 GB of memory, and so on...
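As a purely illustrative sketch (the numbers are examples, not a recommendation), you would then cap SQL Server's memory so the OS keeps some headroom:

-- Illustrative only: leave headroom for the OS, give the rest to SQL Server.
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 57344   -- ~56 GB of a 64 GB box
RECONFIGURE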
N 56°04'39.16"
E 12°55'05.25"
July 4, 2009 at 6:38 pm
In the face of bad software, improperly written T-SQL, and poorly designed databases, the rating of the hardware and its support systems means absolutely nothing.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 6, 2009 at 7:07 pm
The only perfect database server is the one which never gets switched on; or at the very least never has any users* on it 🙂
Paul
* especially database administrators
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 8, 2009 at 5:22 am
Jeff Moden (7/4/2009)
In the face of bad software, improperly written T-SQL, and poorly designed databases, the rating of the hardware and its support systems means absolutely nothing.
I'll remember that the next time someone installs SQL Server 2005 on a server running a Pentium II with 512 MB of RAM and software-based RAID to run multiple high-use, transaction-intensive databases 🙂
Seriously, tuning is definitely the first port of call, but when setting up a database server it's still important to get the hardware as right as possible so you don't have to migrate to new hardware later on.
July 8, 2009 at 6:24 am
george sibbald (7/3/2009)
I think I would take this from the I/O perspective, as that is the usual bottleneck, especially in an OLTP system where concurrency is paramount. I'm not sure what a number of transactions per hour or per second actually gives you. How many bytes of data will need to be pumped through seems more relevant.
Knowing as much as possible about the database design is paramount: accurate sizings of average row size per table, the expected number of rows per table, and expected growth rates, so you can build the database at the right size to handle a good few years' worth of data.
Then, so you can lay the database files out correctly, in the right number of filegroups/files and spread over the right number of disks, you need to know which tables are read-only and static, which are heavily updated, which are heavily read but also updated, what the indexing is like, and which tables are joined a lot in queries.
Please note you will NEVER have all this information up front. 🙂
After that, get the best you can afford. The minimum would seem to be four quad-core processors and 4 GB of memory. I say that because that's what the standard editions of the OS and SQL Server restrict you to. It's quite impressive what that setup can cope with.
That's interesting info. I work for a software vendor, and we're always trying to work out different workloads... that's why I asked here, because this is where the experts hang out!
I've been doing some reading, and it appears that benchmarking is key. It's just knowing what to capture that's a bit tricky!
July 8, 2009 at 7:28 am
That's interesting info. I work for a software vendor, and we're always trying to work out different workloads... that's why I asked here, because this is where the experts hang out!
I've been doing some reading, and it appears that benchmarking is key. It's just knowing what to capture that's a bit tricky!
Everyone has their own favorites, but here's my starter for 10:
Avg. Disk Read Queue Length and Avg. Disk Write Queue Length for each drive (if on a SAN you will need the SAN team's help to interpret these, and you need to take the RAID setup into account anyway)
Avg. Disk sec/Read and Avg. Disk sec/Write per drive
Disk Reads/sec and Disk Writes/sec per drive
Avg. Disk Bytes/Read and Avg. Disk Bytes/Write
Memory\Available MBytes
Memory\Pages/sec
Buffer cache hit ratio (for old times' sake)
User Connections
SQL Server Memory Manager: Lock Memory (KB), Total Server Memory (KB), and Target Server Memory (KB)
I leave network stuff to the network guys.
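If you want to spot-check the SQL-side counters from T-SQL rather than PerfMon, something like this works (the PhysicalDisk and Memory\Available MBytes counters are OS-level, so those still need a PerfMon counter log):

SELECT object_name, counter_name, instance_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('User Connections',
                       'Lock Memory (KB)',
                       'Total Server Memory (KB)',
                       'Target Server Memory (KB)')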
If you want to get fancy, you could run a Profiler trace based on the Duration template.
It's also useful to look at disk activity per database using sys.dm_io_virtual_file_stats (or the older fn_virtualfilestats).
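Something along these lines shows which files do the most work and where the I/O stalls are (figures are cumulative since the instance last started):

SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name                  AS logical_file,
       vfs.num_of_reads, vfs.num_of_bytes_read, vfs.io_stall_read_ms,
       vfs.num_of_writes, vfs.num_of_bytes_written, vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC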
---------------------------------------------------------------------
July 8, 2009 at 4:09 pm
Well, my philosophy is that everything counts...
HW, NW, DB, maintenance, dev, security, users, workflow, redundancy, documentation... and of course money.
Just for the basics... I'd like to make sure that the network has good, clean bandwidth.
Yes, I know, the network engineers are in charge of this, but check with them, review what you have,
and push their buttons a bit so they know you're interested.
Hopefully you've been able to convince your management to have a dedicated DB server.
CPU is an important factor, but not absolute... remember that vendors charge per CPU and core.
I like dual, but I can get away with one good one.
Of course, RAM... it's cheap these days; 4 GB without having to go to another OS version (techs cringe at this for some reason).
I always like to emphasize I/O to them... they always forget this. Good motherboard, good I/O card, good hard disks, etc.
What you'll get depends on your funding... SAN, RAID type, etc.
Of course, storage... but it's relatively cheap again these days... just remember to account not only for the DB but also for installations, dev files, growth, backups, play room, etc., etc.
On the install, make sure you've played with it a few times; only put in what you need,
but don't forget to install the utilities that will make your life easier... Management Studio, Integration Services, etc.
Plan out where you're going to put it... I usually avoid installing on C: (which MSSQL does a bit anyway).
Maintenance/admin... defrag your HD partitions; techs forget this too (I like Auslogics BoostSpeed).
Then updating stats, CHECKDB, backups, redundancy, shrinks, index defrags, etc., etc., etc.
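A bare-bones sketch of the kind of jobs I mean (database and table names are placeholders):

BACKUP DATABASE MyAppDB TO DISK = 'D:\Backups\MyAppDB.bak'   -- full backup
DBCC CHECKDB ('MyAppDB') WITH NO_INFOMSGS                    -- integrity check
UPDATE STATISTICS dbo.Orders WITH FULLSCAN                   -- refresh stats
ALTER INDEX ALL ON dbo.Orders REBUILD                        -- or REORGANIZE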
Users... how many total, average concurrent, max concurrent (national deadline submissions are a killer).
Workflow... How are they using it? When are they using it? Fluctuations across times, days, weeks, months, the year.
What processes, reports, etc. hit it the hardest? These will factor into your setup.
I like monitoring it with real-time tools like Quest's Spotlight to see its heartbeat (both DB and OS/HW, btw).
Dev... make sure your developers have a list of best practices that won't kill your performance:
avoid cursors, avoid functions in the WHERE clause, return minimal output, etc., etc., etc. (see the example below).
There are easy lists to give them.
Make sure they know they are playing with physics, not just logic.
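For instance, the function-in-the-WHERE-clause problem looks like this (table and column names made up):

-- Non-sargable: the function on the column prevents an index seek.
SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2009

-- Sargable rewrite: returns the same rows, but an index on OrderDate can be seeked.
SELECT OrderID FROM dbo.Orders
WHERE OrderDate >= '20090101' AND OrderDate < '20100101'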
Oh, and also make sure they are creating their objects to certain standards
(naming, structure, etc.... make them put darn "created by", "modified by", and datetime columns on all tables).
Security... the basics at least: decent passwords, audits, Windows auth with at least a couple of SQL logins for you and yours.
Document this somewhere secure... because you will forget.
I like having certain automated alerts/logs to tell me what's happening and the history of things:
backups, maintenance, audits, usage.
If you can get them to fund you... have a redundant system, hopefully with some basic replication.
At least have another server you can switch to...
The key is to keep ahead of it and keep it quiet.
Avoid the scene where you are drooling at 3 in the morning with five managers behind you asking "is it done yet?" and "how could you let this happen?"...
Which brings me to documentation... document everything. Nothing is unimportant.
Because there will come a time when you WILL be drooling... and everyone else is shrugging their shoulders: "I don't know."
Have I spouted off enough now?? If I had a penny for every time I argue with management on these points...
I'll probably remember more later.
July 8, 2009 at 4:15 pm
Man, it won't delete...3 posts??? You guys are going to kill me.