Calculating the CPU and RAM requirements for a SQL Server-based application.

  • We have a .NET application that uses SQL Server 2008 R2 Enterprise Edition. For the production implementation, we need to propose the hardware requirements for the database.

    I have already looked into the minimum requirements for a SQL Server installation and the maximums SQL Server can support, but what matters is what our application actually needs.

    So which parameters should we look at so that we can calculate the number of CPUs (and which type of CPU) and the RAM requirements?

    Thanks in advance.

  • This should be a good read to get you started, though, as everyone will say on this, "it depends":

    http://www.sqlservercentral.com/articles/books/71809/

    Thanks for sharing the book. I have gone through the first four chapters.

    They gave me the information on CPUs (and the main properties to consider), chipsets, RAM, and controllers.

    In the storage area (HDD, SSD, etc.), I learned the IOPS of the different storage media and which parts of the database should sit on which type of media.

    There was also a calculation of the number of disks required, based on a formula using the read %, write %, RAID level, and the IOPS required by the application.
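
    If I have understood the book correctly, the calculation goes roughly like this (the write penalty is about 2 for RAID 1/10, 4 for RAID 5, 6 for RAID 6, and a 15K spindle delivers somewhere around 180 IOPS):

        Disks required = (IOPS x Read% + IOPS x Write% x RAID write penalty) / IOPS per disk

    For example, 1,000 IOPS at a 70/30 read/write mix on RAID 10 with 15K disks works out to (700 + 300 x 2) / 180 = 7.2, so 8 disks rounded up, before any headroom.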

    Now the main question: what is the IOPS requirement of our .NET application?

    We have chosen SQL Server 2008 R2 Standard Edition 64-bit on Windows Server 2008 Standard Edition 64-bit.

    Our application (an OLTP application) is new and still in the development stage. We know the following:

    the number of tables, PKs, FKs, and indexes; the data capacity (in GB) for the next three years (full recovery model); the required RAID level; the concurrent users (max and min); and some nightly job processes.

    So how do we calculate the IOPS of this application?

    And how do we calculate the read/write ratio of our application?

    I have seen people use benchmark results, or measurements from an old system, to find the IOPS, but in our case neither exists. Please suggest.

  • If it's in the development stage, then you should be able to run perfmon against your development SQL Server and identify the counters you need; there is also a T-SQL sketch at the end of this post.

    Granted, this won't give you a true picture of the overall requirements: say you have 10 developers but 100 users; the load is going to be significantly lower than what you could see in production, so take it with a pinch of salt.

    So if 10 devs generate 10 IOPS, you could say 100 users will generate 100 IOPS, but give yourself some extra in case you have a massive I/O-intensive process, say an additional 25-50%, making it 125-150 IOPS.

    But it all depends on your load, how you write your queries, and how they access the data.
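
    As a rough complement to perfmon, you can also sample SQL Server's cumulative file-level I/O statistics from T-SQL. Something like the sketch below (untested; substitute your own database name and pick a representative sample window):

        -- Snapshot cumulative file I/O, wait, then diff to get IOPS and the read/write ratio.
        -- sys.dm_io_virtual_file_stats counts since instance startup, hence the two samples.
        DECLARE @s TABLE (database_id INT, file_id INT, reads BIGINT, writes BIGINT);

        INSERT INTO @s
        SELECT database_id, file_id, num_of_reads, num_of_writes
        FROM sys.dm_io_virtual_file_stats(DB_ID('YourAppDb'), NULL); -- hypothetical name

        WAITFOR DELAY '00:05:00'; -- 5-minute sample window

        SELECT f.database_id,
               f.file_id,
               (f.num_of_reads  - s.reads)  / 300.0 AS read_iops,   -- 300-second window
               (f.num_of_writes - s.writes) / 300.0 AS write_iops,
               (f.num_of_reads  - s.reads) * 1.0
                 / NULLIF((f.num_of_reads - s.reads) + (f.num_of_writes - s.writes), 0)
                 AS read_ratio
        FROM sys.dm_io_virtual_file_stats(DB_ID('YourAppDb'), NULL) AS f
        JOIN @s AS s
          ON s.database_id = f.database_id AND s.file_id = f.file_id;

    Run it while the devs are hammering the system, then scale up as described above.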

  • Yes, that's right; IOPS will depend on lots of other things as well, such as:

    How much data is already in RAM

    Structure of your schema (indexes, row width, data types, triggers, etc)

    Nature of your queries (joins, multiple single-row vs. row range, etc)

    Data access methodology (ORM vs. set-oriented, single command vs. batching)

    Ratio of reads vs. writes

    Disk (database, table, index) fragmentation status
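
    The last one is at least easy to measure directly; a quick check that I believe works from SQL Server 2005 onwards:

        -- Logical fragmentation for every index in the current database
        -- (LIMITED is the cheapest scan mode).
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.page_count > 100  -- ignore tiny indexes
        ORDER BY ips.avg_fragmentation_in_percent DESC;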

    I will try to find the IOPS using our developed code.

    But again, there is a scenario where we receive a request for proposal (RFP) and need to submit the technical and financial proposal based on the stated requirements, without access to any system. How do we calculate the IOPS in that case?

    It looks like DBA experience, IOPS figures from previously developed applications, the required storage capacity, and TPC benchmark results will provide the inputs for working out the hardware requirements.

    Your comments please.

  • If this is a brand-new requirement and you don't have anything to base your hardware requirements on, it is best to ask the people who want the application/database what their requirements are. Is it heavy on reads or heavy on writes: 50/50, 60/40, 70/30? What data is being stored, and how much of it? What are the data retention periods? How many users? What indexing strategies, data access methods, etc.?
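
    Once you have those answers, a back-of-envelope conversion into a disk count looks something like the sketch below; every number in it is invented purely to illustrate the arithmetic (it reuses the RAID write-penalty formula from the book mentioned earlier):

        -- Hypothetical inputs gathered from the RFP / stakeholders; adjust to taste.
        DECLARE @users        INT   = 100;   -- concurrent users
        DECLARE @tx_per_user  FLOAT = 0.5;   -- transactions per user per second
        DECLARE @io_per_tx    FLOAT = 20;    -- physical I/Os per transaction
        DECLARE @read_pct     FLOAT = 0.70;  -- 70/30 read/write mix
        DECLARE @raid_penalty FLOAT = 2;     -- RAID 10 write penalty
        DECLARE @disk_iops    FLOAT = 180;   -- one 15K spindle, roughly

        DECLARE @frontend_iops FLOAT = @users * @tx_per_user * @io_per_tx;

        SELECT @frontend_iops AS frontend_iops,
               CEILING((@frontend_iops * @read_pct
                      + @frontend_iops * (1 - @read_pct) * @raid_penalty)
                      / @disk_iops) AS disks_required;

    Then add headroom (25-50%, as suggested above) on top of whatever comes out.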

    Calculating hardware requirements is not an exact science, as things may not always be what they seem.

    This book may also come in handy for you: http://www.sqlservercentral.com/articles/books/71809/

  • Thanks for your valuable comments.

    I am already on chapter 6 (SQL Server Versions and Editions) and will finish it very soon. Many thanks for sharing this informative resource (SQL Server Hardware); it was an eye-opener for me on hardware configuration.

  • If you have to ask this early in the development cycle, then you won't be able to get a good answer this early in the development cycle.

    There are two major ways to handle this:

    1) Generate test data a few times larger than what you expect the production data to look like in N years (10, or three times the designated lifetime of your application, whichever is longer), then benchmark the development version as it gets to a testable state; a data-generation sketch follows at the end of this post. Tune the software a lot, and you'll need less hardware.

    2) Fully specify your requirements, fully document a design, and then calculate how many rows and how much data each SQL call will move around or use; work out how many of each SQL call every action of your app will make, and have a focus group of real users walk through their actual workflow. Calculate the load at each point (startup, login, do X, do Y), calculate the peak and the average/median, then use statistics to calculate aggregate averages and peaks for your projected workforce plus projected enhancements.

    Chances are, you'll be at 1).

    There's also one minor way to handle this:

    0) Generate "normal" amounts of test data (because generating test data is hard and you're on a deadline), and if the app "feels" fast enough in user testing, it's good. After you get into production, it will either screech to a halt immediately, sooner, or, if you're done a good job, later. Buy $150,000 servers and Enterprise SQL licenses to try and compensate, because it's hard to fix the app. Or hire a tuning professional who will tell you to fix your app.
