Make a decision on the number of data files on different drives on a SAN

  • Hi,

    We are planning to build an environment for SharePoint 2010.

    We are ready with a two-node active/passive cluster setup on Windows Server 2008 R2 with two quad-core physical processors (i.e. 8 cores).

    Now we need to decide on the number of drives (disks), and I'm planning to request the layout below:

    1. C - OS

    2. D - Local disk to install and keep the SQL Server binary files

    3. We can have the number of data files <= the number of processors, but I'm considering only 4 data files to start with. Please advise me here: if we must create 8 data files, I can request 8 different disks on which to place the 8 data files for the content database (see the sketch after this list).

    E - Primary data file

    F - Secondary data file

    G - Secondary data file

    H - Secondary data file
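
    As an illustration only, here is a minimal T-SQL sketch of a content database with one primary and three secondary data files spread across the E, F, G, and H drives. The database name, file paths, sizes, and the L: log drive are placeholders, not values from this thread:

        -- Hypothetical content database with data files on four separate drives.
        -- Names, paths, and sizes are placeholders; adjust to your environment.
        CREATE DATABASE WSS_Content_Main
        ON PRIMARY
            (NAME = WSS_Content_Main,   FILENAME = 'E:\SQLData\WSS_Content_Main.mdf',   SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = WSS_Content_Main_2, FILENAME = 'F:\SQLData\WSS_Content_Main_2.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = WSS_Content_Main_3, FILENAME = 'G:\SQLData\WSS_Content_Main_3.ndf', SIZE = 10GB, FILEGROWTH = 1GB),
            (NAME = WSS_Content_Main_4, FILENAME = 'H:\SQLData\WSS_Content_Main_4.ndf', SIZE = 10GB, FILEGROWTH = 1GB)
        LOG ON
            (NAME = WSS_Content_Main_log, FILENAME = 'L:\SQLLogs\WSS_Content_Main_log.ldf', SIZE = 5GB, FILEGROWTH = 1GB);

    SQL Server fills the files in a filegroup proportionally to their free space, so keeping the four files the same size keeps the I/O spread even.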

    4. TempDb settings.

    The number of tempdb data files should equal the number of CPU cores, and the tempdb data files should be set to an equal size.

    So I would like to request 9 different disks: 8 for the tempdb data files and one for the tempdb log file (a configuration sketch follows below).
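
    Purely as a sketch of what that configuration could look like in T-SQL (the file names, drive letters, paths, and sizes below are all hypothetical):

        -- Hypothetical tempdb layout: eight equal-size data files plus the log.
        -- Resize the existing first data file (assumed already on its own drive):
        ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 512MB);

        -- Add the second data file on its own drive:
        ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'J:\TempDB\tempdb2.ndf', SIZE = 4GB, FILEGROWTH = 512MB);

        -- ...repeat ADD FILE for tempdev3 through tempdev8 on drives K: through P:...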

    Please advise me on the above requirements.

    All the storage is on a SAN.

    If we want to use mount points, could you please advise me how the E, F, G, and H drives can be created?

    Thanks

    I would consider grouping content databases on separate mount points instead of splitting databases into secondary files. This will spread out your I/O and give you flexibility in addressing I/O concerns for particular databases; a sketch of this layout follows below.
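
    Mount points are empty NTFS folders on a host volume into which additional LUNs are mounted (via Disk Management or mountvol), so E, F, G, and H could just as well be folders under a single drive letter rather than separate letters. As a minimal sketch, with hypothetical mount-point paths and database names, one content database per mount point could look like:

        -- Hypothetical layout: one content database per mount point.
        -- The M:\SPData\... mount-point paths and database names are placeholders.
        CREATE DATABASE WSS_Content_01
        ON PRIMARY (NAME = WSS_Content_01, FILENAME = 'M:\SPData\Content01\WSS_Content_01.mdf', SIZE = 20GB)
        LOG ON (NAME = WSS_Content_01_log, FILENAME = 'M:\SPData\Logs01\WSS_Content_01_log.ldf', SIZE = 5GB);

        CREATE DATABASE WSS_Content_02
        ON PRIMARY (NAME = WSS_Content_02, FILENAME = 'M:\SPData\Content02\WSS_Content_02.mdf', SIZE = 20GB)
        LOG ON (NAME = WSS_Content_02_log, FILENAME = 'M:\SPData\Logs02\WSS_Content_02_log.ldf', SIZE = 5GB);

    If one database turns out to be hot, its mount point can be re-carved or moved on the SAN without touching the others.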

    The other thing to consider is that larger volumes often perform far better than small ones. So let's say you request 30 GB and that ends up on one hyper (the actual number of disks depends on the RAID config): the performance is going to be horrible compared to 120 GB made up of 4 hypers. You may find that grouping ten content databases together increases your number of spindles and improves performance of the really active database(s).

    The best approach is to work closely with your SAN team and verify performance with SQLIO prior to installing SQL Server; an example run follows below.
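
    As a rough example only (the target file, duration, and block size are arbitrary, and the test file should be pre-created large enough to exceed the SAN cache), an 8 KB random-write SQLIO run with latency reporting could look like:

        REM Hypothetical SQLIO run: 8 KB random writes for 300 seconds,
        REM 8 outstanding I/Os, latency statistics enabled.
        sqlio -kW -s300 -frandom -o8 -b8 -LS E:\SQLData\testfile.dat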

    Dan

    On the tempdb, more than likely you will not need a spindle for each core. My guess is the SAN admin already has an array created and will slice a LUN out of it. It's going to have many spindles within already; as long as that is the case, you should be good to go.

    http://technet.microsoft.com/en-us/library/cc966545.aspx

    "Divide tempdb into multiple data files of equal size. These multiple files don’t necessarily be on different disks/spindles unless you are also encountering I/O bottlenecks as well. The general recommendation is to have one file per CPU because only one thread is active per CPU at one time. SQL Server allocates pages for tempdb objects in a round-robin fashion (also referred to as proportional fill) so that the latches on PFS and SGAM pages are distributed among multiple files. This is supported both in SQL Server 2000 and SQL Server 2005. There are improvements to the proportional fill algorithm in SQL Server 2005."
