SAN/CPUs, SQL 2005 Enterprise Edition

  • I have a large DB (now approx. 800 GB) that I am going to put on a SAN. I have a server with 4 physical CPUs (2 cores each). I am using Enterprise Edition.

    If I create 8 LUNs and place my database files on them (I have one .mdf and one .ndf), can my server gain some performance from parallelism, and does parallelism also apply to the I/O operations?

    Will it degrade performance to have 8 files (forget about the log files; I have them on a separate LUN on separate discs) sharing the same physical discs, compared to having 1 file on the same physical discs in RAID 5 or RAID 10?

    Thanks in advance.

    Søren

    Søren,
    MCITP SQL 2008 (administration / programming), MCITP SQL 2005 (BI), MCT

  • Ask your SAN vendor, since they all have different ways to configure their SANs.

  • As long as the LUNs are PHYSICAL LUNs and not virtual/shared LUNs (meaning - based on different disks and controllers, not just a partitioned version of the same disk set), then by and large, that would allow for higher parallelism on the storage. Of course, most data access patterns tend to show rather "clustered" access at any given time (meaning - a lot of people are accessing data in the same general area at any given time, like today's invoices), so it's certainly not going to be a linear progression. In fact - it's possible you won't see ANY benefit.

    As to multiple files vs. a single file - Paul Randal discussed it a little while ago on here. My take on what he said was this: the "1 file per core" rule of thumb tends to apply only to tempDB, and might be a high ratio even there. Only a very few database setups with VERY high activity levels will be able to leverage gains from spreading data over that many files. Meaning - it's not so much that it will "hurt" performance; you just aren't likely to see much gain past a few files. This is even more true if the files are on the same LUN/volume.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?

  • I like the 1 file per core rule as a guideline. Each separate file should get a separate thread to handle its I/O work, so if you go with that, you should be OK in terms of processing the I/O.

    I'm not sure I see multiple files hurting performance. It's more that recovery and restore become administratively more work.

  • Agreed, don't confuse filegroups and files. You can gain performance by creating files within a filegroup, usually 1 per core. Filegroups, for performance, should be on separate physical spindles (e.g. you should have dedicated LUNs), and once again 1 file per CPU per filegroup.
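    As a sketch of what that looks like (the database name, filegroup name, paths, and sizes below are all hypothetical), adding multiple files to a dedicated filegroup is just a series of ALTER DATABASE statements:

    ```sql
    -- Hypothetical names/paths: one dedicated filegroup with one file per core,
    -- each file ideally on its own LUN/spindle set.
    ALTER DATABASE MyBigDB ADD FILEGROUP DataFG;

    ALTER DATABASE MyBigDB
    ADD FILE (NAME = DataFG_1, FILENAME = 'E:\Data\DataFG_1.ndf',
              SIZE = 50GB, FILEGROWTH = 5GB)
    TO FILEGROUP DataFG;

    ALTER DATABASE MyBigDB
    ADD FILE (NAME = DataFG_2, FILENAME = 'F:\Data\DataFG_2.ndf',
              SIZE = 50GB, FILEGROWTH = 5GB)
    TO FILEGROUP DataFG;
    -- ...repeat per core/LUN, then create objects ON DataFG
    -- (or make DataFG the default filegroup).
    ```

    Keeping the files the same size matters, because SQL Server's proportional fill algorithm spreads writes across files based on free space.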

    That said, increasing memory will offset disk reads, which is where your multiple files gain most. Unusually, I've just set up an OLTP server with more memory than the size of the database; it has been interesting to note that physical reads have dropped to almost zero, so in my case the multiple files are unlikely to have much impact. With bigger databases it's subjective and depends upon how much of the data set is being used. Again, I upped memory on bigger servers with around 1 TB of databases, and the drop in physical reads was seriously noticeable, so I'd suggest you ramp up the memory as high as you can.

    Steve is right in noting that lots of files can become more of an administrative headache, especially if you happen to restore prod backups for dev or testing (suitably data cleansed).

    Whatever you do, you should create tempdb with multiple files.
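    For tempdb that would look something like this sketch (file names, paths, and sizes are hypothetical; tempdb file changes take effect after a service restart):

    ```sql
    -- Sketch: add extra tempdb data files, all the same size so that
    -- proportional fill distributes allocation contention evenly.
    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
              SIZE = 4GB, FILEGROWTH = 512MB);

    ALTER DATABASE tempdb
    ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
              SIZE = 4GB, FILEGROWTH = 512MB);
    -- ...up to the number of files appropriate for your core count.
    ```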

    Sadly despite some testing I've never been able to measure the impact of multiple files.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • You really should hire an expert SAN-experienced DBA to guide you in setting up this new environment. I can GUARANTEE that you will not get optimal performance if you don't do this. There are a LOT of decisions that go into proper configuration.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Get your SAN vendor involved. For homework beforehand, be ready to describe the current physical and logical disk setup of your existing server, including SCSI controllers, and current I/O rates in MB/sec for each array/spindle.

    One common issue that many folks run into is that SAN/NAS connection speeds are quoted in megabits or gigabits per second, not megabytes/gigabytes per second. That's right - a single U320 (320 MB/sec) controller is potentially 2.5 times faster than a 1 Gb HBA (122 MB/sec), and if you're already running multiple SCSI controllers spread across multiple buses in your server, you'll need a similar configuration of HBAs, fabric connections, etc. to duplicate your existing performance.

    One last caveat: the latest, greatest 4 Gb/sec (~480 MB/sec each) HBAs are awesomely fast - with the catch that many older servers (say, more than a year old) can't move data across their PCI buses fast enough to keep up with them. The same goes for any multi-connection HBA or network card: it does very little good to have 4 x 1 Gb/sec connections on a single card when the PCI bus can only move data so fast.

  • Hello Everybody,

    Thanks for your replies. They have been very helpful. About the SAN, I can tell you that it was configured by a vendor and is optimized for SQL Server data. It is not something I can change, even though I am a little bit concerned after having read all your replies and searched the internet.

    I have also tested a method where I script the database to a new database and then move the data using SSIS. That works. My problem is: I have only tested it on 40 GB and not 8-900 GB.

    I am very concerned about the time the data transfer will consume when I go from the test environment and 40 GB to 8-900 GB.

    Does anyone have any idea about the time consumption with an 8-900 GB database? (I know that it depends on the hardware, configuration, etc., but all experiences are welcome anyway.)

    Søren

    Søren,
    MCITP SQL 2008 (administration / programming), MCITP SQL 2005 (BI), MCT

  • You can easily test 900 GB file sizes (well, actually, with a database that size you should be using filegroups of smaller files that net to 900+ GB). Check out SQLIO and IOMeter.

    BTW, if your SAN was set up by a vendor consultant do not expect it to be optimal for SQL Server usage.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • Thx. I will use the tools.

    By the way: your comment about the SAN vendor is exactly my problem. I have told my co-workers about my concerns, e.g. the issues with physical discs and LUNs sharing physical discs.

    However, I have no chance of changing the configuration, and therefore I can only take responsibility for the conversion, not for the performance. That is a shame, but that's the situation.

    Thx. very much for your help.

    Søren,
    MCITP SQL 2008 (administration / programming), MCITP SQL 2005 (BI), MCT

  • Forget the rule of one LUN or file per processor. Is that the Star Wars Sith rule or something? :) That was an old SQL 2000 theory.

    In 2005 and later, since the concept of tablespaces is only practiced by Oracle DBAs, table partitions were added. A tablespace, or filegroup, lets you break your data into files and helps prevent fragmentation and/or complete table scans. I may have 12 filegroups with one file each, with a table partitioned by month, so each file holds only that month's data on LUN 1. Or I may create a filegroup with a table partition and span it across 4 LUNs. What you need to do is set up your disk perfmon counters and get to know your disk characteristics.
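    The by-month scheme described above can be sketched roughly like this (table, function, and filegroup names, plus the boundary dates, are all hypothetical; this assumes the monthly filegroups already exist):

    ```sql
    -- Sketch of monthly table partitioning (SQL Server 2005+ Enterprise).
    -- RANGE RIGHT: each boundary value starts a new partition.
    CREATE PARTITION FUNCTION pfMonthly (datetime)
    AS RANGE RIGHT FOR VALUES ('2009-01-01', '2009-02-01', '2009-03-01');

    -- Map each partition to its own filegroup (one more filegroup than boundaries).
    CREATE PARTITION SCHEME psMonthly
    AS PARTITION pfMonthly TO (FG_Older, FG_Jan09, FG_Feb09, FG_Mar09);

    -- Rows land in the filegroup for their month automatically.
    CREATE TABLE dbo.Invoices (
        InvoiceID   int      NOT NULL,
        InvoiceDate datetime NOT NULL
    ) ON psMonthly (InvoiceDate);
    ```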

    Here are some good reference docs.

    http://www.dell.com/sql

  • I have actually found a way of splitting the database. The short version is that I use DBCC SHRINKFILE with the EMPTYFILE parameter.

    I first create 8 more files, and then I use EmptyFile to move my data.
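    For anyone following along, the approach described above is roughly this sketch (database and file names are hypothetical; the new files must be added before emptying the old one):

    ```sql
    -- Sketch: after adding the new .ndf files, push all data out of the old file.
    -- EMPTYFILE moves the file's data to the other files in the same filegroup.
    DBCC SHRINKFILE (N'OldDataFile', EMPTYFILE);

    -- Once empty, the file can be removed from the database.
    ALTER DATABASE MyBigDB REMOVE FILE OldDataFile;
    ```

    Note that system allocation pages in the primary file cannot be moved this way, which is exactly the error mentioned below.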

    One small problem is that it stops in the middle and says:

    DBCC SHRINKFILE: System table SYSFILES1 Page 1:4601107 could not be moved to other files because it only can reside in the primary file of the database.

    Is it possible to create a new primary file for the database instead of creating 8 secondary files (.ndf) - i.e. have one be a new primary and 7 be secondary?

    Søren,
    MCITP SQL 2008 (administration / programming), MCITP SQL 2005 (BI), MCT

    Just to cheer you up: experience shows that many SAN vendors and partners have little ability to correctly set up a SAN for SQL Server. There have been several threads about this, and sadly my personal experience has also shown it to be true.

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • I have to agree with TheSQLGuru re: the vendor setting up the SAN...from painful past experience. If you haven't started using it yet, then you may stand a chance of getting things changed for the better. Once in and operational, you'll find it very difficult to change things at an in-depth level.

    Have to also agree with Matt Miller re: Paul Randal's comments on multiple 'files' (not filegroups). It's worth reading his blog on this and then making your own mind up. We have used multiple files on a few databases, but with the view of splitting them off to other LUNs if required for performance (i.e. increase the I/O capacity quickly without having to break an active LUN). More management, though. Here are the links to his blog...

    http://www.sqlskills.com/blogs/paul/2008/01/29/SearchEngineQA12ShouldYouCreateMultipleFilesForAUserDBOnAMulticoreBox.aspx

    http://www.sqlskills.com/blogs/paul/2008/01/29/TheFrustrationOfSweepingGeneralizationsFollowOnFromSearchEngineQA12.aspx
