SQL Server general administration: I/O times and Page Life Expectancy

  • Yup. There is zero benefit to multiple log files. Ever.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • If you see Page Life Expectancy is less than 300 seconds, then you have to increase the physical memory (RAM). If physical RAM is low, physical I/Os also increase, because page life in the buffer cache is low and the engine reads and drops pages on demand from the buffer cache.

    When you talk about tempdb, first you have to check whether your hardware is NUMA or not. If the hardware is NUMA, then you have to find how many NUMA groups there are; the number of mdf and ldf files then depends on the NUMA groups, otherwise on the available logical processors. You should put the tempdb files on a separate disk.

    Note: multiple mdf, ndf, and ldf files gain performance only if you have multiple disks, not in the case of multiple partitions.

    Regards,
    Syed Jahanzaib Bin Hassan
    BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA

    My Blog
    www.aureus-salah.com
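    As a quick aside on checking the counter being discussed: Page Life Expectancy can be read straight from the DMVs rather than from PerfMon. A minimal sketch using the standard sys.dm_os_performance_counters DMV (the column aliases are my own; on NUMA hardware this returns one Buffer Node row per node plus the overall Buffer Manager value):

    ```sql
    -- Read current Page Life Expectancy (seconds) from the performance counter DMV.
    -- 'Buffer Manager' = server-wide value; 'Buffer Node' rows = per-NUMA-node values.
    SELECT [object_name],
           instance_name AS numa_node,     -- node id for Buffer Node rows
           cntr_value    AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy';
    ```

    Sampling this on a schedule is one way to build the baseline of "normal" for a given server instead of comparing against a fixed threshold.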

  • Syed Jahanzaib Bin Hassan (6/8/2011)


    When you talk about tempdb, first you have to check whether your hardware is NUMA or not. If the hardware is NUMA, then you have to find how many NUMA groups there are; the number of mdf and ldf files then depends on the NUMA groups, otherwise on the available logical processors. You should put the tempdb files on a separate disk.

    Reference for the NUMA claims?

    There is NO performance benefit whatsoever from multiple ldf files, for TempDB or for any other database. SQL uses log files serially (one at a time) regardless of the number of cores, the NUMA settings or any other property of the server.

    Edit: To see how SQL works with multiple log files, see http://sqlskills.com/blogs/jonathan/post/An-XEvent-a-Day-%2823-of-31%29-e28093-How-it-Works-e28093-Multiple-Transaction-Log-Files.aspx

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster, you seem to know what you're talking about! I really appreciate the feedback!

    :cool:

    "There are no problems! Only solutions that have yet to be discovered!"

  • Syed Jahanzaib Bin Hassan (6/8/2011)


    If you see Page Life Expectancy is less than 300 seconds, then you have to increase the physical memory (RAM). If physical RAM is low, physical I/Os also increase, because page life in the buffer cache is low and the engine reads and drops pages on demand from the buffer cache.

    When you talk about tempdb, first you have to check whether your hardware is NUMA or not. If the hardware is NUMA, then you have to find how many NUMA groups there are; the number of mdf and ldf files then depends on the NUMA groups, otherwise on the available logical processors. You should put the tempdb files on a separate disk.

    Note: multiple mdf, ndf, and ldf files gain performance only if you have multiple disks, not in the case of multiple partitions.

    What nonsense. All of it.

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • I agree that 300 seconds is only a Microsoft recommendation, not a requirement. I have a server that stays below that quite often. Could it use more memory? Sure. Is it a requirement? No.

    :cool:

    "There are no problems! Only solutions that have yet to be discovered!"

  • bopeavy (6/8/2011)


    I agree that 300 seconds is only a Microsoft recommendation, not a requirement. I have a server that stays below that quite often. Could it use more memory? Sure. Is it a requirement? No.

    And it's a pretty poor recommendation at that these days (especially on larger servers). It's one of those counters where you really need to know what is normal so that you know what's not normal, and then bear in mind what it means, not just quote numbers mindlessly.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Hope this helps: http://www.sqlmag.com/blogs/sql-server-questions-answered/sql-server-questions-answered/tabid/1977/blogid/59/Default.aspx

    Paul Randal
    CEO, SQLskills.com: Check out SQLskills online training!
    Blog:www.SQLskills.com/blogs/paul Twitter: @PaulRandal
    SQL MVP, Microsoft RD, Contributing Editor of TechNet Magazine
    Author of DBCC CHECKDB/repair (and other Storage Engine) code of SQL Server 2005

  • For the tempDB questions, my advice differs a little from what others are saying.

    I agree 100% that most systems do not need 1 data file per CPU (that's logical CPU; NUMA doesn't even enter into the equation). There are certain caveats you need to consider though. Do you know how to immediately identify tempDB contention? Are you actively monitoring for it? If you don't answer yes to both of these questions, then use 1 data file per CPU. The slight performance hit that you get for having more files than you may need is negligible when compared to the impact of tempDB contention that can last for a long time.

    Also, consider the approach that Gila Monster recommended: start with a lower number of tempDB files, and if you experience tempDB contention, increase incrementally until it goes away. Here, you have to make a judgement call regarding the criticality of your server. TempDB contention can bring your server to its knees, so consider this to be unplanned downtime. Does your SLA allow you to be down long enough for this approach? How much downtime are you willing to incur to systematically find the "sweet spot"?

    Additionally, there is no hard evidence to support the statement that there is a performance hit for using more than 8 data files versus using 8 data files. There is additional overhead as you add more files, but there isn't a significant performance drop when you get above 8 data files. I strongly disagree with that statement.

    Testing has shown that some systems do still need 1 data file per CPU, even on today's very large systems with 48 CPUs or more. This has been proven in testing labs. Ultimately, it comes down to the number of concurrent tasks using tempDB (which is just about everything). If your system never has more than 8 concurrent operations running, then more than 8 data files for tempDB will not give you any benefit. If you can have several hundred concurrent operations running, then you will get the maximum benefit of having many tempDB files, not to exceed the number of CPUs.

    Equally important is that you configure the data files correctly. The files need to be all the same size and they need to be pre-sized to avoid having a file grow. SQL uses the amount of free space in its calculation to determine which file to hit next, and if 1 file is larger than the others and has more free space, all traffic will hit that single file. I generally size out my tempDB files to use about 90% of the tempDB drive and disable file growth on the data files.


    My blog: SQL Soldier
    SQL Server Best Practices: SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management
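    The "immediately identify tempDB contention" check mentioned above is usually done by looking for tasks waiting on page latches for tempdb allocation pages. A hedged sketch using the standard sys.dm_os_waiting_tasks DMV (tempdb is database_id 2, so its page resources start with "2:"):

    ```sql
    -- Tasks currently waiting on page latches in tempdb.
    -- resource_description is formatted <db_id>:<file_id>:<page_id>;
    -- repeated waits on PFS/GAM/SGAM pages indicate allocation contention.
    SELECT session_id,
           wait_type,
           wait_duration_ms,
           resource_description
    FROM sys.dm_os_waiting_tasks
    WHERE wait_type LIKE 'PAGELATCH%'
      AND resource_description LIKE '2:%';
    ```

    Polling this during busy periods (or alerting on it) is one way to answer "am I actively monitoring for tempDB contention?" before deciding how many files to use.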

  • Robert Davis


    For the tempDB questions, my advice differs a little from what others are saying.

    I agree 100% that most systems do not need 1 data file per CPU (that's logical CPU; NUMA doesn't even enter into the equation). There are certain caveats you need to consider though. Do you know how to immediately identify tempDB contention? Are you actively monitoring for it? If you don't answer yes to both of these questions, then use 1 data file per CPU. The slight performance hit that you get for having more files than you may need is negligible when compared to the impact of tempDB contention that can last for a long time.

    Also, consider the approach that Gila Monster recommended: start with a lower number of tempDB files, and if you experience tempDB contention, increase incrementally until it goes away. Here, you have to make a judgement call regarding the criticality of your server. TempDB contention can bring your server to its knees, so consider this to be unplanned downtime. Does your SLA allow you to be down long enough for this approach? How much downtime are you willing to incur to systematically find the "sweet spot"?

    Additionally, there is no hard evidence to support the statement that there is a performance hit for using more than 8 data files versus using 8 data files. There is additional overhead as you add more files, but there isn't a significant performance drop when you get above 8 data files. I strongly disagree with that statement.

    Testing has shown that some systems do still need 1 data file per CPU, even on today's very large systems with 48 CPUs or more. This has been proven in testing labs. Ultimately, it comes down to the number of concurrent tasks using tempDB (which is just about everything). If your system never has more than 8 concurrent operations running, then more than 8 data files for tempDB will not give you any benefit. If you can have several hundred concurrent operations running, then you will get the maximum benefit of having many tempDB files, not to exceed the number of CPUs.

    Equally important is that you configure the data files correctly. The files need to be all the same size and they need to be pre-sized to avoid having a file grow. SQL uses the amount of free space in its calculation to determine which file to hit next, and if 1 file is larger than the others and has more free space, all traffic will hit that single file. I generally size out my tempDB files to use about 90% of the tempDB drive and disable file growth on the data files.

    You have made some good suggestions. Yes, I monitor, using Spotlight along with many queries against DMVs in my arsenal.

    What kind of DBA does not monitor? (An underpaid one!)

    But as I have come to find out, it is an art that must be changed or tweaked according to the environment your server is in.

    :cool:

    "There are no problems! Only solutions that have yet to be discovered!"

  • Well said, Paul.

  • bopeavy (6/8/2011)


    But as I have come to find out, it is an art that must be changed or tweaked according to the environment your server is in.

    Agree 100%


    My blog: SQL Soldier
    SQL Server Best Practices: SQL Server Best Practices
    Twitter: @SQLSoldier
    My book: Pro SQL Server 2008 Mirroring
    Microsoft Certified Master: SQL Server, Data Platform MVP
    Database Engineer at BlueMountain Capital Management
