Performance Tuning with SQL Server 2014 Standard

  • Hi,

    I'm sure this is probably trivial for many of you as seasoned DBAs, but as a database developer I have a system performance situation that I'm hoping you can help guide me on. I have an application with a fairly transactional database, and a reporting tool used by many people that executes views created in the main application's database (for real-time reporting). The problem is that each application impacts the performance of the other, because CPU sits at 100% while certain views run.

    I'm sure there are some low-hanging fruit here, but just so you have the whole picture, here's some information about the setup.

    -SQL Server 2014 Standard - Max server memory is set to 20GB, MAXDOP is set to 4, Cost Threshold for Parallelism is at the default of 5 (see the settings check sketched just after this list)

    -Windows Server 2012 R2

    -VM with 32GB memory, 1 socket and 8 virtual processors
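
    For reference, here's roughly how I've been checking those instance-level values (just a simple sys.configurations query, nothing exotic):

        -- Confirm the instance-level settings listed above
        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name IN (N'max server memory (MB)',
                       N'max degree of parallelism',
                       N'cost threshold for parallelism');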

    The application with the fairly transactional database generally uses only about 10-15% of CPU, and it's only when reports get run (and therefore the views get run) that the CPU really takes a hit. Ideally, I would prevent any views or queries like that from interfering with any of the 10-15% of CPU that the application needs. I've been doing some research and here are some of my thoughts to address this. I would be very interested to hear how you might approach correcting this issue or if some of these options would be completely useless.

    - Rebuild indexes - I know the database is highly fragmented right now

    - Rewrite views - although I know that some are as optimized as I can make them

    - Increase max server memory of SQL Server / the VM itself

    - Update the values of sockets, virtual processors, and MAXDOP to a more optimized combination

    - Use affinity mask to make SQL think there are fewer CPUs

    In your experience, have any of these proved to be consistently effective in reducing CPU utilization? Is there another option that I'm forgetting?

    Again, my primary goal is to keep the reports that users run (sometimes repeatedly) from the reporting tool from interfering with the main application's performance, even if it takes the reports longer to return results.

    Thanks,

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Adding memory will almost always help some, but there's really only one way to fix this problem, and everyone tries to avoid it with trickery that usually doesn't work. You need to finally bite the bullet and fix the bloody code in the views.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Increase your cost threshold. Far too low.

    Tune your code, tune your indexes. If the views are nested, fix that first.
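
    Something like this (a rough sketch using sys.sql_expression_dependencies; adjust as needed) will show which views reference other views:

        -- Sketch: list views that reference other views (nested views)
        SELECT v.name  AS referencing_view,
               rv.name AS referenced_view
        FROM sys.views AS v
        JOIN sys.sql_expression_dependencies AS d
             ON d.referencing_id = v.object_id
        JOIN sys.views AS rv
             ON rv.object_id = d.referenced_id
        ORDER BY v.name, rv.name;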

    Consider some form of reporting server to offload the reports to.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you both for your replies. A few follow-up questions:

    Regarding cost threshold, I've read many articles that suggest 50 is a good starting point. However, some of the queries and views I'm working with have a cost of 6900+, so they would use parallelism regardless, right? But OK, if there's good reason to set it to something higher than 5, is there a concern that doing so would negatively impact the queries issued by the main application (because I could see that its queries may be less expensive than 50)?

    Is there a rule of thumb you use for setting this value in general?

    I don't think we're using very many nested views, but I'll check and correct as needed.

    With tuning code and indexes, do you mean creating new indexes on fields we're using in joins as a means to reduce cost? And would rebuilding the indexes in the database (which is fragmented) help with query cost too?

    Thanks again,

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • mikes84 (8/6/2016)


    Regarding cost threshold, I've read many articles that suggest 50 is a good starting point. However, some of the queries and views I'm working with have a cost of 6900+, so they would use parallelism regardless, right?

    Yes, they would.

    But OK, if there's good reason to set it to something higher than 5, is there a concern that doing so would negatively impact the queries issued by the main application (because I could see that its queries may be less expensive than 50)?

    The point of setting it higher is that really cheap queries generally don't benefit from parallelism. The overhead imposed is usually higher than any gain, so we want the cheap queries to run serially and the expensive queries to run in parallel.
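
    If you do decide to raise it, it's only a couple of statements (a sketch; 50 is just a common starting point, not a magic number, and cost threshold is an advanced option so it has to be exposed first):

        -- Sketch: raise cost threshold for parallelism (50 is a common starting point)
        EXEC sys.sp_configure N'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure N'cost threshold for parallelism', 50;
        RECONFIGURE;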

    With tuning code and indexes, do you mean creating new indexes on fields we're using in joins as a means to reduce cost? And would rebuilding the indexes in the database (which is fragmented) help with query cost too?

    No. The Query Optimiser knows nothing about fragmentation.

    I mean adding/changing/removing indexes as necessary to support the queries that run against the tables.
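
    As a starting point, the missing index DMVs can hint at which tables the expensive queries are hitting hardest (a sketch only; treat the output as hints to investigate, not as indexes to create blindly):

        -- Sketch: surface the highest-impact missing index suggestions
        SELECT TOP (20)
               mid.statement        AS table_name,
               migs.user_seeks,
               migs.avg_user_impact,
               mid.equality_columns,
               mid.inequality_columns,
               mid.included_columns
        FROM sys.dm_db_missing_index_group_stats AS migs
        JOIN sys.dm_db_missing_index_groups AS mig
             ON mig.index_group_handle = migs.group_handle
        JOIN sys.dm_db_missing_index_details AS mid
             ON mid.index_handle = mig.index_handle
        ORDER BY migs.avg_user_impact * migs.user_seeks DESC;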

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thanks so much for the explanations! Very helpful.

    I'm going to work on all of the items in your and Jeff's recommendations.

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • As a bit of a sidebar, unless you've made the mistake of shrinking your database, which causes the worst kind of fragmentation possible, fragmentation usually isn't what really costs you in queries. It's usually bad code and a lack of properly updated statistics.

    A lot of people have come to the conclusion that rebuilding indexes fixes a lot of performance issues. In the old days, before SANs, that was partially true, but what's important now is enabling the optimizer to figure out the best plan for the query. In order to do that, stats must be kept up to date. Since rebuilding indexes inherently rebuilds the stats, a lot of folks think that normal fragmentation is what's killing the query when, in reality, it was the rebuilding of the stats that "fixed" the query.

    I'll sometimes rebuild an index to recover some space caused by page splits but, for the most part, I don't rebuild or even reorganize indexes anymore and haven't since 18 Jan 2016. It sounds totally contradictory and flies in the face of all that we've been taught, but performance of both OLTP and heavy batch processing has actually improved on my main "money maker" machine since I stopped doing index maintenance. I have, however, increased how often I rebuild stats, especially on tables that have ever-increasing clustered indexes.
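
    For a single table, that's just one statement (a sketch; dbo.Transactions is a made-up name here, and FULLSCAN can be overkill on very large tables):

        -- Sketch: dbo.Transactions is a hypothetical table name.
        -- FULLSCAN gives the optimizer the best information but costs more to run;
        -- a sampled update is often good enough on very large tables.
        UPDATE STATISTICS dbo.Transactions WITH FULLSCAN;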

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff,

    That's some interesting insight. I don't think I would have necessarily thought updating one was more important than updating the other. I'll make sure statistics get updated frequently enough to hopefully improve performance, or at least remove them from the list of potential culprits.

    Again, thanks.

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • mikes84 (8/7/2016)


    Jeff,

    That's some interesting insight. I don't think I would have necessarily thought updating one was more important than updating the other. I'll make sure statistics get updated frequently enough to hopefully improve performance, or at least remove them from the list of potential culprits.

    Again, thanks.

    Mike

    Also remember not to waste your time. If you use the age of the stats (the last time they were updated) as the trigger, you're doing it wrong, because I'm sure that, like me, you have a huge number of 100% static lookup tables that only change maybe once a year. Some of them, like a ZIPCODE or NPA/NXX table, can be quite large and will take a fair bit of time to rebuild stats on. Rebuilding stats on such a static table, regardless of how old those stats may be, is a total waste of time.
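
    A rough way to see which stats have actually had changes since they were last updated (sys.dm_db_stats_properties is available from 2008 R2 SP2 / 2012 SP1 on, so 2014 has it) is something like:

        -- Sketch: find statistics with real churn since their last update,
        -- so 100% static lookup tables (zero modifications) can be skipped.
        SELECT OBJECT_NAME(s.object_id) AS table_name,
               s.name                   AS stats_name,
               sp.last_updated,
               sp.rows,
               sp.modification_counter
        FROM sys.stats AS s
        CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
        WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
          AND sp.modification_counter > 0
        ORDER BY sp.modification_counter DESC;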

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thank you. I'll keep all of this in mind.

    Mike Scalise, PMP
    https://www.michaelscalise.com

  • Gail,

    I know you suggested,

    "Tune your code, tune your indexes. If the views are nested, fix that first.

    Consider some form of reporting server to offload the reports to."

    I'm just curious--would you fix nested views before setting up a reporting server? I know Jeff suggested fixing the code in the views as the real and best way to address the issue, but in terms of stopping the bleeding and buying some time to do that, would setting up transactional replication (or something similar) be a good option?

    As a side note, I've started reading "Troubleshooting SQL Server: A Guide for the Accidental DBA" to get an idea of what DMVs I can run to see whether other things may be causing bottlenecks (e.g., lack of indexes, out-of-date statistics on fairly transactional tables, settings within SQL Server, etc.).
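
    For example, I'm planning to start with a top-waits query along these lines (a sketch adapted from the usual examples; the exclusion list of benign waits isn't exhaustive):

        -- Sketch: top waits since the last restart, ignoring some benign system waits
        SELECT TOP (10)
               wait_type,
               wait_time_ms / 1000.0        AS wait_time_s,
               signal_wait_time_ms / 1000.0 AS signal_wait_s,
               waiting_tasks_count
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                                N'XE_TIMER_EVENT', N'BROKER_TASK_STOP',
                                N'REQUEST_FOR_DEADLOCK_SEARCH', N'CHECKPOINT_QUEUE')
        ORDER BY wait_time_ms DESC;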

    Thanks again to both of you for your insight.

    Mike

    Mike Scalise, PMP
    https://www.michaelscalise.com
