Perfmon Counters For a Poor-Performing Data Warehouse

  • P.S. Unless the server has tons of free resources (CPU, RAM, and I/O almost idle), I wouldn't go with the missing index recommendations without proving they actually help one or more costly queries.

    It's a dangerous quick fix to add all those indexes blindly. It slows down maintenance, imports, backups, restores, etc., with possibly zero gain on the selects.

    Since this is almost a read-only system, I would not mind seeing 10-20 indexes on a table to support the frequently used queries.

  • Since this is a reporting system, you might not want to go as high with the cost threshold; at 25-30, your average cost of 35 would still get parallelism.

    CXPacket waits do imply parallel processing. Probably, but not definitely, it's the code, the indexes, or both. Identify the costliest plans, whether by frequency of call, length of execution, or CPU cost (or all of the above); one way to dig them out of the plan cache is sketched below. Find the 2-3 worst-performing queries and then check the execution plans. See what they're doing. You may identify a coding practice that appears in lots of queries, or a central table with poor indexing, and get a huge win for very little effort. Then again, you may just have a long slog of tuning in front of you. Either way, you have to work from a position of knowledge.

    That's the best approach.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks to both of you for your guidance on this. I certainly believe that tuning will bring great benefits here! I just need to prove it! 🙂

    Are there any of the perfmon counters (or any other measurements you've had me take) that would indicate we should be looking at tuning rather than at I/O bottlenecks, memory upgrades, CPU upgrades, etc.?

    I'd like to say to my manager something like "I'd like a few days to spend tuning our database. The poor performance we've seen is not due to the hardware because of the following reasons..... It therefore must be because of the database and index design".

    I don't have enough knowledge to back up my conviction that tuning will bring benefits.

  • davas (9/12/2011)


    Thanks to both of you for your guidance on this. I certainly believe that tuning will bring great benefits here! I just need to prove it! 🙂

    Are there any of the perfmon counters (or any other measurements you've had me take) that would indicate we should be looking at tuning rather than at I/O bottlenecks, memory upgrades, CPU upgrades, etc.?

    I'd like to say to my manager something like "I'd like a few days to spend tuning our database. The poor performance we've seen is not due to the hardware because of the following reasons..... It therefore must be because of the database and index design".

    I don't have enough knowledge to back up my conviction that tuning will bring benefits.

    The only proof you can have is a before-and-after picture. For that you need to go after the fat queries. No way around it.

    Maybe ask for 2 days to prove your theory by finding and tuning at least the 1-2 worst offenders, then see if those reports run better (with before and after run times).

    Then ask for more time.

    I've seen 90%+ system-wide improvement using the articles I sent you (10 hours' worth of work over 5 days). Not to be underestimated, and not to be mistaken for the easy button. It'll require work on your end (or someone else's).

  • The performance counters would primarily be waits and queues, but you need to combine perfmon with query behavior; you can't just measure one or the other. What's the page life expectancy (a query for it is sketched below)? That's one of the better indicators for memory. Disk queues (individual, not average) and CPU queues are the other fundamental measures. After that, you have to get granular. The problem is, you can have insufficient memory but see CPU queueing (more compiles due to the memory pressure). It's a whole big picture that you have to paint. But if you can identify queries that are taking up X amount of CPU and are called thousands of times a minute, that will help your case.

    The most common problem is code, followed closely by indexing. However, don't discount the hardware. That's the foundation on which everything lives.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Ninja's_RGR'us (9/12/2011)


    Can you run this version?

    GO
    SELECT  optz.counter
          , optz.occurrence
          , CONVERT(DECIMAL(18, 2), optz.value) AS AvgValue
          , conf.name
          , conf.value
          , conf.value_in_use
          , Uptime.DaysUptime AS [Days Server UPTIME & Last Stats Reset]
          , CASE WHEN Uptime.DaysUptime < 45 THEN 'You may not have very meaningful stats because of a recent restart' ELSE NULL END AS [Stats Warning]
          , CASE WHEN optz.value < conf.value THEN 'Cost threshold for parallelism might be too low' ELSE NULL END AS [Cost Threshold Warning]
          , CASE WHEN conf.value_in_use <> conf.value THEN 'Server needs to be restarted for the setting to take effect' ELSE NULL END AS [Restart Warning]
    FROM    sys.dm_exec_query_optimizer_info optz
            CROSS JOIN sys.configurations conf
            OUTER APPLY (
                SELECT CONVERT(DECIMAL(18, 2), DATEDIFF(hh, create_date, GETDATE()) / 24.0) AS DaysUptime
                FROM   sys.databases
                WHERE  name = 'tempdb'
            ) Uptime
    WHERE   optz.counter = 'final cost'
            AND conf.name = 'cost threshold for parallelism'
    GO

    Ninja's_RGR'us,

    Can you tell me what the above code tells you? I'd like to use it to compare two SQL Servers and somehow assess the relative average complexity of a query. Will this figure tell me that?

    Thanks in advance!

  • That's precisely why I don't share this code (too much).

    It can lead to the wrong conclusion and to fixing symptoms rather than the cause. CXPACKET is not a problem per se; it mainly means that you have something running on more than one core at a time, which, for a big query, is the right solution.

    The idea behind this script (unproven, but it makes some kind of sense) is to compare your current setting for the cost threshold for parallelism vs. the average cost calculated by the optimizer (which is only a guess).

    So while that average is mathematically correct, we're still talking about a guess.

    Then you have to factor in that 90%+ of systems are a mix of OLTP and OLAP, but at what ratio, and how long do the OLAP queries run?

    The idea here is to see if the setting comes in under the average cost calculated by the optimizer and, if so, to maybe raise it.

    Raise it to what is more of a gut call than anything else (I have superb instinct, but that is still largely a guess).

    So, all in all: it depends ;-).

  • I'm not looking to make changes to system settings as such... just trying to understand what I'm looking at. I'd like to get a baseline value for a system and compare it against another system, and I was thinking that your query might be useful.

    From reading the documentation at http://technet.microsoft.com/en-us/library/ms175002(SQL.90).aspx I'm making a big assumption that I could compare two systems using a query against this DMV.

    SELECT counter AS Counter,
           occurrence AS NumOptimizations,
           value AS AvgValue
    FROM   sys.dm_exec_query_optimizer_info
    WHERE  counter IN ('optimizations', 'tables',
                       'insert stmt', 'update stmt',
                       'delete stmt', 'final cost',
                       'elapsed time');

    The result (according to my little brain) would allow me to compare two systems' workloads. The average values for Total Cost, Elapsed Time, and Tables would give me an indication of how the server is being used.

    The results for the server previously mentioned in this thread are below:

    Counter         NumOptimizations  AvgValue
    -------------   ----------------  ----------
    optimizations   238612            1
    elapsed time    238612            0.1061065
    final cost      238612            36.2774130
    insert stmt     46092             1
    delete stmt     3942              1
    update stmt     28733             1
    tables          238612            3.0639951

    From this I would deduce that, on average, queries touch three tables with a cost of 36.

    I'd also be able to say that 19% of queries are INSERTs, 12% are UPDATEs, 2% are DELETEs, and 67% (the remainder) are SELECTs (the math is sketched below). I could compare this to another SQL server to get a very high-level indicator of usage?

    Is my logic valid, do you think? Don't be afraid to hurt my feelings! 🙂

  • Your reasoning is correct; however, without the perfmon context (like CPU %, RAM, etc.) you can't really make that call.

    I see where you'd be going with this in a consolidation scenario and I think it has value.

  • Thanks for that. The other SQL Server I am thinking of belongs to a sister company running similar tools and processes to our own. I'd like to see if these figures differ greatly (especially the total cost value) to gauge the average 'cost' of a query on each system.

    Our SQL Server has performance problems; theirs has no obvious concerns, so I'd like to demonstrate as many differences as I can. It will also serve as a periodic quick test of how well tuning is going, since the cost figure should go down if we dramatically improve index coverage, etc.

    Thanks again for your help on this... greatly appreciated!

    Chris

  • davas (9/14/2011)


    Thanks for that. The other SQL Server I am thinking of belongs to a sister company running similar tools and processes to our own. I'd like to see if these figures differ greatly (especially the total cost value) to gauge the average 'cost' of a query on each system.

    Our SQL Server has performance problems; theirs has no obvious concerns, so I'd like to demonstrate as many differences as I can. It will also serve as a periodic quick test of how well tuning is going, since the cost figure should go down if we dramatically improve index coverage, etc.

    Thanks again for your help on this... greatly appreciated!

    Chris

    I think this needs pointing out.

    With all the gathering you've done so far, you'd have had time to read and understand the article and set up the daily trace + consumption part.

    You'd be at the stage where you'd be tuning the top X queries instead of praying for a budget.

    Part of the DBA job should include DIY time for work you assign to yourself.
