Baseline historical performance data collection per object per point in time.

  • I may have published a similar post in the past; my apologies if I repeat myself. It could have been half a year ago and in a different context (plus, I have semi-retrograde amnesia caused by daily exposure to hazardous production data, so I can't really remember).

    My goal is to be able to accommodate the following scenario as a DBA:

    USER: 'the daily report called 'The Journey of 1000 Miles Starts with a Single Step' is hanging for 10 minutes on step 1 before returning any data, after I enter my search criteria and press the 'Go Get It' button'

    DBA: 'When did it happen last time (or when did you first notice it)?'

    USER: 'Yesterday afternoon at 14:21 EST'

    DBA: 'And why are you reporting this only today, 24 hours later?!'

    USER: 'I wasn't sure I wasn't hallucinating yesterday. Besides, I attended my weekly DA meeting that lasted all evening (Dataholics Anonymous)'

    DBA: 'So you are not sure the report was really taking 10 minutes; it may have been your delusion, right?'

    USER: 'True. Not sure. But I noticed today that this web page freezing behavior repeated several times as well'

    DBA: 'Was the same report always running smoothly in the past?'

    USER: 'Yes. Before yesterday it usually took 20 or 30 seconds, which was really quick for so many aggregations it's performing'

    DBA: 'Are you sure it was always running impeccably until yesterday?'

    USER: 'I am not quite sure... You see, I have been on these heavy psych meds for almost 3 months now; maybe it was already running slow, but 10 minutes may have looked like 10 seconds to me, due to severe side effects of these medications.'

    DBA (perplexed, quietly sympathetic): SIGHS... 'I will get back to you.'

      NEXT STEP:

    The reluctant DBA opens his own perf-baselining report (based on custom tables into which live historical perf metrics are saved for each DB object every 10 minutes, 24/7), picks from a dropdown the stored proc that supplies data to the report in question, selects DateTime1 from one datetime picker and DateTime2 from the other, and presses 'COMPARE PERF METRICS of prMyJourneyOf1000Miles at 14:21 (yesterday) and 11:00 (a month ago, or any other day to compare with)' to see the tendency or pattern of performance and execution-duration variations.

    The returned report shows the DBA all the most important runtime perf metrics for BOTH selected datetimes, for the same object; in this case, stored proc prMyJourneyOf1000Miles. The DBA sees that the user did not hallucinate the details of his complaint: the same SP took 10 seconds in the past and 10 minutes yesterday. The DBA does a few more spot comparisons of yesterday against a few random dates in the past, for the same object, to confirm his finding and to map out further investigation of the issue.
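    A minimal sketch of what such a two-point comparison could look like, assuming a hypothetical history table dbo.ProcStatsHistory (a matching collection sketch follows the next paragraph). The dates are placeholders, and the Total* columns are assumed to hold the DMV's cumulative microsecond counters:

        DECLARE @Proc sysname      = N'prMyJourneyOf1000Miles',
                @T1   datetime2(0) = '2025-05-01 11:00',   -- baseline point (placeholder)
                @T2   datetime2(0) = '2025-06-01 14:20';   -- complaint window (placeholder)

        -- For each requested point in time, pick the nearest snapshot at or
        -- before it, then show the per-execution averages side by side.
        SELECT t.PointInTime,
               s.CaptureTime,
               s.ExecutionCount,
               s.TotalElapsedTime  / NULLIF(s.ExecutionCount, 0) / 1000 AS AvgElapsedMs,
               s.TotalWorkerTime   / NULLIF(s.ExecutionCount, 0) / 1000 AS AvgCpuMs,
               s.TotalLogicalReads / NULLIF(s.ExecutionCount, 0)        AS AvgLogicalReads
        FROM (VALUES (@T1), (@T2)) AS t(PointInTime)
        CROSS APPLY (SELECT TOP (1)
                            h.CaptureTime, h.ExecutionCount,
                            h.TotalElapsedTime, h.TotalWorkerTime, h.TotalLogicalReads
                     FROM dbo.ProcStatsHistory AS h
                     WHERE h.ObjectName = @Proc
                       AND h.CaptureTime <= t.PointInTime
                     ORDER BY h.CaptureTime DESC) AS s;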

    Now, how do I set up automated polling/historical runtime data collection (for all SPs, or just the 100 most resource-hungry ones) every 10 minutes into my DBA_Perf_Utility custom database, so that I could run the above-described comparison scenario at any given time, for the same (selected) object, between ANY 2 points in time?
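    A rough sketch of what such a collector could look like (not a finished solution): snapshot sys.dm_exec_procedure_stats into the hypothetical dbo.ProcStatsHistory table from a SQL Server Agent job on a 10-minute schedule. Note that the DMV's counters are cumulative since the plan was cached and reset when the plan leaves the cache, so deltas between snapshots must tolerate resets:

        USE DBA_Perf_Utility;
        GO

        CREATE TABLE dbo.ProcStatsHistory
        (
            CaptureTime       datetime2(0) NOT NULL,
            DatabaseName      sysname      NOT NULL,
            ObjectName        sysname      NOT NULL,
            ExecutionCount    bigint       NOT NULL,
            TotalWorkerTime   bigint       NOT NULL,  -- CPU time, microseconds
            TotalElapsedTime  bigint       NOT NULL,  -- microseconds
            TotalLogicalReads bigint       NOT NULL,
            CONSTRAINT PK_ProcStatsHistory
                PRIMARY KEY (ObjectName, DatabaseName, CaptureTime)
        );
        GO

        -- Schedule this proc from a SQL Server Agent job every 10 minutes.
        CREATE PROCEDURE dbo.prCaptureProcStats
        AS
        BEGIN
            SET NOCOUNT ON;

            INSERT dbo.ProcStatsHistory
                (CaptureTime, DatabaseName, ObjectName,
                 ExecutionCount, TotalWorkerTime, TotalElapsedTime, TotalLogicalReads)
            SELECT TOP (100)                    -- the 100 most CPU-hungry procs
                   SYSDATETIME(),
                   DB_NAME(ps.database_id),
                   OBJECT_NAME(ps.object_id, ps.database_id),
                   ps.execution_count,
                   ps.total_worker_time,
                   ps.total_elapsed_time,
                   ps.total_logical_reads
            FROM sys.dm_exec_procedure_stats AS ps
            WHERE OBJECT_NAME(ps.object_id, ps.database_id) IS NOT NULL
            ORDER BY ps.total_worker_time DESC;
        END;
        GO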

    Does anyone know of any existing solution of this kind?  I really appreciate your input on this.


    MSSQL 2016 Enterprise.


  • If you are on 2016 or newer, could you use the built-in Query Store feature instead of trying to build your own query store?

    https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2016
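    A rough sketch of that approach (the database name is a placeholder; Query Store stores avg_duration in microseconds):

        -- Turn Query Store on, aggregating runtime stats per 10-minute interval
        ALTER DATABASE MyAppDb
        SET QUERY_STORE = ON
            (OPERATION_MODE = READ_WRITE, INTERVAL_LENGTH_MINUTES = 10);
        GO

        -- Average duration per interval for one procedure, over any date range
        SELECT i.start_time,
               rs.count_executions,
               rs.avg_duration / 1000.0 AS avg_duration_ms
        FROM sys.query_store_query AS q
        JOIN sys.query_store_plan AS p
             ON p.query_id = q.query_id
        JOIN sys.query_store_runtime_stats AS rs
             ON rs.plan_id = p.plan_id
        JOIN sys.query_store_runtime_stats_interval AS i
             ON i.runtime_stats_interval_id = rs.runtime_stats_interval_id
        WHERE q.object_id = OBJECT_ID(N'dbo.prMyJourneyOf1000Miles')
        ORDER BY i.start_time;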
