October 4, 2006 at 10:37 am
Hello,
I have captured a perflog over a five-day interval, recorded every three seconds, to establish baseline performance for a new system. I am trying to do some basic analysis on the counters using aggregate functions like MIN, MAX and AVG, grouped by each hour of the day, and then daily averages. Unfortunately my basic knowledge of SQL is not getting me the data I want. Is this even possible, and if so, what would be the best way of doing it?
Here is the table structure:
primary_key, int
time_perf, datetime
total_processor, float
page_split_Sec, float
page_sec, float
cache_hit_ratio, float
perc_mem_use, float
transactions_per_sec, float
buffer_cache_hit, float
pg_file_use, float
Thanks
Graham
October 4, 2006 at 1:47 pm
Can you give a little more detail about what you want your result set to look like and which columns you want MIN, MAX, and AVG for? Also, if you've already written something that gets you part of what you're looking for, can you post that too?
October 4, 2006 at 2:37 pm
I have had a tough time working out how this could even be feasible, but ideally I would have a result set in the format day/month/year/hour, avg(counter), min(counter), max(counter). There would be a row for each hour that was captured in the perflog.
I have developed a very basic query which finds the average for each column over the entire table, with no WHERE clause or GROUP BY as of yet. I had been looking into using cursors as a solution for the time interval, but nothing hardcoded; is that possible? I have also seen some things about DATEDIFF; could I use that in a WHERE condition to help filter the averages?
Here is what I have done so far; it is very basic and not of much use right now.
select
avg(total_processor) as 'total processor avg', min(total_processor) as 'total processor min',
max(total_processor) as 'total processor max',
avg(page_split_Sec) as 'page_split_Sec avg',
min(page_split_Sec) as 'page_split_Sec min',
max(page_split_Sec) as 'page_split_Sec max',
avg(page_sec) as 'page_sec avg',
min(page_sec) as 'page_sec min',
max(page_sec) as 'page_sec max',
avg(cache_hit_ratio) as 'cache_hit_ratio avg',
min(cache_hit_ratio) as 'cache_hit_ratio min',
max(cache_hit_ratio) as 'cache_hit_ratio max',
avg(perc_mem_use) as 'perc_mem_use avg',
min(perc_mem_use) as 'perc_mem_use min',
max(perc_mem_use) as 'perc_mem_use max',
avg(buffer_cache_hit) as 'buffer_cache_hit avg',
min(buffer_cache_hit) as 'buffer_cache_hit min',
max(buffer_cache_hit) as 'buffer_cache_hit max',
avg(pg_file_use) as 'pg_file_use avg',
min(pg_file_use) as 'pg_file_use min',
max(pg_file_use) as 'pg_file_use max'
from pb_baseline
October 5, 2006 at 7:10 am
You did all the hard work... I think this is probably what you're looking for... the changes will group your output by date and hour...
select DATEADD(hh,DATEDIFF(hh,0,Time_Perf),0) as 'sample hour', --floors each timestamp to the top of its hour
avg(total_processor) as 'total processor avg',
min(total_processor) as 'total processor min',
max(total_processor) as 'total processor max',
avg(page_split_Sec) as 'page_split_Sec avg',
min(page_split_Sec) as 'page_split_Sec min',
max(page_split_Sec) as 'page_split_Sec max',
avg(page_sec) as 'page_sec avg',
min(page_sec) as 'page_sec min',
max(page_sec) as 'page_sec max',
avg(cache_hit_ratio) as 'cache_hit_ratio avg',
min(cache_hit_ratio) as 'cache_hit_ratio min',
max(cache_hit_ratio) as 'cache_hit_ratio max',
avg(perc_mem_use) as 'perc_mem_use avg',
min(perc_mem_use) as 'perc_mem_use min',
max(perc_mem_use) as 'perc_mem_use max',
avg(buffer_cache_hit) as 'buffer_cache_hit avg',
min(buffer_cache_hit) as 'buffer_cache_hit min',
max(buffer_cache_hit) as 'buffer_cache_hit max',
avg(pg_file_use) as 'pg_file_use avg',
min(pg_file_use) as 'pg_file_use min',
max(pg_file_use) as 'pg_file_use max'
from pb_baseline
GROUP BY DATEADD(hh,DATEDIFF(hh,0,Time_Perf),0)
--Jeff Moden
Change is inevitable... Change for the better is not.
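Graham's first post also asked for daily averages; the same DATEADD/DATEDIFF pattern should cover that by flooring each timestamp to midnight of its day instead of to the hour. A minimal sketch, assuming the same pb_baseline table and showing only the processor counter:
select DATEADD(dd,DATEDIFF(dd,0,Time_Perf),0) as 'sample day', --floors each timestamp to midnight of its day
avg(total_processor) as 'total processor avg',
min(total_processor) as 'total processor min',
max(total_processor) as 'total processor max'
from pb_baseline
GROUP BY DATEADD(dd,DATEDIFF(dd,0,Time_Perf),0)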
March 8, 2011 at 1:13 am
How about if I want data every 15 minutes instead of by an hour?
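One way to extend the expression above, sketched against the same pb_baseline table and not tested here: take the minute offset with DATEDIFF, divide by 15 using integer division, multiply back by 15, and group on that, so each timestamp is floored to its 15-minute boundary.
select DATEADD(mi,(DATEDIFF(mi,0,Time_Perf)/15)*15,0) as 'sample interval', --integer division floors to the 15-minute boundary
avg(total_processor) as 'total processor avg',
min(total_processor) as 'total processor min',
max(total_processor) as 'total processor max'
from pb_baseline
GROUP BY DATEADD(mi,(DATEDIFF(mi,0,Time_Perf)/15)*15,0)
Swapping the 15 for another number of minutes gives 5- or 30-minute buckets with the same trick.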