May 23, 2011 at 10:10 pm
I have to test the performance of queries with a homemade shortcut for getting run-time information.
Here goes the sample:
DECLARE
    @total_clr_time BIGINT, @total_elapsed_time BIGINT, @total_logical_reads BIGINT,
    @total_logical_writes BIGINT, @total_physical_reads BIGINT, @total_worker_time BIGINT

SELECT  @total_clr_time       = SUM(ISNULL(S.total_clr_time, 0)),
        @total_elapsed_time   = SUM(ISNULL(S.total_elapsed_time, 0)),
        @total_logical_reads  = SUM(ISNULL(S.total_logical_reads, 0)),
        @total_logical_writes = SUM(ISNULL(S.total_logical_writes, 0)),
        @total_physical_reads = SUM(ISNULL(S.total_physical_reads, 0)),
        @total_worker_time    = SUM(ISNULL(S.total_worker_time, 0)) -- CPU
FROM sys.dm_exec_requests R
JOIN sys.dm_exec_query_stats S ON S.sql_handle = R.sql_handle
WHERE R.session_id = @@SPID

--- Test query area ---
SELECT name, *
FROM sys.all_columns
WHERE object_id = OBJECT_ID('sys.dm_exec_query_stats') AND name LIKE 'total_%'
ORDER BY 1
--- / Test query area ---

SELECT  SUM(ISNULL(S.total_clr_time, 0))       - @total_clr_time       AS clr_time,
        SUM(ISNULL(S.total_elapsed_time, 0))   - @total_elapsed_time   AS elapsed_time,
        SUM(ISNULL(S.total_logical_reads, 0))  - @total_logical_reads  AS logical_reads,
        SUM(ISNULL(S.total_logical_writes, 0)) - @total_logical_writes AS logical_writes,
        SUM(ISNULL(S.total_physical_reads, 0)) - @total_physical_reads AS physical_reads,
        SUM(ISNULL(S.total_worker_time, 0))    - @total_worker_time    AS cpu_time
FROM sys.dm_exec_requests R
JOIN sys.dm_exec_query_stats S ON S.sql_handle = R.sql_handle
WHERE R.session_id = @@SPID
Now for the series of questions.
• Segment 1
1. Is there any logical flaw in this whole mechanism (constraining the scope to just the sample query presented above)?
2. Keeping in mind all the factors around execution plan recompilation and DBCC DROPCLEANBUFFERS usage, I have some ambiguity regarding the following:
a. Creating an index on a problem column (with the index's suitability justified from every perspective) may in the average case be the only way to improve SELECT query performance, but while monitoring with the mechanism above, the total elapsed time actually increased in 90% of cases (even allowing for new execution plan factors etc.).
I need confirmation: may the elapsed time also be accumulating the time required to update information in the DMVs (especially DMV statistics related to the newly created index)?
b. I have tried to test the results of a query which contained a mixture of DDL statements too, e.g.
IF (SELECT OBJECT_ID('TEMPDB.DBO.#INCOME')) IS NOT NULL
    DROP TABLE #INCOME -- SSC has restricted this syntax
CREATE TABLE #INCOME (
    [contract_id] [int] PRIMARY KEY,
    ...
)
INSERT INTO #INCOME
In this scenario the results were also inconsistent!
Will execution/query plans ever be cached for such blends of queries? And will stored procedures with this problem pose a serious risk to performance all the time?
Does this also apply if we use a table variable rather than a temporary table? (In my case I have 50K+ records, so I am trying a temporary table; see the sketch below.)
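That is, something like this instead of the temp table (a hypothetical equivalent of the DDL above):
DECLARE @INCOME TABLE (
    [contract_id] [int] PRIMARY KEY
    -- ... remaining columns as in #INCOME
)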
• Segment 2
Queries which performed very well on SQL Server 2000, but after a technical upgrade to new super hardware 🙂 and SQL Server 2008, their performance is dead and equal to zero! I have such a query which was executing on the old system in 30+ minutes, but after the technical upgrade it was unable to produce a result in 4 days (24 x 4 = 96 hours). The query was inserting into a 2.5M+ record table, but the same table was also part of the SELECT statement, notably with the WITH (NOLOCK) hint, at some very deep level of the inner queries. 🙂
Do you agree that the stated scenario is one of the factors behind dead-performing queries after technical upgrades? The shape of the problem is sketched below.
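Roughly, the problem query had this shape (table and column names are made up for illustration):
INSERT INTO dbo.BigTable (col1, col2)
SELECT s.col1, s.col2
FROM dbo.Source s
WHERE s.some_key IN (
    SELECT b.some_key
    FROM dbo.BigTable b WITH (NOLOCK) -- reads the same table the INSERT targets
    WHERE b.some_flag = 1
)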
• Segment 3
Is there any good alternative to DBCC DROPCLEANBUFFERS?
And any other helpful queries are welcome 🙂
Thanks for your valuable time!
May 23, 2011 at 11:22 pm
Asalam-O-Elaikum Abrar,
Use the DTA for query optimization, and check the Display Estimated Execution Plan and Include Actual Execution Plan options; these will help you optimize the queries.
For Queries
=========
sys.sysprocesses
sys.dm_exec_sql_text()
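For example, a rough sketch combining the two, to see what each session is currently running:
SELECT p.spid, p.status, t.text
FROM sys.sysprocesses p
CROSS APPLY sys.dm_exec_sql_text(p.sql_handle) t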
For Disk Contention
==============
sys.dm_io_virtual_file_stats()
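For example (a sketch; NULL, NULL returns all databases and all files):
SELECT database_id, file_id, num_of_reads, num_of_writes, io_stall_read_ms, io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL)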
For CPU Contention
==============
SELECT scheduler_id, current_tasks_count, runnable_tasks_count
FROM sys.dm_os_schedulers
WHERE scheduler_id < 255
For Memory
========
SELECT * FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Page Life %'
Regards,
Syed Jahanzaib Bin Hassan
BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA
My Blog
www.aureus-salah.com
May 23, 2011 at 11:40 pm
Syed Jahanzaib Bin hassan (5/23/2011)
Use DTA for queries optimizations ...
If you use DTA, do not simply accept its recommendations. Test them, implement the ones that make a difference.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2011 at 1:29 am
GilaMonster (5/23/2011)
Syed Jahanzaib Bin hassan (5/23/2011)
Use DTA for queries optimizations ...
If you use DTA, do not simply accept its recommendations. Test them, implement the ones that make a difference.
Sorry, DTA?
Secondly, could you briefly comment on all the questions raised above?
Thank you!
May 24, 2011 at 2:43 am
DTA: Database Engine Tuning Advisor.
Sounds pretty strange to me.
May 24, 2011 at 4:13 am
Database Engine Tuning Advisor. It's not available in the Express editions, and it's also accessible from SQL Server Management Studio.
Regards,
Syed Jahanzaib Bin Hassan
BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA
My Blog
www.aureus-salah.com
May 24, 2011 at 4:37 am
No, don't use the DTA. Its recommendations are frequently a waste of time, and that's if it makes recommendations at all.
Your approach is OK as a mechanism to see what the query did. It will work well enough for single queries. But when you really want to expand out, you'll want to look at something like the RML Utilities (free and from Microsoft, what's not to like?). Those allow you to capture lots of queries and then drill down on the longest-running, most costly, or most frequently called query. A much better approach than using the DTA.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 24, 2011 at 4:52 am
Grant Fritchey (5/24/2011)
No, don't use the DTA. Its recommendations are frequently a waste of time, and that's if it makes recommendations at all. ...
But could you briefly comment on all the questions raised above? :crying:
Thank you!
May 24, 2011 at 5:08 am
1.1. Not per se. I prefer just using STATISTICS IO and STATISTICS TIME when working with a single query. It's easier than this setup and accurate enough.
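For example (a minimal sketch; substitute your own test query):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;
SELECT COUNT(*) FROM sys.all_columns; -- your test query here
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;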
1.2.a. If adding an index is slowing things down, then it's probably not a helpful index. Have you looked at the execution plan?
1.2.b. Mixing DDL into queries will have an impact, yes. What that impact will be depends on what you're doing. Yes, it will probably lead to recompiles, so you will get very inconsistent behavior. Moving around 50k records frequently, there's nothing to do to speed that up. Indexes, etc. are not magic. If you're slamming memory and the disk with lots and lots of data, then you're slamming memory and the disk. Best answer: reduce the data set.
2. It is possible to get what is called a regression. A query that ran well enough in 2000 but stinks in 2008. In every case I've investigated, the query was actually poorly structured in 2000 and could have been tuned there to run much, much better. Moving it to 2008 just exposed the original problematic nature even more.
3. Alternative to what? You want to flush the cache, that's how you do it. Restarting the machine will do it as well.
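For example (a sketch; run CHECKPOINT first so dirty pages are written out and can then be dropped):
CHECKPOINT;
DBCC DROPCLEANBUFFERS; -- drop clean pages from the buffer pool
DBCC FREEPROCCACHE;    -- optionally, clear cached plans as well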
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 24, 2011 at 5:57 am
The whole point about performance tuning is that it's not just a case of running a few DMV queries, and there's no magic solution - you only get to learn this sort of thing over time, with experience.
Avoid the tuning advisor; it's not going to help.
SQL versions - yup, every service pack, every version may alter the way any query runs; this is why you test, and usually migrate patches/service packs through environments before hitting production.
However, a query which ran well on Monday at 09:00 may well perform really badly on Tuesday at 12:00 and fine again 10 mins later!
There are big differences between SQL versions. I spent a lot of time helping clients resolve performance issues after upgrading from 2000 to 2005; as mentioned previously, many queries were just badly written.
I never truly understand this fixation with clearing cache(s). To explain: in production you'd hope that all your worst(!) heaviest procs/queries will run in cache, otherwise why put memory in your SQL Server? You'd save all that money and run with 4GB of RAM < grin > True, compilation time is different; true, you'll do physical instead of logical IO; but if your query requires, say, 230,000 page reads it'll still do that regardless of where it reads the pages. You'll use more CPU if you compile the proc and do physical IO each time.
There are various memory pools you can clear: the system cache, ad hoc plans, specific databases, etc. A couple of examples follow.
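For example (a sketch of a few of the documented options):
DBCC FREESYSTEMCACHE ('SQL Plans') -- ad hoc and prepared plans only
DBCC FREESYSTEMCACHE ('ALL')       -- all entries from all caches
DBCC FREEPROCCACHE                 -- the whole plan cache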
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
May 24, 2011 at 6:23 am
Grant Fritchey (5/24/2011)
1.1. Not per se. I prefer just using STATISTICS IO and STATISTICS TIME when working with a single query. ...
Thank you,
1.1. Great, agreed (on the use of STATISTICS IO and STATISTICS TIME).
1.2.a. Yes, the execution plan changed and was actually using that index too. But the total elapsed time didn't change as much as expected.
Could the total elapsed time (fetched from the mentioned shortcut for getting run-time information from the DMVs) have increased in some cases due to the extra I/O on index pages, or because the index information had to be updated in the DMVs first? So my point was that the DMVs may also suffer from some latency in recording new information?
1.2.b. Yes, agreed. I was just dropping/creating the temp table because it was used as a replacement for a subquery; that was actually the root cause of the issue marked as Segment 2 (SQL Server 2000 vs 2008).
2. Well, poorly structured = lots and lots of sub-queries :w00t:
And as stated in the problem, it was treated as such! The 92-hour query response dropped to just 13 minutes. (The query structure didn't actually change, but the one subquery which was accessing the same table the INSERT was targeting was switched to a temp table ... and all done! Though the query may have shown the same non-responsive behaviour on the SELECT alone, without the INSERT 😎 so further investigation is required ...... good.) The fix looked roughly like the sketch below.
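Roughly (same made-up names as the earlier sketch), the change was to materialise the self-referencing subquery first:
SELECT b.some_key
INTO #keys
FROM dbo.BigTable b
WHERE b.some_flag = 1

INSERT INTO dbo.BigTable (col1, col2)
SELECT s.col1, s.col2
FROM dbo.Source s
WHERE s.some_key IN (SELECT some_key FROM #keys)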
3. DBCC DROPCLEANBUFFERS cleans up all data buffers for that particular instance! What I actually need is the same functionality (to get an actual physical read count) for my test query only, so other people don't suffer a certain level of latency.
Thank you!
May 24, 2011 at 6:43 am
colin.Leversuch-Roberts (5/24/2011)
I never truly understand this fixation with clearing cache(s)
Me neither. I prefer to run queries more than once and discard the first results, so as to get something close to what normal operation would (theoretically) be.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2011 at 6:46 am
Abrar Ahmad_ (5/24/2011)
So my point was that the DMVs may also suffer from some latency in recording new information?
Not noticeably. The DMVs just expose internal memory structures, they're not real tables, and it's information that will be recorded for all queries.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 24, 2011 at 7:11 am
Abrar Ahmad_ (5/24/2011)
1.2.a. Yes, the execution plan changed and was actually using that index too. But the total elapsed time didn't change as much as expected. ...
Regarding your index: just because you see it referenced in the execution plan doesn't mean that the index was actually used well or was helpful to the query. You may have introduced a key lookup with the index, and that is now causing extra operations that weren't there before. It's hard to say without seeing the execution plan, but from the sounds of things, we're talking about something like that.
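For illustration (a hypothetical sketch; table and column names are made up): a narrow index can be chosen by the optimizer and still force a key lookup for every matching row:
CREATE INDEX IX_Orders_CustomerId ON dbo.Orders (customer_id)

-- This seeks IX_Orders_CustomerId, but each matching row then needs a
-- key lookup into the clustered index to fetch order_total.
SELECT order_id, order_total
FROM dbo.Orders
WHERE customer_id = 42

-- Covering the query removes the lookup:
CREATE INDEX IX_Orders_CustomerId_Covering
ON dbo.Orders (customer_id) INCLUDE (order_total)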
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 24, 2011 at 7:12 am
GilaMonster (5/24/2011)
colin.Leversuch-Roberts (5/24/2011)
I never truly understand this fixation with clearing cache(s)
Me neither. I prefer to run queries more than once and discard the first results, so as to get something close to what normal operation would (theoretically) be.
Having had really horrific procedures with compile-time problems, I used to sweat that when I saw the size of a query exceed a certain point, but that was only clearing the procedure cache, not everything.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning