Interview Question: SQL

  • Hello

    Can anyone tell me how to do performance tuning of dynamic T-SQL?

    I didn't understand it... please help me.

    Thanks

    P

  • I appreciate the honesty. There are a few things you'll want to look into to understand optimizing Dynamic SQL in particular.

    First, you want to explore the plan cache. Then, you'll need to understand the difference between parameterized and non-parameterized plan re-use. Next, you'll probably want to look into the sp_executesql command, both to work with the cache and to help reduce SQL injection. Finally, you'll need to understand execution plans, which are important to both static and dynamic SQL.
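
    As a minimal sketch of the parameterized versus non-parameterized difference (dbo.Orders and its CustomerID column are made-up names, purely for illustration):

    -- Hypothetical table dbo.Orders, just to illustrate the two forms
    DECLARE @CustomerID INT = 42;
    DECLARE @sql NVARCHAR(MAX);

    -- Non-parameterized: a separate plan is cached for every literal value,
    -- and splicing values into the string invites SQL injection
    SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerID = ' + CAST(@CustomerID AS NVARCHAR(10));
    EXEC (@sql);

    -- Parameterized with sp_executesql: one plan is cached and reused for any
    -- @CustomerID value, and the value is never spliced into the string
    SET @sql = N'SELECT * FROM dbo.Orders WHERE CustomerID = @CustomerID';
    EXEC sys.sp_executesql @sql, N'@CustomerID INT', @CustomerID = @CustomerID;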

    This is a BIG topic to understand for the two or three sentences that will actually answer the question that they're looking for. Prepare a good lunch, grab a bottle of soda, and dig in. This is important knowledge to know for the long term.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • 😀

    Thanks for your reply.

    P

  • You've basically asked how to make any car go faster.

    No end really on that topic either!

  • patla4u (9/13/2011)


    Hello

    Can anyone tell me how to do performance tuning of dynamic T-SQL?

    To be quite honest, much the same way as you tune any T-SQL. Identify the poorly performing code, identify the cause of the slowdown, and fix it. There's nothing special about dynamic SQL that changes the way you tune it.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • We can do this with Profiler, or else tune it manually, step by step.

  • Step 1:

    Run the query in Management Studio and view the actual execution plan.

    To view the execution plan, press Ctrl+M and then execute the query in SSMS.
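
    If you also want the I/O and timing figures alongside the plan, a quick sketch (the SELECT here is just a stand-in for the query you are tuning):

    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;
    SELECT * FROM Person.Address WHERE City = N'Seattle';  -- replace with the query under investigation
    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;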

    Step 2:

    Check whether there are any table scans, clustered index scans, or index scans in the execution plan. If so, analyze those tables thoroughly in the plan.

    Step 3:

    Identify the actual number of rows in the tables where a scan is involved. If the table is somewhat large, i.e. greater than 2000 rows, I would suggest you check whether there are proper indexes on the table. If the table has fewer than 2000 records, a table scan wouldn't be a problem and I would rather prefer a table scan on those tables.
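
    A quick way to check the row count and the indexes already on a table (a sketch, using the Person.Address example from the fragmentation query below):

    SELECT i.name AS index_name, i.type_desc, p.rows
    FROM sys.indexes AS i
    JOIN sys.partitions AS p
        ON p.object_id = i.object_id AND p.index_id = i.index_id
    WHERE i.object_id = OBJECT_ID('Person.Address');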

    Step 4:

    If there is already an index, you have to analyze why the optimizer preferred a clustered index scan or an index scan rather than a seek. The reason may be fragmentation, outdated statistics, low selectivity, or the query cost.

    Step 5:

    The following query gives the exact percentage of fragmentation in the indexes of a particular table; here it displays the fragmentation status of Person.Address in the AdventureWorks database.

    SELECT CAST(DB_NAME(database_id) AS VARCHAR(20)) AS [Database Name],
           CAST(OBJECT_NAME(object_id) AS VARCHAR(20)) AS [Table Name],
           index_id, index_type_desc, avg_fragmentation_in_percent, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), OBJECT_ID('Person.Address'), NULL, NULL, 'Detailed');

    If avg_fragmentation_in_percent is > 40%, rebuild the index (using the ALTER INDEX ... REBUILD command) to eliminate the fragmentation. It's recommended to have an index rebuild job scheduled to run weekly for all tables. Please NOTE that rebuilding an index is an expensive operation; ensure it is done only during off-production hours.
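
    For example (using the same table; whether you rebuild ALL indexes or a single named one is up to you):

    -- Heavy fragmentation: rebuild
    ALTER INDEX ALL ON Person.Address REBUILD;
    -- Moderate fragmentation (roughly 10-40%): reorganize is the lighter option
    ALTER INDEX ALL ON Person.Address REORGANIZE;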

    Step 6:

    If the indexes are fine, then check the statistics. Sometimes the index will be fine but the query will still be slow because the optimizer cannot choose the correct indexes due to outdated statistics. The following query gives the time when the statistics for an index were last updated.

    SELECT name AS Stats_Name, STATS_DATE(object_id, stats_id) AS Statistics_Update_Date
    FROM sys.stats
    WHERE object_id = OBJECT_ID('Person.Address');

    The statistics should be updated weekly, daily, or on alternate days depending on the frequency of modifications to the table: the more frequently the table is modified, the more frequently the statistics should be updated. For high-transaction tables you can schedule a job to update the statistics on a regular basis.

    Please NOTE that rebuilding the index will automatically update the statistics as well. Hence avoid updating the statistics if you are rebuilding the index.
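
    A minimal sketch of a manual statistics update on the same table (FULLSCAN is optional; the default is sampled):

    UPDATE STATISTICS Person.Address;                 -- sampled update
    UPDATE STATISTICS Person.Address WITH FULLSCAN;   -- full scan, more accurate but more expensive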

    Step 7:

    If you see any key lookups happening in the execution plan, make use of Included columns to create a covering Nonclustered index to avoid expensive lookup operation. This will help in improving the query performance as the logical reads would be reduced to a great extent.
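
    A hedged sketch of such a covering index (the key and included columns here are illustrative; in practice take them from the seek predicate and the lookup's output list):

    CREATE NONCLUSTERED INDEX IX_Address_City_Covering
        ON Person.Address (City)
        INCLUDE (AddressLine1, PostalCode);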

    Step 8:

    Ensure that each table has a clustered index, preferably on the primary key columns (by default the primary key is clustered unless you explicitly specify NONCLUSTERED) or on an identity column. The clustered index should always be defined on unique-valued columns such as primary keys or identity columns.
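
    For instance, a sketch with a made-up table where the primary key becomes the clustered index by default:

    CREATE TABLE dbo.Customer
    (
        CustomerID INT IDENTITY(1,1) NOT NULL,
        Name       NVARCHAR(100)     NOT NULL,
        CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
    );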

    Step 9:

    If you have a composite index, ensure that the most selective column (the one with the most unique values) is the leading column in the index.

    Step 10:

    If you can't tune the query further, or if you are clueless, try the Database Engine Tuning Advisor (DTA). Provide the SQL query as the input file and run the DTA; it will provide a list of recommendations to reduce the query cost.

    Please do NOT blindly implement the suggestions. Doing so would certainly improve the query performance, but you would end up creating numerous indexes that are difficult to maintain during maintenance operations. You have to take the call on the indexes suggested by the DTA: check whether each index will be used in most cases, or whether you can rewrite the query to make use of the existing indexes.

    Step 11:

    While tuning stored procedures, you need to ensure that their query plans are being cached. The following query provides the caching information for stored procedures.

    SELECT usecounts, cacheobjtype, objtype, [text]
    FROM sys.dm_exec_cached_plans P
    CROSS APPLY sys.dm_exec_sql_text(plan_handle) S
    WHERE cacheobjtype = 'Compiled Plan' AND objtype = 'Proc'
      AND [text] NOT LIKE '%dm_exec_cached_plans%'
      AND S.dbid = DB_ID('dbname');

    The value of usecounts will increase every time you run the same stored procedure. If there is a problem with caching, check whether any SET options are being changed, as most of them cause a recompile of the query plan. Also, the plan will be flushed every time you run DBCC FREEPROCCACHE or DBCC FLUSHPROCINDB. Never use either of them in a production environment, as they remove the cached plans for all procedures, which will then have to be recompiled the next time they are run.

    If you suspect there might be a problem with the query plan, you can try the WITH RECOMPILE option, which recompiles the particular stored procedure every time it runs, and see how the performance is.

    CREATE PROC Test
    WITH RECOMPILE
    AS
        Statement 1
        Statement 2

    Step 12:

    Finally, if all the above checks are fine and the query still can't be tuned, try to rewrite it. In a few cases you know a query needs rewriting as soon as you see it, such as the ones below:

    1. Creating a view with TOP 100 PERCENT just to allow an ORDER BY clause in the view definition. The view will not be sorted unless you explicitly sort it when querying it:

    Select * from view order by column1 -- result will be sorted

    Select * from view -- result will NOT be sorted even though there is an ORDER BY clause in the view definition

    Thus there is extra cost involved in the sort caused by the ORDER BY clause in the view definition even though the result is NOT sorted. Hence avoid ORDER BY in the view definition and instead sort in the outer query: Select * from view order by column1.

    2. Using correlated subqueries will cause RBAR (Row By Agonizing Row) and will affect performance.

    3. Avoid using scalar functions in SELECT statements and instead use inline or table-valued functions. Since a scalar function behaves like a cursor, avoid referencing it in the SELECT list (see the sketch below).
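
    As a rough sketch of that last point (all object names are made up): a scalar function is invoked row by row, while an inline table-valued function can be expanded into the main plan:

    -- Scalar function: executed once per row of the outer SELECT
    CREATE FUNCTION dbo.fn_OrderTotal (@OrderID INT)
    RETURNS MONEY
    AS
    BEGIN
        RETURN (SELECT SUM(LineTotal) FROM dbo.OrderLines WHERE OrderID = @OrderID);
    END;
    GO
    SELECT OrderID, dbo.fn_OrderTotal(OrderID) AS OrderTotal
    FROM dbo.Orders;
    GO

    -- Inline table-valued alternative: the optimizer folds it into the query plan
    CREATE FUNCTION dbo.itvf_OrderTotal (@OrderID INT)
    RETURNS TABLE
    AS
    RETURN (SELECT SUM(LineTotal) AS OrderTotal FROM dbo.OrderLines WHERE OrderID = @OrderID);
    GO
    SELECT o.OrderID, t.OrderTotal
    FROM dbo.Orders AS o
    CROSS APPLY dbo.itvf_OrderTotal(o.OrderID) AS t;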

  • Read my book. It's down there in my signature.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thank you so much

    P

    🙂

  • Just a thought, but could this question be in reference to QEP caching and dynamic SQL?

  • SQLDBA ARJUN (9/14/2011)


    Identify the actual number of rows in the tables where a scan is involved. If the table is somewhat large, i.e. greater than 2000 rows, I would suggest you check whether there are proper indexes on the table. If the table has fewer than 2000 records, a table scan wouldn't be a problem and I would rather prefer a table scan on those tables.

    That's dangerous to say. I've added indexes to tables way under 2000 rows and seen massive performance improvements. If you want a threshold, use page count, not row count.
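
    A sketch of checking by page count instead (assuming the AdventureWorks example used earlier in the thread):

    SELECT OBJECT_NAME(object_id) AS table_name, index_id, used_page_count, row_count
    FROM sys.dm_db_partition_stats
    WHERE object_id = OBJECT_ID('Person.Address');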

    Step 4:

    If there is already an index, you have to analyze why the optimizer preferred a clustered index scan or an index scan rather than a seek. The reason may be fragmentation, outdated statistics, low selectivity, or the query cost.

    Fragmentation will never cause the optimiser to ignore an index; the optimiser doesn't even consider fragmentation (that's a storage engine problem).

    If you see any key lookups happening in the execution plan, make use of Included columns to create a covering Nonclustered index to avoid expensive lookup operation. This will help in improving the query performance as the logical reads would be reduced to a great extent.

    Depends how many rows are involved. A key lookup on 5 or 10 rows is not a problem.

    Ensure that each table has a clustered index, preferably on the primary key columns (by default the primary key is clustered unless you explicitly specify NONCLUSTERED) or on an identity column. The clustered index should always be defined on unique-valued columns such as primary keys or identity columns.

    While it is recommended that the cluster be unique, there can be good and valid reasons to put it on a column that is not the primary key and not unique at all.

    If you have a composite index, ensure that the most selective column (the one with the most unique values) is the leading column in the index.

    Garbage. Index column ordering should be based on the queries. Defining indexes with unique columns first is worthless if no queries need to filter on those.

    http://sqlinthewild.co.za/index.php/2009/01/19/index-columns-selectivity-and-equality-predicates/
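
    A small sketch of that point (table and column names invented for illustration): if the workload filters only on Country, the index has to lead on Country, however unselective it is; leading on a highly selective column nobody filters on buys you nothing.

    -- Typical query: SELECT ... FROM dbo.Customers WHERE Country = 'ZA';
    CREATE NONCLUSTERED INDEX IX_Customers_Country ON dbo.Customers (Country);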

    Using correlated subqueries will cause RBAR (Row By Agonizing Row) and will affect performance.

    Correlated subqueries do not run once per row of the outer query (there are two exceptions when they do).

    p.s. Copying someone else's work and presenting it as your own is plagiarism and is both unethical and unprofessional. If you quote someone's work, say so and reference the original site.

    http://sql-articles.com/articles/performance-tunning/query-tuning-steps/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
