Update takes too long

  • Grant Fritchey (4/27/2015)


    TheSQLGuru (4/27/2015)


    Igor Micev (4/27/2015)


    You can use the NOLOCK hint for srno without worries because you're reading an identity value.

    Can you please explain that statement Igor??

    Yeah. I'm unclear on it too.

    Sorry I'm replying late. I actually wrote it from my phone and was trying to be short.

    I meant this

    declare @srno int;

    select @srno = srno
    from Entry_Jangad_Lot (NOLOCK)
    where Kapan = @Kapan
      and Lotno = @Lotno
      and Position = 'CURRENT';

    based on

    3) blocking by other activity (I go with this by default for a 60K row table without any other information)

    from The SQL guru's post.

    Igor Micev,My blog: www.igormicev.com

  • Igor Micev (4/28/2015)


    Grant Fritchey (4/27/2015)


    TheSQLGuru (4/27/2015)


    Igor Micev (4/27/2015)


    You can use the NOLOCK hint for srno without worries because you're reading an identity value.

    Can you please explain that statement Igor??

    Yeah. I'm unclear on it too.

    Sorry I'm replying late. I actually wrote it from my phone and was trying to be short.

    I meant this

    declare @srno int;

    select @srno = srno
    from Entry_Jangad_Lot (NOLOCK)
    where Kapan = @Kapan
      and Lotno = @Lotno
      and Position = 'CURRENT';

    based on

    3) blocking by other activity (I go with this by default for a 60K row table without any other information)

    from The SQL guru's post.

    If a select to acquire a single clustered primary key value from a table is blocked for an unacceptable time, you MUST take care of whatever is causing that from the other side of the coin (the thing DOING the blocking), not by NOLOCKing the SELECT for said single value. It could be that some unacceptable query is running far too long against the entire table (again, it is just 60K rows - but it could be some massive join taking forever?). It could also be that a nonclustered index on (some of) the WHERE clause components in the SELECT is needed, which could a) speed the query significantly by itself and b) avoid being blocked by other concurrent accesses on the table.
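
    As a sketch only (table and column names are taken from the SELECT quoted above; verify them against the real schema before creating anything), such an index might look like:

    -- Covering nonclustered index for the quoted SELECT: seek on the WHERE columns,
    -- with srno INCLUDEd so the lookup never has to touch the base table.
    CREATE NONCLUSTERED INDEX IX_Entry_Jangad_Lot_Kapan_Lotno_Position
        ON dbo.Entry_Jangad_Lot (Kapan, Lotno, Position)
        INCLUDE (srno);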

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The following "first responder" query will provide us with some execution statistics like execution time, blocked vs working time, page read count, write count, etc. for each statement contained in the specified stored procedure. We then have a few clues about what's going on.

    The columns returned are for the most recent execution stats, but you can also add total, min, and max for each indicator. This link provides detail on each column contained in sys.dm_exec_query_stats:

    https://msdn.microsoft.com/en-us/library/ms189741.aspx

    SELECT  OBJECT_NAME(qt.objectid) AS object_name,
            CASE WHEN qs.statement_start_offset IS NOT NULL
                 THEN SUBSTRING(CHAR(13) + SUBSTRING(qt.text,
                            (qs.statement_start_offset / 2) + 1,
                            ((CASE WHEN qs.statement_end_offset = -1
                                   THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
                                   ELSE qs.statement_end_offset
                              END - qs.statement_start_offset) / 2) + 1),
                        1, 8000)
                 ELSE qt.text
            END AS statement_text,
            qs.creation_time,
            qs.last_execution_time,
            qs.execution_count,
            -- last_elapsed_time and last_worker_time are reported in microseconds,
            -- so divide by 1000 before formatting them as hh:mm:ss.
            CONVERT(varchar, DATEADD(ms, qs.last_elapsed_time / 1000, GETDATE()) - GETDATE(), 108) AS last_elapsed_time,
            CONVERT(varchar, DATEADD(ms, qs.last_worker_time / 1000, GETDATE()) - GETDATE(), 108) AS last_worker_time,
            CONVERT(varchar, DATEADD(ms, (qs.last_elapsed_time - qs.last_worker_time) / 1000, GETDATE()) - GETDATE(), 108) AS last_blocked_time,
            qs.last_logical_reads,
            qs.last_physical_reads,
            qs.last_rows
    FROM    sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    WHERE   OBJECT_NAME(qt.objectid) = 'sp_add_update_lot_receive'
    ORDER BY (qs.last_elapsed_time - qs.last_worker_time) DESC;

    If you could provide the actual execution plan for the update statement, then that would be useful as well.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

    Right after adding the indexes you saw no improvement. Not too surprising, as the procedure was using the old execution plan that was cached when the procedure was last compiled. That plan did not have the index to use, so adding it would have little effect.

    Now that you are seeing improvement, you can't tell whether it was changing the procedure or the procedure now using the index that did it.
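
    If the stale plan really is the problem, one way to make sure the new index is considered right away, rather than waiting for the old plan to age out, is to mark the procedure for recompilation. A sketch (the procedure name is just the one mentioned earlier in the thread):

    -- Flag the procedure so a fresh plan is compiled on its next execution.
    EXEC sp_recompile N'dbo.sp_add_update_lot_receive';

    -- Or force a one-off recompile for a single call (parameter list omitted here):
    -- EXEC dbo.sp_add_update_lot_receive ... WITH RECOMPILE;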

    ATBCharles Kincaid

  • Charles Kincaid (4/30/2015)


    Right after adding the indexes you saw no improvement. Not too surprising, as the procedure was using the old execution plan that was cached when the procedure was last compiled. That plan did not have the index to use, so adding it would have little effect.

    Now that you are seeing improvement, you can't tell whether it was changing the procedure or the procedure now using the index that did it.

    Hmm. Wouldn't adding a meaningful index to a table cause "affected" plans to recompile? Bit tired at the moment so could be confuzzled ... :w00t:

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • As far as I know, we never got any execution plans for reference. We don't know how many pages are being read / written, how often this procedure is called, what percentage of time is work vs waiting, or how long is considered by end users to be too long.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • TheSQLGuru (4/30/2015)


    Hmm. Wouldn't adding a meaningful index to a table cause "affected" plans to recompile? Bit tired at the moment so could be confuzzled ... :w00t:

    That would be so nice if it were true. That would save all the headache folks have with this issue.

    Here is, as I suspect, why that can't happen. Let's say that you had a table that had no index at all and there were several thousand rows (it could happen). Next you build a stored procedure that selects one or more rows from the table based upon a parameter. The plan that is cached at that time uses a table scan, as there is no alternative.

    Now you add an index that supports the WHERE clause in your procedure well. Unfortunately, SQL Server does not try to guess which procedures could be improved by using the index. If you recompile the procedure, then SQL Server looks at the indexes that exist at that moment and caches a new plan based on them.

    In some cases SQL Server could guess a bit, as there are internal tables that indicate which objects are used by a procedure. Still, one could construct a SQL statement inside a procedure and have the procedure execute it, and that reference would not show up in that case. Further, a procedure can, and often does, call other stored procedures.
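
    To illustrate that last point with a sketch (the procedure and table names here are hypothetical, not from this thread): a reference that only exists inside a dynamic SQL string is invisible to dependency tracking until the string actually runs.

    -- Hypothetical example: dependency tracking records no reference to dbo.SomeTable,
    -- because the table name exists only inside a string executed at run time.
    CREATE PROCEDURE dbo.GetRowsByType
        @SomeType varchar(10)
    AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'SELECT * FROM dbo.SomeTable WHERE SomeType = @t;';
        EXEC sys.sp_executesql @sql, N'@t varchar(10)', @t = @SomeType;
    END;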

    ATBCharles Kincaid

  • Charles Kincaid (5/4/2015)


    TheSQLGuru (4/30/2015)


    Hmm. Wouldn't adding a meaningful index to a table cause "affected" plans to recompile? Bit tired at the moment so could be confuzzled ... :w00t:

    That would be so nice if it were true. That would save all the headache folks have with this issue.

    Here is, as I suspect, why that can't happen. Let's say that you had a table that had no index at all and there were several thousand rows (it could happen). Next you build a stored procedure that selects one or more rows from the table based upon a parameter. The plan that is cached at that time uses a table scan, as there is no alternative.

    Now you add an index that supports the WHERE clause in your procedure well. Unfortunately, SQL Server does not try to guess which procedures could be improved by using the index. If you recompile the procedure, then SQL Server looks at the indexes that exist at that moment and caches a new plan based on them.

    In some cases SQL Server could guess a bit, as there are internal tables that indicate which objects are used by a procedure. Still, one could construct a SQL statement inside a procedure and have the procedure execute it, and that reference would not show up in that case. Further, a procedure can, and often does, call other stored procedures.

    The best way to test this is to test it. I have seen queries in SSMS change performance as an index is created and dropped; why wouldn't the same thing happen for a stored procedure?

  • Stored procedures cache the execution plan at the time the procedure is compiled. On the other hand a query may get a new execution plan every time that it is run.

    This is one of the upsides of stored procedures: they do not have to go through the query optimizer on every run. For a complex procedure this can be quite a savings.

    The downside is that a better index might be missed due to the cached plan. To see what I'm talking about, see this MSDN article. In part it says:

    When a procedure is compiled for the first time or recompiled, the procedure’s query plan is optimized for the current state of the database and its objects. If a database undergoes significant changes to its data or structure, recompiling a procedure updates and optimizes the procedure’s query plan for those changes. This can improve the procedure’s processing performance.
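
    To see when the plan for a given procedure was last cached, and how often it has been reused since, a quick sketch against sys.dm_exec_procedure_stats (the procedure name here is just the one mentioned earlier in the thread):

    -- When was this procedure's plan cached, and how many times has it been reused?
    SELECT  OBJECT_NAME(ps.object_id) AS procedure_name,
            ps.cached_time,
            ps.last_execution_time,
            ps.execution_count
    FROM    sys.dm_exec_procedure_stats AS ps
    WHERE   ps.database_id = DB_ID()
      AND   OBJECT_NAME(ps.object_id) = 'sp_add_update_lot_receive';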

    ATBCharles Kincaid

  • Charles Kincaid (5/9/2015)


    Stored procedures cache the execution plan at the time the procedure is compiled. On the other hand a query may get a new execution plan every time that it is run.

    Queries are also stored in cache. It's just that for complex queries they will only get reused if the parameter values are the same. In cases of simple or forced parameterization, they're treated in exactly the same way as stored procedures. Also, prepared statements that use parameters are always treated the same way as stored procedures.
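
    For example (a generic sketch; the table and column are borrowed from earlier in the thread and the parameter type is a guess), a parameterized call through sp_executesql gets a single cached plan that is reused across parameter values, much like a procedure:

    -- Parameterized ad hoc statement: one plan is cached and reused for any @Kapan value.
    DECLARE @sql nvarchar(200) =
        N'SELECT srno FROM dbo.Entry_Jangad_Lot WHERE Kapan = @Kapan;';
    EXEC sys.sp_executesql @sql, N'@Kapan varchar(20)', @Kapan = 'ABC123';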

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I'm with Lynn... We need a test...

    Let's start with 1M rows of data...

    /* ========================================================
       Throw 1M rows of data into a table (no index to start with)
       ======================================================== */
    IF OBJECT_ID('dbo.Numbers') IS NOT NULL
        DROP TABLE dbo.Numbers;
    GO
    ;WITH n (n) AS (
        SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) n (n)
    ), Tally (n) AS (
        SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
        FROM n n1, n n2, n n3, n n4, n n5, n n6
    )
    SELECT
        t.n AS SomeNumber,
        CASE WHEN t.n % 2 = 0 THEN 'Even' ELSE 'Odd' END AS NumType
    INTO dbo.Numbers
    FROM Tally t;

    ... and a simple proc that would benefit from an index...

    SET ANSI_NULLS ON
    GO
    CREATE PROCEDURE dbo.NumbersTest
    /* ==============================================================================
       ============================================================================== */
    /*
        EXEC dbo.NumbersTest
    */
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT
            n.SomeNumber,
            n.NumType,
            ROW_NUMBER() OVER (PARTITION BY n.NumType ORDER BY n.SomeNumber) AS rn
        FROM dbo.Numbers n;
    END

    Now let's execute the proc without any indexes at all and grab some data from the actual execution plan...

    <StmtSimple StatementCompId="4" StatementEstRows="1000000" StatementId="1" StatementOptmLevel="FULL" CardinalityEstimationModelVersion="120" StatementSubTreeCost="56.4825"

    Now let's add an index that should help out the proc...

    CREATE NONCLUSTERED INDEX ix_Numbers_Type_Some ON dbo.Numbers (
        NumType,
        SomeNumber
    )
    WITH (DATA_COMPRESSION = PAGE, FILLFACTOR = 100);

    And rerun the proc with no additional changes and see if the actual plan changes...

    <StmtSimple StatementCompId="4" StatementEstRows="1000000" StatementId="1" StatementOptmLevel="TRIVIAL" CardinalityEstimationModelVersion="120" StatementSubTreeCost="2.55439"

    Those certainly look different to me...

    But to be safe, I'll drop the index and try it again...

    <StmtSimple StatementCompId="4" StatementEstRows="1000000" StatementId="1" StatementOptmLevel="FULL" CardinalityEstimationModelVersion="120" StatementSubTreeCost="56.4825"
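
    If you'd rather inspect what's sitting in cache without rerunning the proc, something like this (just a sketch) pulls the cached plan XML so the StatementOptmLevel and StatementSubTreeCost attributes can be compared directly:

    -- Pull the cached plan XML for the test proc so the <StmtSimple> attributes
    -- (StatementOptmLevel, StatementSubTreeCost) can be inspected on demand.
    SELECT  OBJECT_NAME(ps.object_id) AS procedure_name,
            qp.query_plan
    FROM    sys.dm_exec_procedure_stats AS ps
    CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
    WHERE   OBJECT_NAME(ps.object_id) = 'NumbersTest';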

    Hope this Helps... 🙂

  • I love it. A newbie to the forum with just 7 points and already posting million row test tables.

    Welcome aboard, Jason!

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (5/12/2015)


    I love it. A newbie to the forum with just 7 points and already posting million row test tables.

    Welcome aboard, Jason!

    We really should setup a file share so folks can post a complete .VHD image of their server. :rolleyes:

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Jeff Moden (5/12/2015)


    I love it. A newbie to the forum with just 7 points and already posting million row test tables.

    Welcome aboard, Jason!

    Thank you sir! I've been a long time reader around here. I figured that I should try to contribute a little.

    BTW... I'm a huge fan of your articles!

  • Good one Jason!

    What version of SQL Server are you working with? Looks like even BOL can be wrong. Like I always say, "It could happen." 🙂

    I had thought, since I have been looking into a lot of DMVs recently, that SQL Server could know that indexes were added or deleted and that procedures reference objects. It should, in many cases, be able to guess that a procedure needs to be recompiled because a dependent object has been modified. If, however, you built SQL statements and called execute, then those would not show up in the references. But wait! That plan could not have been cached either, so the new index should be used.
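
    Those tracked references can be checked directly; here's a sketch using sys.dm_sql_referenced_entities against the test procedure from Jason's post (a table referenced only through dynamic SQL would not appear here):

    -- Objects the procedure references, as SQL Server tracks them.
    SELECT  referenced_schema_name,
            referenced_entity_name,
            referenced_minor_name
    FROM    sys.dm_sql_referenced_entities(N'dbo.NumbersTest', N'OBJECT');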

    Have we been bitten by the dogma? (Bad dogma. Bad. Bad dogma.) Ah well, it happens to me too. It could happen.

    ATBCharles Kincaid
