Suggestions for a very long transaction

  • Hi Gail

    Why won't fragmentation slow down inserts?

    Suppose I have a clustered index and my indexes are heavily fragmented. That means my indexes are taking up more pages. Won't it then take SQL Server more time to identify where a new record should be inserted?

    For point 2, I think the user experience will be worse if it times out rather than waiting 40 seconds.

    Although I do agree that 40 secs is also too long...

    GilaMonster (4/20/2012)


    S_Kumar_S (4/20/2012)


    1. Defrag indexes more frequently after checking how much they are fragmented. Currently they are rebuilt weekly.

    Fragmentation won't slow down inserts, so this will likely do nothing

    2. Increase CommandTimeout to 40s from the default 30s.

    If I had to wait 40 seconds for a website to process a purchase, I'd find another place to buy from.
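
    For reference, this is the sort of check I mean for point 1 - a sketch using the sys.dm_db_index_physical_stats DMV, with YourDatabase as a placeholder name and 30% as an arbitrary threshold:

    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name AS index_name,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID('YourDatabase'), NULL, NULL, NULL, 'LIMITED') ips
    JOIN sys.indexes i
      ON i.object_id = ips.object_id AND i.index_id = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;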

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • S_Kumar_S (4/20/2012)


    Paul

    We'll see if changing that to, say, 3 days helps. Of course, we'll first check how bad the fragmentation actually is.

    It's not going to help inserts. Fragmentation only slows down large range scans from disk. Emphasis 'large', 'scan' and 'disk'. An insert is none of those.

    Why will increasing the timeout cause more problems? This big transaction is being blocked by some other process, hence the timeout. If we make it wait another 10 seconds, there is a chance that the blocking from the other process clears and my transaction completes. At worst it may time out again, but the chances of it completing will increase, am I right?

    Yes, but it will also hold any locks it has for 10 seconds longer, potentially causing worse blocking of other processes that might have completed fine.

    Increasing the timeout is a way of ignoring a problem, not fixing it.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Well, then is it like hitting a wall? I can't make the transaction smaller. I can't completely remove the blocking; although I can try to minimise it, there is no guarantee that I can bring it down to the level I want.

    Inserts happen in the 12 most frequently used tables, so there is always a good chance that some other process is causing blocking on these tables.

    Looks like the other option is to analyse the blocking in depth and see what can be done to reduce it...

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • Is this a new problem, one that occurs occasionally, or one that happens with every transaction?

    :exclamation: "Be brave. Take risks. Nothing can substitute experience." :exclamation:

  • Wait, what?

    We've given you suggestions:

    Check for blocking, see what's causing the blocking, see if it can be minimised.

    Trace the queries, see where the problem is

    Post the code and the indexes please.

    Also consider one of the version-based isolation levels (assuming the problem is blocking).

    There isn't a magic solution to blocking; it requires troubleshooting (and that does not mean randomly guessing at solutions) and targeted fixes.
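
    For the version-based option, a minimal sketch (YourBookingDB is a placeholder name, and you should test the tempdb/version-store impact before flipping this in production):

    ALTER DATABASE YourBookingDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;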

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • No need for any deep analysis so far; just look at what's going on with the process while you are inserting, and that should tell you who/what is blocking you (or not). You can do that quickly with Activity Monitor (and then check the DMVs for more detail).
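
    A minimal sketch of that DMV check, run while the insert is waiting (it uses only standard sys.dm_exec_requests columns):

    SELECT r.session_id,
           r.blocking_session_id,
           r.wait_type,
           r.wait_time,
           st.text AS running_sql
    FROM sys.dm_exec_requests r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) st
    WHERE r.blocking_session_id <> 0;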

    If your insert is being blocked by a similar insert from another booking then you have to review your indexes and find out why it's taking a bigger lock than necessary.

    If your insert is being blocked by some other business process then you'll have to check with them if there is no other way to do it.

    If it's being blocked by some SELECT for reporting purposes, then you can start thinking about offloading reporting to a mirror/replica or a copy of the DB.

    Finally, you could also try to reduce the amount of work a single insert has to do to fewer than 12 tables (that seems massive for a simple ticket booking), or use staging tables (so client inserts are instant and everything is processed in batches, if your business logic allows it), or we can probably find many other ways to do what you need to do 🙂

  • Thanks so much for your suggestions... I'll check for blocking.

    This is a 'once a day' kind of problem. But I am sure checking the blocking will also throw light on how blocking happens in general for successful bookings....

    Oliiii (4/20/2012)


    No need for any deep analysis so far; just look at what's going on with the process while you are inserting, and that should tell you who/what is blocking you (or not). You can do that quickly with Activity Monitor (and then check the DMVs for more detail).

    If your insert is being blocked by a similar insert from another booking then you have to review your indexes and find out why it's taking a bigger lock than necessary.

    If your insert is being blocked by some other business process then you'll have to check with them if there is no other way to do it.

    If it's being blocked by some SELECT for reporting purposes, then you can start thinking about offloading reporting to a mirror/replica or a copy of the DB.

    Finally, you could also try to reduce the amount of work a single insert has to do to fewer than 12 tables (that seems massive for a simple ticket booking), or use staging tables (so client inserts are instant and everything is processed in batches, if your business logic allows it), or we can probably find many other ways to do what you need to do 🙂

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • Sorry if I sounded as though I haven't had any help. I certainly got a whole lot of suggestions that wouldn't have come to my mind.

    GilaMonster (4/20/2012)


    Wait, what?

    We've given you suggestions:

    Check for blocking, see what's causing the blocking, see if it can be minimised.

    Trace the queries, see where the problem is

    Post the code and the indexes please.

    Also consider one of the version-based isolation levels (assuming the problem is blocking).

    There isn't a magic solution to blocking; it requires troubleshooting (and that does not mean randomly guessing at solutions) and targeted fixes.

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • Help and an exchange of ideas is what SSC Forums are great for. Best of luck with your issue.

    :exclamation: "Be brave. Take risks. Nothing can substitute experience." :exclamation:

  • If it's a once-a-day thing and it's at the same time every day then chances are there is something else running at that time causing blocking.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Unfortunately the time of day is not fixed. It just happens at any time: sometimes 2-3 times a day, sometimes not at all.

    GilaMonster (4/20/2012)


    If it's a once-a-day thing and it's at the same time every day then chances are there is something else running at that time causing blocking.

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • If you've got 12 tables to INSERT into within the transaction, you can try reducing the number of INSERT statements to 6 using the OUTPUT clause.

    CREATE TABLE #T1 (num INT, val FLOAT);
    CREATE TABLE #T2 (num INT, val FLOAT);

    SET STATISTICS TIME ON;

    -- Your approach - one transaction but 12 tables/12 inserts (2 shown)
    BEGIN TRAN T1;
    INSERT INTO #T1 VALUES (1, 10);
    INSERT INTO #T2 VALUES (1, 10);
    COMMIT TRAN T1;

    -- New approach - one transaction with 6 INSERTs (1 shown)
    BEGIN TRAN T2;
    INSERT INTO #T1 (num, val)
    OUTPUT INSERTED.num, INSERTED.val
    INTO #T2
    VALUES (2, 10);
    COMMIT TRAN T2;

    SELECT * FROM #T1;
    SELECT * FROM #T2;

    DROP TABLE #T1;
    DROP TABLE #T2;

    Might help or might not. You'd need to try it.

    I can't demonstrate this with a temporary table, but you might also try the WITH (ROWLOCK) hint on the target table of the INSERT, in case escalation to a page lock is causing a performance issue.
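
    On a permanent table the hint would sit on the INSERT target like this - dbo.Bookings and its columns are invented names, purely to show the placement:

    INSERT INTO dbo.Bookings WITH (ROWLOCK) (BookingID, CustomerID, Amount)
    VALUES (@BookingID, @CustomerID, @Amount);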


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.

  • Hi

    I don't really understand how the OUTPUT clause will actually help. The schemas of all 12 of my tables are different, so how can I use it there?

    Thanks

    dwain.c (4/23/2012)


    If you've got 12 tables to INSERT into within the transaction, you can try reducing the number of INSERT statements to 6 using the OUTPUT clause.

    CREATE TABLE #T1 (num INT, val FLOAT);
    CREATE TABLE #T2 (num INT, val FLOAT);

    SET STATISTICS TIME ON;

    -- Your approach - one transaction but 12 tables/12 inserts (2 shown)
    BEGIN TRAN T1;
    INSERT INTO #T1 VALUES (1, 10);
    INSERT INTO #T2 VALUES (1, 10);
    COMMIT TRAN T1;

    -- New approach - one transaction with 6 INSERTs (1 shown)
    BEGIN TRAN T2;
    INSERT INTO #T1 (num, val)
    OUTPUT INSERTED.num, INSERTED.val
    INTO #T2
    VALUES (2, 10);
    COMMIT TRAN T2;

    SELECT * FROM #T1;
    SELECT * FROM #T2;

    DROP TABLE #T1;
    DROP TABLE #T2;

    Might help or might not. You'd need to try it.

    I can't demonstrate this with a temporary table, but you might also try the WITH (ROWLOCK) hint on the target table of the INSERT, in case escalation to a page lock is causing a performance issue.

    Online Trainer For SQL DBA and Developer @RedBushTechnologies with 18 yrs exp.

  • The schemas don't need to be the same, but presumably there is some similarity in the columns you're inserting into each table?

    You can apply all kinds of transformations to the OUTPUT clause's INSERTED.* columns, and you can create columns on the fly (e.g., from a literal or a local variable).
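
    For example (table and column names invented purely for illustration), the OUTPUT list can mix INSERTED.* columns with literals and variables to match a second table's different schema:

    DECLARE @Channel varchar(10) = 'WEB';

    INSERT INTO dbo.BookingHeader (BookingID, Amount)
    OUTPUT INSERTED.BookingID, @Channel, GETDATE()
    INTO dbo.BookingAudit (BookingID, Channel, CreatedAt)
    VALUES (101, 250.00);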

    To be more specific, we'd need DDL and some sample data.


    My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

    My thought question: Have you ever been told that your query runs too fast?

    My advice:
    INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
    The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.

    Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
    Since random numbers are too important to be left to chance, let's generate some!
    Learn to understand recursive CTEs by example.

Viewing 14 posts - 16 through 28 (of 28 total)
