Table Lock Escalation

  • We've tried virtually everything to make our VFP app as quick and responsive as possible, but the constant database updates make the system very sluggish at times. Ours is a foreign exchange order system, so when the market experiences a lot of movement in the currency rates, that translates into table triggers firing left and right.

    Essentially, as the Currency Rate table experiences heavy updates from the rate feed process, the Rate table trigger fires for every single record update (easily five or more updates per second during a busy market). This trigger in turn needs to update the customer order table, which in turn fires its own triggers to recalculate columns in the order table.

    So for example, if the EUR/USD experiences say five updates per second, then the Rate trigger in turn needs to update the customer EUR/USD orders five times. If there are say 200 EUR/USD orders in that table, then we run an Update command on a subset of those 200 EUR/USD orders.

    I've added "set lock_timeout 3" and "SET DEADLOCK_PRIORITY LOW" in the Rate Trigger.

    I'm also going to test out "SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED" in the procedure which runs the Update command to see if this prevents locking.

    Does anyone share my experience with heavy table updates slowing down your system? Have you tried READ UNCOMMITTED in your procedures? Do I need a BEGIN TRAN and COMMIT statement around each and every UPDATE to make more efficient use of resources?
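    For reference, a sketch of how those settings combine with the kind of update described above. Note that each individual UPDATE statement in SQL Server already runs as its own implicit transaction, so wrapping a single UPDATE in BEGIN TRAN/COMMIT does not make it cheaper; explicit transactions only matter when several statements must commit or roll back together.

    ```sql
    -- Sketch only: the session settings above combined with the update described.
    set lock_timeout 3            -- give up waiting on a blocked lock after 3 ms
    SET DEADLOCK_PRIORITY LOW     -- volunteer this session as the deadlock victim

    -- A single UPDATE is atomic on its own; no BEGIN TRAN/COMMIT is needed here.
    update order_fx WITH (ROWLOCK)
       set act_code = 'FXO_CCY'
     where cur_1 = @cur_1 and cur_2 = @cur_2
       and cancelled = 0 and executed = 0 and expired = 0
       and frozen = 0 and tick_lnk1 = 0 and disabled = 0
    ```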

    Thank you in advance,

    Bob


  • To just follow up with some specific code, here's what our Rate Trigger basically does:

       update order_fx WITH (ROWLOCK) set act_code = 'FXO_CCY'
       where (cur_1 = @cur_1 and cur_2 = @cur_2 and cancelled = 0
              and executed = 0 and expired = 0
              and frozen = 0 and tick_lnk1 = 0 and disabled = 0)

    then the "Order_Fx" table trigger looks like this:

     declare @ref_pkey numeric(6)

     declare order_fx_trigger cursor for
     select pkey
     from inserted

     open order_fx_trigger
     fetch next from order_fx_trigger INTO @ref_pkey
     if @@FETCH_STATUS = 0
     begin
        while @@FETCH_STATUS = 0
        begin
           exec update_order_fx_calculated_data @ref_pkey
           fetch next from order_fx_trigger INTO @ref_pkey
        end
     end

     close order_fx_trigger
     deallocate order_fx_trigger

    Please keep in mind that the procedure "update_order_fx_calculated_data" performs an additional update on that record (identified by @ref_pkey).

    So, for example, if my initial Update above hits just 100 records, then the Order_fx trigger opens a cursor and loops through it 100 times to recalculate columns in each record.


  • To use a term from Jeff, this looks like RBAR (Row By Agonizing Row).  I would revisit your update process and look at rewriting it to use a set-based solution instead of a cursor.  I am not saying cursors are evil and should never be used, but their use should be carefully considered and only where appropriate.

    I would need a little more information regarding the tables involved (DDL) and the update procedure you run to provide you with more help.
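    For illustration only, a set-based version of the ORDER_FX trigger might look like the sketch below. The calculated-column expression (`calc_value` here) is a placeholder, since the real per-row logic lives in `update_order_fx_calculated_data` and hasn't been posted yet; the point is that one UPDATE joined to `inserted` replaces the whole cursor loop.

    ```sql
    -- Sketch only: replace the cursor with a single UPDATE joined to "inserted".
    -- "calc_value" is a hypothetical column standing in for whatever
    -- update_order_fx_calculated_data actually computes per row.
    ALTER trigger [tr_order_fx_update_calculated_data] on dbo.ORDER_FX
    After Update, Insert
    As
    SET NOCOUNT ON
    update o
       set o.calc_value = o.lst_rate   -- placeholder for the real expressions
      from dbo.ORDER_FX o
      join inserted i on i.pkey = o.pkey
    ```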

  • The trigger on the FXO_CCY rate table (abridged version) is:

    ALTER trigger [tr_fxo_ccy] on [fxo_ccy]
    After Update, Insert
    As
    set lock_timeout 3   --10 BM 11/29/06 --30000 -- waits 30 seconds for another transac. to release lock
    SET DEADLOCK_PRIORITY LOW
    ...
     declare fxo_ccy cursor for
     select pkey
     from inserted

     open fxo_ccy
     fetch next from fxo_ccy INTO @ref_pkey

       -- Just initialize some values ...
       select @cur_1      = cur_1,
              @cur_2      = cur_2,
              @warn_lev   = warn_lev,
              @warn_lev2  = warn_lev2,
              @warn_close = warn_close,
              @priority   = priority,
              @lst_time   = lst_time,
              @lst_rate   = lst_rate,
              @lst_bid    = lst_bid,
              @lst_ask    = lst_ask,
              @rate_high  = rate_high,
              @rate_low   = rate_low,
              @from_time  = from_time,
              @to_time    = to_time,
              @frez_date  = frez_date,
              @frez_time  = frez_time,
              @max_am     = max_am,
              @decimals   = decimals
       from fxo_ccy where pkey = @ref_pkey

    ... Use those values above to run the stored proc:

    exec fxo_ccy_update1 @cur_1, @cur_2, @lst_rate, @lst_bid, @lst_ask   -- PROC BELOW...

    =================

    ALTER procedure fxo_ccy_update1
    @cur_1 varchar(3),
    @cur_2 varchar(3),
    @lst_rate numeric(15,8),
    @lst_bid numeric(15,8),
    @lst_ask numeric(15,8)
    As
    SET NOCOUNT ON
    Declare @fxo_bidask int,
            @fxo_cltsde int,
            @num_rows numeric(3,0)

    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED -- 05/07/07

    update order_fx WITH (ROWLOCK) set act_code = 'FXO_CCY', update_date = getdate()
    where (cur_1 = @cur_1 and cur_2 = @cur_2 and cancelled = 0 and executed = 0 and expired = 0
           and frozen = 0 and tick_lnk1 = 0 and disabled = 0)

    ==================

    The main order table, ORDER_FX, contains 178 columns. I know that's not a good design, but that's what we're stuck with until the system is redeveloped.

    The Order_Fx table also has a trigger which fires after the above-mentioned proc "fxo_ccy_update1" updates order_fx. At this final level is where our cursor exists.

    Here are the triggers on the ORDER_FX table:

    --------------------

    ALTER trigger [tr_order_fx_update_calculated_data] on dbo.ORDER_FX
    After Update, Insert
    As
     declare @ref_pkey numeric(6)

     declare order_fx_trigger cursor for
     select pkey
     from inserted

     open order_fx_trigger
     fetch next from order_fx_trigger INTO @ref_pkey
     if @@FETCH_STATUS = 0
     begin
        while @@FETCH_STATUS = 0
        begin
           exec update_order_fx_calculated_data @ref_pkey   -- PROC TO RECALC THIS RECORD
           fetch next from order_fx_trigger INTO @ref_pkey
        end
     end

     close order_fx_trigger
     deallocate order_fx_trigger

    GO

    SET QUOTED_IDENTIFIER OFF

    GO

    SET ANSI_NULLS ON

    GO

    --------------

     

  • Looking at what you have posted, I am not sure where to start, so I will start at the beginning.  Can you provide us with a more general description of your current process?  Distill it to its basics and provide DDL for the tables and triggers that cover the basics.

    Example (with no details): An update (single or multiple rows) occurs in table 1, a trigger on table 1 updates table 2 where another trigger updates table 3.

    With a more generic example we may be able to more easily provide you with a more efficient means of completing your updates than a cursor-based process.  I feel confident that this is why your process is slow, and I am sure others out there would agree.

    Also, just looking at what you have posted, I see some extraneous code:

    if @@FETCH_STATUS = 0                                    -- These lines are extraneous
    begin                                                    -- These lines are extraneous
        while @@FETCH_STATUS = 0
        begin
            exec update_order_fx_calculated_data @ref_pkey   -- PROC TO RECALC THIS RECORD
            fetch next from order_fx_trigger INTO @ref_pkey
        end
    end                                                      -- These lines are extraneous
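    Stripped of those extraneous lines, the loop reduces to:

    ```sql
    -- The outer if/begin/end pair adds nothing: if the first FETCH fails,
    -- the WHILE condition is already false and the loop body never runs.
    fetch next from order_fx_trigger INTO @ref_pkey
    while @@FETCH_STATUS = 0
    begin
        exec update_order_fx_calculated_data @ref_pkey
        fetch next from order_fx_trigger INTO @ref_pkey
    end
    ```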

    Looking forward to seeing more!

  • We also need to see what the called function(s) are doing.  Without that info, it is hard to determine how to create the set-based solution that could be used in place of your current cursor-based solution.

  • A single-record update occurs on FXO_CCY, which triggers a multi-record update on ORDER_FX. The ORDER_FX trigger then fires and opens a cursor over the updated records. We loop through each record, recalculate multiple columns, then re-update the record in order_fx.

    FXO_CCY Trigger DDL:

       if @cur_1 = @base_cur and @cur_2 <> @base_cur        -- Update CURRENCY rec
          update CURRENCY set lst_time   = @lst_time,
                              lst_rate   = @lst_rate,
                              ref_rate   = @lst_rate,
                              lst_bid    = @lst_bid,
                              lst_ask    = @lst_ask,
                              rate_high  = @rate_high,
                              rate_low   = @rate_low
          where currency = @cur_2 and lst_rate <> @lst_rate

       if @cur_2 = @base_cur and @cur_1 <> @base_cur
          update CURRENCY set lst_time   = @lst_time,
                              lst_rate   = @lst_rate,
                              ref_rate   = @lst_rate,
                              lst_bid    = @lst_bid,
                              lst_ask    = @lst_ask,
                              rate_high  = @rate_high,
                              rate_low   = @rate_low
          where currency = @cur_1 and lst_rate <> @lst_rate

       if exists( select cur_1, cur_2 from ORDERFX1202..fxo_ccy where cur_1 = @cur_1 and cur_2 = @cur_2)
       begin
          update ORDERFX1202..FXO_CCY set        -- Update replicated FXO_CCY tbl on web DB
             cur_1      = @cur_1,
             cur_2      = @cur_2,
             warn_lev   = @warn_lev,
             warn_lev2  = @warn_lev2,
             warn_close = @warn_close,
             priority   = @priority,
             lst_time   = @lst_time,
             lst_rate   = @lst_rate,
             lst_bid    = @lst_bid,
             lst_ask    = @lst_ask,
             rate_high  = @rate_high,
             rate_low   = @rate_low,
             from_time  = @from_time,
             to_time    = @to_time,
             frez_date  = @frez_date,
             frez_time  = @frez_time,
             max_am     = @max_am,
             decimals   = @decimals
          where cur_1 = @cur_1 and cur_2 = @cur_2
            and lst_rate <> @lst_rate
            and lst_bid  <> @lst_bid
            and lst_ask  <> @lst_ask

          -- Update main ORDER_FX table
          update order_fx WITH (ROWLOCK) set act_code = 'FXO_CCY', update_date = getdate()
          where (cur_1 = @cur_1 and cur_2 = @cur_2 and cancelled = 0 and executed = 0 and expired = 0
                 and frozen = 0 and tick_lnk1 = 0 and disabled = 0)
       end

    ORDER_FX Trigger DDL :

       * I've already posted this code above. Shall I post the "update_order_fx_calculated_data" stored proc?

     

    Thank you again,

    Bob

     

  • I may be totally off base, but IMHO you really have only one option: ditch the complicated triggers. Your performance issues are (by your own admission) due to a complex cascade of changes kicked off by each update/insert.

    While this structure may be the most natural way of expressing the business issues in SQL, it's certainly not the only way. Think about re-architecting your application to do as little as possible during each insert/update and performing the more complex calculations later in the process.

    From a business perspective this may seem counter-intuitive, but you really don't need to update every customer order in real time - you only need to calculate those values at retrieval time. All the intermediate fluctuations really don't matter until the customer order record is accessed. Think about structuring a view that calculates values based on the most recent rates rather than storing (and constantly updating) values in tables. Yes, you'll eventually need to store a final value at, say, order confirmation time, but that's a different issue.

    Good luck, and that's my 2 cents... 😉

    Steve G.

  • Steve,

     I would love nothing better than to ditch the complex triggers, as you well put it. The main issue here is that the traders have an order view open on their desktop all day long. So as the rates are changing in real time, we are updating the order colors. The color changes allow the trader to view the closeness to the market (i.e. green orders are far away and red are very close).

    So as an order gets close to the market, the trader can recognize that on his view and act accordingly (i.e. execute the customer order). If an order then gets missed, we launch an audio alert on the trader PC.

    If I understand correctly, what you're suggesting is to move all the complex recalculation from the database level to the front end. Is that what you're suggesting? Naturally, as we refresh the order view every 3 seconds there is already some overhead as we do that. We would be adding to this overhead, of course.

    Please feel free to respond with your thoughts again.

    Thanks,

    Bob

     

  • Yes, I am suggesting that the more complex calculations be moved out of the triggers and into the retrieval process. Fundamental design rule: do as little as possible in time-sensitive processes.

    Yes, you're adding overhead to the retrieval, but consider the big picture. You'll be doing these calcs once every 3 seconds instead of 5-6 times a second. Even more to the point: doing things in the triggers means you have to update EVERYTHING. Calculating during the retrieval means you only do the calcs for what's actually being retrieved. This is most likely a MUCH smaller dataset. Again, there's the possibility of better performance because you're doing less work.

    Remember, your traders only see what you send them during that "every three second" refresh. If that refresh is built off the latest market data, it can look like you're updating orders (and you can continue with the color changing scheme), when really you're only calculating what's being retrieved.

    Do the calculations have to be in code in the front end? Not necessarily, though that may make the most sense. I'd recommend first looking at doing the calculations in a view that mocks up your existing table - your coders will love you for fixing things without them having to make major changes.
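    As a rough sketch of that idea (the column names `order_rate`, `closeness`, and `order_color` and the CASE thresholds are made up for illustration; the real rules would come from the existing calculation proc):

    ```sql
    -- Sketch only: compute the display values at retrieval time instead of
    -- storing and constantly re-updating them in ORDER_FX.
    create view v_order_fx_display
    as
    select o.pkey,
           o.cur_1, o.cur_2,
           abs(o.order_rate - c.lst_rate) as closeness      -- distance to market
           ,
           case                                             -- hypothetical thresholds
             when abs(o.order_rate - c.lst_rate) < 0.0010 then 'RED'    -- very close
             when abs(o.order_rate - c.lst_rate) < 0.0100 then 'AMBER'
             else 'GREEN'                                               -- far away
           end as order_color
      from order_fx o
      join fxo_ccy c on c.cur_1 = o.cur_1 and c.cur_2 = o.cur_2
     where o.cancelled = 0 and o.executed = 0 and o.expired = 0
    ```

    The front end would then select from the view on its 3-second refresh and get colors based on the latest rates, with no trigger cascade on every tick.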

    Steve G.

     

  • Interesting idea on the table view. I'm one of two coders on this system, actually, and still learning some of the more in-depth areas of sql 2k. Our desktop app for the traders is written in visual foxpro 8.0, by the way, and the website I created for our client's customers is in asp.net 2.0.

    Oh, by the way, one minor correction on your last comment. We do not update everything during our triggers, as we do use Where clauses to limit the update dataset.

    In terms of the table view, you're suggesting that I create a view on the ORDER_FX table. With that scenario my Select statement would probably have lots of CASE..WHEN clauses and perhaps some embedded function calls in order to recalculate the colors, etc. Is that along the lines of what you're thinking? There are lots of odd little calculations along the way that will make it hard to put all the calcs in the view, but it's worth experimenting with.

    Thank you again.

    Sincerely,

    Bob

     

  • That's the spirit! I'm pretty certain that the view to emulate what you have now would be quite tricky, though you may be able to replace some of the CASE ... WHEN clauses with functions. You may end up having to do some of the work in code if the calcs are tricky enough.

    Overall, I think you'll see a definite performance boost by separating the time-intensive (relatively speaking of course!) work from the complex work.

    Good Luck!

    Steve G.

  • Okay. Just one more question so I don't turn this into a year-long chat, thereby driving you crazy.

    Recoding this at this point is not realistic. We have some improved code on their QC box which we're moving to production very soon. My boss's question is whether you feel that adding an additional processor to their database server would result in increased performance.

    Thanks again.

    Bob

  • 'sokay - you can't drive me someplace that I already am!

    An additional processor could help. Don't know what you're using for disk storage, but upgrading to 15,000 rpm disks might also help. If you're using RAID 5, go back to one of the RAID 1 variants. Don't know if any of these will have a major impact, but every little bit helps!

    Steve G.
