May 23, 2007 at 8:00 am
The SQL 2000 proc below is called by our client's VB rate feed process. When the market is busy, it gets called at least five times per second and updates the FXO_CCY table. This slows down our desktop app and frustrates the users.
The FXO_CCY table also has a trigger, which fires off and updates our main orders table.
My main question is whether the BEGIN TRAN and COMMIT TRAN add any overhead, and whether removing them would buy me any speed.
Speed is crucial here, as our desktop app is suffering under the heavy table updates. I can't redesign right now, so any little tweak might make a big difference.
CREATE PROC pUpdateFxo_Ccy
    @cur_1 varchar(3),
    @cur_2 varchar(3),
    @bid varchar(24),
    @ask varchar(24),
    @rate varchar(24)
as
begin
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    BEGIN TRAN
    UPDATE FXO_CCY SET
        UPDATE_DATE = getdate(),
        LST_BID = convert(numeric(15,8), @bid),
        LST_ASK = convert(numeric(15,8), @ask),
        LST_RATE = convert(numeric(15,8), @rate),
        LST_TIME = convert(varchar, getdate(), 108)
    WHERE CUR_1 = @cur_1
      AND CUR_2 = @cur_2
    COMMIT TRAN
end
Thank you in advance for your response.
Regards,
Bob
May 23, 2007 at 8:40 am
The way you have things written, it's a very minor overhead. You are only running one statement.
If things are slow, be sure that you have good indexes on this table. I'd ask what your current clustered key on this table is. Also, can you do the converts first, or do them in the app instead of on the DB server? It's not much overhead, but it can help.
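If the callers can eventually be changed, passing the prices as numerics removes the converts from the server entirely. A rough sketch of what I mean (the proc name is just for illustration, and numeric(15,8) is assumed from the converts in your original proc):

CREATE PROC pUpdateFxo_Ccy_Typed
    @cur_1 varchar(3),
    @cur_2 varchar(3),
    @bid numeric(15,8),
    @ask numeric(15,8),
    @rate numeric(15,8)
as
    -- a single statement runs in its own transaction anyway,
    -- so no explicit BEGIN TRAN / COMMIT TRAN is needed
    UPDATE FXO_CCY SET
        UPDATE_DATE = getdate(),
        LST_BID = @bid,
        LST_ASK = @ask,
        LST_RATE = @rate,
        LST_TIME = convert(varchar(8), getdate(), 108)
    WHERE CUR_1 = @cur_1
      AND CUR_2 = @cur_2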
May 23, 2007 at 8:51 am
I have a non-clustered index on CUR_1 + CUR_2 on that Fxo_Ccy currency table, and the table contains only about 60 records.
I suppose I could do the Convert() in the app, but it's not realistic right now with everything that's happening at the client.
Thanks for your response.
Bob
May 24, 2007 at 7:32 am
The BEGIN TRAN / COMMIT TRAN is not much overhead on its own, but your triggers run inside that transaction, so it may be more than you think. Since you are not doing any error handling or rolling the transaction back, you probably do not need it. I cannot be completely sure without seeing the trigger code, but it is very likely you can remove it. It is only a single statement, and the triggers are automatically wrapped in a transaction with it, so you are probably safe unless you are doing some kind of odd error handling within the triggers themselves.
The isolation level you have set is also interesting, since this procedure does not actually read any data, yet you are asking for dirty reads. I assume that also has something to do with the triggers, but if you have a lot of concurrent transactions running it could be a concern. I always question NOLOCK and READ UNCOMMITTED when they were written by a developer who does not really understand their consequences. I see them a lot from application developers who think they simply mean there will be no blocking.
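For what it is worth, you can see exactly what READ UNCOMMITTED buys (and costs) with two query windows. A sketch, with made-up currency values:

-- Window 1: update inside an open transaction and do NOT commit yet
BEGIN TRAN
UPDATE FXO_CCY SET LST_BID = 999.0 WHERE CUR_1 = 'EUR' AND CUR_2 = 'USD'

-- Window 2: this does not block, and it returns the uncommitted 999.0
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT LST_BID FROM FXO_CCY WHERE CUR_1 = 'EUR' AND CUR_2 = 'USD'

-- Window 1: roll back, and window 2 has read a value that never officially existed
ROLLBACK TRAN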
As far as other performance improvements go, you can probably help with the indexing on the table, remembering that every time you update a field you may move rows from one page to another in an index, which can cause page splits and the like. However, this is a simple statement and is probably not your performance issue. You may want to post the triggers so someone here can help you speed them up, as they are more likely to be the problem.
May 24, 2007 at 9:35 am
Michael,
Thank you for your response.
The READ UNCOMMITTED is probably not needed at that level, since I already set READ UNCOMMITTED in the other procedures. I added the dirty reads in various places because I don't want my SELECT statements to wait for other updates to finish. I've tested this, and I know my SELECT statements will return uncommitted data when an update is occurring elsewhere in the system (which is fine for the client, because speed is crucial).
When you say "transaction blocking", are you also referring to lock escalations, or is that something different?
We are in fact NOT doing any odd error handling at the stored procedure level, so the BEGIN TRAN / COMMIT can probably be removed as you suggested.
Thanks,
Bob
May 24, 2007 at 9:53 am
Here's the full trigger code on table FXO_CCY. Let me point out that the table name is FXO_CCY and that the cursor defined below was (for some reason) also given the same name, 'FXO_CCY'.
If you look at the first Select statement, shouldn't the coder have selected from the Inserted table instead of pulling directly from FXO_CCY again? Wouldn't that be faster?
ALTER trigger [tr_fxo_ccy] on dbo.FXO_CCY
After Update, Insert
As
set lock_timeout 3
SET DEADLOCK_PRIORITY LOW
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
Declare @cur_1 varchar(3),
@cur_2 varchar(3),
@warn_lev numeric(11,6),
@warn_lev2 numeric(11,6),
@warn_close numeric(11,6),
@priority varchar(1),
@lst_time varchar(5),
@lst_rate numeric(15,8),
@lst_bid numeric(15,8),
@lst_ask numeric(15,8),
@rate_high numeric(15,8),
@rate_low numeric(15,8),
@from_time varchar(5),
@to_time varchar(5),
@frez_date datetime,
@frez_time varchar(5),
@max_am numeric(16,0),
@decimals int,
@ref_pkey numeric(6),
@update_it int,
@ins_rate numeric(15,8),
@ins_bid numeric(15,8),
@ins_ask numeric(15,8),
@base_cur varchar(3),
@lst_feed varchar(8)
declare fxo_ccy cursor for
select pkey
from inserted
open fxo_ccy
fetch next from fxo_ccy INTO @ref_pkey
if @@FETCH_STATUS = 0
begin
while @@FETCH_STATUS = 0
begin
select @cur_1 = cur_1,
@cur_2 = cur_2,
@warn_lev = warn_lev,
@warn_lev2 = warn_lev2,
@warn_close = warn_close,
@priority = priority,
@lst_time = lst_time,
@lst_rate = lst_rate,
@lst_bid = lst_bid ,
@lst_ask = lst_ask ,
@rate_high = rate_high,
@rate_low = rate_low ,
@from_time = from_time,
@to_time = to_time ,
@frez_date = frez_date,
@frez_time = frez_time,
@max_am = max_am,
@decimals = decimals
from fxo_ccy where pkey = @ref_pkey
select @ins_rate = lst_rate,
@ins_bid = lst_bid ,
@ins_ask = lst_ask
from deleted where pkey = @ref_pkey
select @base_cur = base_cur from monarque
if @lst_rate > @rate_high
set @rate_high = @lst_rate
if @lst_rate < @rate_low
set @rate_low = @lst_rate
set @lst_feed = convert(varchar(8),getdate(), 8)
if @cur_1 = @base_cur and @cur_2 <> @base_cur
update CURRENCY set lst_time = @lst_time,
lst_rate = @lst_rate,
ref_rate = @lst_rate,
lst_bid = @lst_bid ,
lst_ask = @lst_ask ,
rate_high = @rate_high,
rate_low = @rate_low
where currency = @cur_2 and lst_rate <> @lst_rate
if @cur_2 = @base_cur and @cur_1 <> @base_cur
update CURRENCY set lst_time = @lst_time,
lst_rate = @lst_rate,
ref_rate = @lst_rate,
lst_bid = @lst_bid ,
lst_ask = @lst_ask ,
rate_high = @rate_high,
rate_low = @rate_low
where currency = @cur_1 and lst_rate <> @lst_rate
exec fxo_ccy_update_monarque @cur_1, @cur_2, @lst_rate ,@lst_bid , @lst_ask
exec fxo_ccy_update_web @cur_1, @cur_2 --,@lst_rate ,@lst_bid , @lst_ask
fetch next from fxo_ccy INTO @ref_pkey
end
end
close fxo_ccy
deallocate fxo_ccy
RETURN
--------
Thank you again.
Bob
May 24, 2007 at 10:06 am
I would love to spend the time to rewrite this for you, but without your database, testing it would be pretty tricky. I will say that this trigger is probably a performance issue. The initial select is probably not too bad, since your primary key is indexed, so that lookup should be fast. However, since the loop is over the inserted table anyway, it would be quicker to have the cursor select all of the fields you need and skip that first select entirely.
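By that I mean something along these lines, showing just the declaration and fetch (untested, with the column names taken from your post):

declare fxo_ccy cursor for
    select pkey, cur_1, cur_2, warn_lev, warn_lev2, warn_close, priority,
           lst_time, lst_rate, lst_bid, lst_ask, rate_high, rate_low,
           from_time, to_time, frez_date, frez_time, max_am, decimals
    from inserted
open fxo_ccy
fetch next from fxo_ccy into
    @ref_pkey, @cur_1, @cur_2, @warn_lev, @warn_lev2, @warn_close,
    @priority, @lst_time, @lst_rate, @lst_bid, @lst_ask, @rate_high,
    @rate_low, @from_time, @to_time, @frez_date, @frez_time,
    @max_am, @decimals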
The first real issue is that it opens a cursor and loops through it. This serializes every update made to your table and gives up set-based operation entirely. If there is one thing SQL Server is undeniably inefficient at, it is loops.
Then you have a couple of IF statements, and logical IFs are not too speedy either. This could all have been done with set-based updates and inserts, which would be much, much faster.
Finally, at the end, there is a call to another stored procedure that could also be inefficient. You may have another set of performance issues in there.
It is going to take you some time, but you should go through and replace the cursor with set-based operations. If you are lucky, there will be someone else on here with a bit more time that will do some of it for you.
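To give you the general shape: the cursor and the first select disappear, and each IF branch becomes one update joined to the inserted table. A rough, untested sketch of one branch, using only the column names from your post:

update c
set lst_time = i.lst_time,
    lst_rate = i.lst_rate,
    ref_rate = i.lst_rate,
    lst_bid  = i.lst_bid,
    lst_ask  = i.lst_ask
    -- rate_high and rate_low would use CASE expressions here to replace
    -- the IF @lst_rate > @rate_high / IF @lst_rate < @rate_low logic
from CURRENCY c
join inserted i on c.currency = i.cur_2
where i.cur_1 = (select base_cur from monarque)
  and i.cur_2 <> (select base_cur from monarque)
  and c.lst_rate <> i.lst_rate

The two EXEC calls at the end are the part that does not translate directly; those procedures would need set-based versions as well.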
Good luck
May 25, 2007 at 2:37 am
Posting some DDL, sample input data in the form of insert statements, and matching expected results will move any investigation along much faster.
I totally agree... get rid of the cursor! You're measuring sugar with tweezers instead of weighing the bag.
May 25, 2007 at 9:25 am
Now I would like to confirm one thing before I remove the cursor and select from INSERTED:
When my single-row updates hit the FXO_CCY table, remember that they come in at 4 or 5 updates per second. So every time the trigger fires, I need to be sure there is ONE row in the Inserted table for that update.
In other words, for every single-row update to Fxo_Ccy, the Inserted table should contain exactly ONE record each time the trigger fires within its own transaction.
Can someone confirm that this is absolutely correct?
Thank you again,
Bob
p.s. Andrew, nice analogy above...
May 25, 2007 at 9:44 am
The Inserted table will have one row per updated record (or one per inserted record if doing an insert). If you only insert or update a single row, your Inserted table will always only have one row.
Although ignoring that would make the coding easier, it would be wise to handle multiple records when you change to set-based operations. Some day you or someone else will want to update several records at once, and the triggers should handle that correctly. It is more work now, but it will probably be a lot more work later if someone does not realize what has been done.
If you do ignore this (don't worry, I won't feel bad if you do), add the little bit of overhead to raise an error when there is more than one row in the inserted table, and you will at least have less to worry about.
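Something like this at the top of the trigger would do it (a sketch; the message text is just an example):

if (select count(*) from inserted) > 1
begin
    -- this trigger was written for single-row changes only
    raiserror('tr_fxo_ccy: multi-row updates are not supported.', 16, 1)
    rollback tran
    return
end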
May 25, 2007 at 12:32 pm
Yes, Michael, I do realize what you mean about multi-row updates. I actually dealt with something like that a while back, and you folks on this forum helped me out quite nicely.
For now at least, the foreign exchange rates are coming one at a time. I will confirm that, of course, and also think about your suggestion to handle multiple updates in the future.
Thanks again,
Bob
June 8, 2007 at 8:28 am
Hello Michael,
Regarding your previous response:
"Then, you have a couple of IF statements and logical IF's are not too speedy either. This could have all been done with set-based updates and inserts that would be much, much faster."
I would like to follow up on rewriting the following update WITHOUT the IF statement:
-- PREVIOUS :
select @base_cur = base_cur from monarque
if @cur_1 = @base_cur and @cur_2 <> @base_cur
update CURRENCY set lst_time = @lst_time,
lst_rate = @lst_rate,
ref_rate = @lst_rate,
lst_bid = @lst_bid ,
lst_ask = @lst_ask ,
rate_high = @rate_high,
rate_low = @rate_low
where currency = @cur_2 and lst_rate <> @lst_rate
-- NEW :
select @base_cur = base_cur from monarque
update c
set lst_time = i.lst_time,
    lst_rate = i.lst_rate,
    ref_rate = i.lst_rate,
    lst_bid  = i.lst_bid,
    lst_ask  = i.lst_ask,
    rate_high = i.rate_high,
    rate_low  = i.rate_low
from CURRENCY c
join inserted i on c.currency = i.cur_2
where i.cur_1 = @base_cur
  and i.cur_2 <> @base_cur
  and c.lst_rate <> i.lst_rate
So do you think that removing the IF statement and joining to the Inserted table directly will make this update more efficient? If so, perhaps I should rewrite every update that follows this design the same way.
In any case, I will definitely go ahead and remove that first SELECT ... FROM FXO_CCY since those fields are already in the Inserted table.
Thank you,
Bob