June 19, 2012 at 7:39 am
I could but I won't =)
I'm not a big fan of exception-driven code/queries.
June 19, 2012 at 9:08 am
niclas-1082356 (6/19/2012)
I could but I won't =) I'm not a big fan of exception-driven code/queries.
Curious, care to explain why?
June 20, 2012 at 1:07 am
In my opinion, error handling shouldn't be done within a stored procedure/query/method. Catching all exceptions can suppress other important errors, which we would never know about if we just return some default value when a query raises an exception.
If you know that a query can fail, then you can/should check whether it will fail before executing it, rather than just catching exceptions.
June 20, 2012 at 2:02 am
niclas-1082356 (6/20/2012)
If you know that a query can fail, then you can/should check whether it will fail before executing it, rather than just catching exceptions.
Yes, but you should not rely on your check. For instance, if an application has enough information to tell that an insert will fail, it should not attempt it. However, simultaneous updates may happen to the database, causing your update to fail anyway. In my humble opinion, this must be handled properly, both by the application and in the stored proc or SQL executed.
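To illustrate the race described here, a minimal sketch (using the hypothetical sometable and @value parameters that appear later in this thread, and assuming a unique key on col1, col2): even a correct pre-check leaves a window in which another session can insert the same key.

```sql
-- Sketch only: sometable and its unique key on (col1, col2) are the
-- hypothetical example used elsewhere in this thread.
IF NOT EXISTS (SELECT 1 FROM sometable
               WHERE col1 = @value1 AND col2 = @value2)
BEGIN
    -- Window: another session can commit an insert of the same
    -- (col1, col2) pair right here, between the check and the insert.
    INSERT INTO sometable (col1, col2, CreationDate)
    VALUES (@value1, @value2, GETDATE());
    -- If that happens, this insert fails anyway, typically with
    -- error 2627 (PRIMARY KEY violation) or 2601 (unique index).
END
```

Without a transaction and a lock that spans both statements, the pre-check cannot guarantee the insert will succeed.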
June 20, 2012 at 3:13 am
Correct me if I'm wrong here, but in addition to forcing the isolation level, couldn't you just do something like this instead?
BEGIN TRY
    INSERT INTO sometable (col1, col2, CreationDate)
    VALUES (@value1, @value2, GETDATE());
    SELECT @@ROWCOUNT;
END TRY
BEGIN CATCH
    PRINT 'Sorry, you probably had a duplicate key failure. Check ERROR_NUMBER()';
    SELECT ERROR_NUMBER();
END CATCH
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
June 20, 2012 at 3:57 am
That is one way of doing it, yes, but there is really no need to attempt to insert a record if you can do a simple test to check whether it is guaranteed to fail. I do use TRY...CATCH blocks as well, but mostly to handle unexpected errors. "Expected" errors I try to handle more gracefully. In quite a few cases this can be done by checking whether the row already exists, either in the WHERE clause, with a join, or with a MERGE statement.
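As a sketch of the MERGE approach mentioned above (using the hypothetical sometable example from earlier in the thread), "insert if not already there" can be written as a single statement. The HOLDLOCK hint matters: MERGE alone does not protect against a concurrent insert of the same key between its check and its insert.

```sql
-- Sketch: sometable, col1/col2 and the @value parameters are the
-- hypothetical example used elsewhere in this thread.
MERGE sometable WITH (HOLDLOCK) AS target
USING (SELECT @value1 AS col1, @value2 AS col2) AS source
    ON  target.col1 = source.col1
    AND target.col2 = source.col2
WHEN NOT MATCHED THEN
    INSERT (col1, col2, CreationDate)
    VALUES (source.col1, source.col2, GETDATE());

SELECT @@ROWCOUNT; -- 1 if the row was inserted, 0 if it already existed
```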
June 20, 2012 at 4:09 am
okbangas (6/20/2012)
That is one way of doing it, yes, but there is really no need to attempt to insert a record if you can do a simple test to check whether it is guaranteed to fail. I do use TRY...CATCH blocks as well, but mostly to handle unexpected errors. "Expected" errors I try to handle more gracefully. In quite a few cases this can be done by checking whether the row already exists, either in the WHERE clause, with a join, or with a MERGE statement.
I pretty much agree with you on this point. I was just suggesting an alternative to see if someone would step up and say "no, no, don't do that silly thing because [insert whatever good practice it violates]."
June 21, 2012 at 12:16 am
Answering the question asked in the original post, this could be a misunderstanding of the transaction and locking mechanisms in SQL Server.
If you are going to try pre-checking an operation, you need to make sure the entire sequence is wrapped in a single transaction using the appropriate isolation level, or you still aren't protecting against possible concurrent modifications. You need to raise the isolation level all the way to SERIALIZABLE to prevent other concurrent processes from inserting or deleting rows that might conflict with your transaction. (Use appropriate table hints on the SELECT instead of SET TRANSACTION ISOLATION LEVEL if possible.) You can do this within a single T-SQL batch, or at the program level using ADO.NET.
One possible form of pre-checking:
BEGIN TRANSACTION;

IF NOT EXISTS (SELECT 1 FROM sometable WITH (SERIALIZABLE)
               WHERE col1 = @value1 AND col2 = @value2)
BEGIN
    INSERT INTO sometable (col1, col2, CreationDate)
    VALUES (@value1, @value2, GETDATE());
    -- SELECT @@ROWCOUNT
END

COMMIT TRANSACTION;
My philosophy for error checking is to check what matters, only add value, and try to keep things running smoothly and efficiently.
The pre-checking above is a whole lot of effort for SQL Server; if possible, we should avoid it.
Given this particular requirement is "insert if not already there," I tend to agree with Gail Shaw's original suggestion:
INSERT INTO sometable (col1, col2, CreationDate)
SELECT @value1, @value2, GETDATE()
WHERE NOT EXISTS (SELECT 1 FROM sometable WHERE col1 = @value1 AND col2 = @value2);
This avoids expensive error handling with very little cost. (SQL Server needed to load that index page anyway.)
If we actually need to insert a unique record, and the chances of a concurrent insert attempt were small, I would probably just do the INSERT ... VALUES and let SQL Server catch the error and throw it to the application. In that case, there isn't much I could do about it, and any error message and code I come up with isn't much better than SQL Server's (and harder to Google).
If I could actually do something useful like provide a better error message, provide additional information, or retry the operation (maybe with a new key), then I'd probably add some old school coding.
DECLARE @rowcount INT, @error INT;

INSERT INTO sometable (col1, col2, CreationDate)
SELECT @value1, @value2, GETDATE()
WHERE NOT EXISTS (SELECT 1 FROM sometable WHERE col1 = @value1 AND col2 = @value2);

SELECT @rowcount = @@ROWCOUNT, @error = @@ERROR;

IF @rowcount = 0 OR @error <> 0
BEGIN
    -- do whatever is appropriate
END;
This could be turned into a loop that retries until @rowcount = 1, if I could generate a new key (an alternative to IDENTITY columns).
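A minimal sketch of that retry loop, assuming col1 is the generated key and using NEWID() as a stand-in for whatever key generator the application actually uses (all object names here are the hypothetical ones from earlier in the thread):

```sql
-- Sketch: retry with a fresh key until the insert lands or we give up.
DECLARE @rowcount INT = 0,
        @attempts INT = 0,
        @key UNIQUEIDENTIFIER;

WHILE @rowcount = 0 AND @attempts < 5
BEGIN
    SET @attempts = @attempts + 1;
    SET @key = NEWID(); -- stand-in for the real key generator

    INSERT INTO sometable (col1, col2, CreationDate)
    SELECT @key, @value2, GETDATE()
    WHERE NOT EXISTS (SELECT 1 FROM sometable WHERE col1 = @key);

    SET @rowcount = @@ROWCOUNT;
END;

IF @rowcount = 0
    RAISERROR('Could not generate a unique key after 5 attempts', 16, 1);
```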
I'll trap or catch errors if that makes sense. It's expensive when the handler actually fires but cheap when it isn't tripped, so it's fine for rare errors, when the explicit post-checking is getting too big, or when I can consolidate common code in an error handler. It's also more reliable, since it's easy to forget or misplace explicit error checks.
If there were a possibility that I would do a lot of work before actually causing an error, or there were a lot of conflicting concurrent inserts affecting database performance, then I would switch to pre-checking the operation. But some parts would still use post-checking or error trapping. (If I'm locking rows and keys, that will influence the order of the checks; otherwise, providing better error messages or reducing the load on SQL Server guides me.)
David Lathrop
DBA
WA Dept of Health