June 5, 2007 at 9:04 am
Not sure if this is the correct group for this question.
I'm running SQL Server 2000, and the question has to do with transaction processing.
I have a series of inserts that will be applied against a group of tables, all in the same database on a local server. I begin the inserts with a 'begin transaction' and end the inserts with a 'commit transaction'.
If halfway through the inserts SQL aborts for some data reason (non-numeric data in a numeric field, too many characters for a field, etc.), what happens?
Is the system left with a 'begin transaction' that hasn't been cleared/closed?
Are the inserts that did work backed out?
Thanks for any and all help.
June 5, 2007 at 10:48 am
Hate to do this to you, but it depends on the error, and sometimes the context of the error. Funny you should ask this today, as it was part of the Question of the Day and while verifying my choice I found an excellent web site that explains this very concisely. Please see http://www.sommarskog.se/error-handling-I.html#whenwhichaction
Below is an extract from that site (I found this very informative, and learned a few things today, many thanks to Erland Sommarskog).
These are the four main possible actions SQL Server can take:
Statement-termination. The current statement is aborted and rolled back. Execution continues on the next statement. Any open transaction is not rolled back. @@error is set to the number of the error. Since the statement is rolled back, this means that if you run an UPDATE statement that affects 1000 rows, and for one row a CHECK constraint is violated, none of the rows will be updated. But if the UPDATE statement was part of a longer transaction, the effect of the preceding INSERT, UPDATE or DELETE statements are not affected. You need to issue a ROLLBACK TRANSACTION yourself to undo them.
Scope-abortion. The current scope (stored procedure, user-defined function, or block of loose SQL statements, including dynamic SQL) is aborted, and execution continues on the next statement in the calling scope. That is, if stored procedure A calls B and B runs into a scope-aborting error, execution continues in A, just after the call to B. @@error is set, but the aborted procedure does not have a return value, but the variable to receive the return value is unaffected. As for statement-termination, any outstanding transaction is not affected, not even if it was started by the aborted procedure.
Batch-abortion. The execution of the entire batch – that is, the block of SQL statements that the client submitted to SQL Server – is aborted. Any open transaction is rolled back. @@error is still set, so if you were to retrieve @@error first in the next batch, you would see a non-zero value. There is no way you can intercept batch-abortion in T-SQL code. (Almost. We will look at a possibility using linked servers later on.)
Connection-termination. The client is disconnected and any open transaction is rolled back. In this case there is no @@error to access.
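The difference between the first and third behaviors can be sketched in a short batch. This is an illustration only; the temp table and values are hypothetical, and the exact error numbers may vary by SQL Server version.

```sql
-- Hypothetical table for illustration
CREATE TABLE #t (id INT PRIMARY KEY, qty INT CHECK (qty >= 0))

BEGIN TRANSACTION
INSERT #t (id, qty) VALUES (1, 10)   -- succeeds
INSERT #t (id, qty) VALUES (2, -5)   -- CHECK violation: statement-terminated,
                                     -- execution continues, transaction stays open
PRINT @@ERROR                        -- non-zero; row 1 is still pending commit

INSERT #t (id, qty) VALUES (3, CONVERT(INT, 'abc'))
-- Conversion error: the whole batch is aborted and the open
-- transaction is rolled back. Nothing below this line runs.
COMMIT TRANSACTION
```

So with the poster's original scenario (bad data partway through), a conversion error would roll everything back on its own, while a constraint violation would leave the transaction open with the earlier inserts still pending.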
This table lists some common errors, and whether they abort the current statement or the entire batch.
| Error | Aborts |
| --- | --- |
| Duplicate primary key | Statement |
| NOT NULL violation | Statement |
| Violation of CHECK or FOREIGN KEY constraint | Statement |
| Most conversion errors, for instance conversion of a non-numeric string to a numeric value | Batch |
| Attempt to execute a non-existing stored procedure | Statement |
| Missing or superfluous parameter to a procedure with parameters | Statement |
| Superfluous parameter to a parameterless stored procedure | Batch |
| Exceeding the maximum nesting level of stored procedures, triggers and functions | Batch |
| Being selected as a deadlock victim | Batch |
| Permission denied to table or stored procedure | Statement |
| ROLLBACK or COMMIT without any active transaction | Statement |
| Mismatch in number of columns in INSERT-EXEC | Batch |
| Declaration of an existing cursor | Statement |
| Column mismatch between cursor declaration and FETCH statement | Statement |
| Running out of space for data file or transaction log | Batch |
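Because only some of these errors roll back the open transaction for you, the usual SQL 2000 pattern is to check @@error after every data-modifying statement and roll back explicitly. A minimal sketch (table, column, and variable names are hypothetical):

```sql
BEGIN TRANSACTION

INSERT Orders (OrderID, CustomerID) VALUES (@OrderID, @CustomerID)
IF @@ERROR <> 0 BEGIN
    -- Statement-terminating error: transaction is still open, undo it
    ROLLBACK TRANSACTION
    RETURN 1
END

INSERT OrderDetails (OrderID, ProductID) VALUES (@OrderID, @ProductID)
IF @@ERROR <> 0 BEGIN
    ROLLBACK TRANSACTION
    RETURN 1
END

COMMIT TRANSACTION
RETURN 0
```

Note that @@error is reset by every statement, so it must be tested (or saved into a variable) immediately after the statement it guards. Batch-aborting errors never reach the IF at all; they roll back the transaction themselves.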
June 5, 2007 at 10:52 am
Wow...beat me to it and beat the #%^% out of my answer. Nice explanation
June 5, 2007 at 10:58 am
Thanks Shawn, wish I could take full credit but all I did was "Find" the answer as it already existed elsewhere. Credit has to go to Erland Sommarskog who put in a lot of work and wrote a very helpful web article.
James.
June 5, 2007 at 12:33 pm
Thanks for the great information.
June 5, 2007 at 1:19 pm
Because of the reasons presented above, and because most SQL calls are in fact initiated by a client, we have implemented the transactions on the client side. So pretty much the client (let's say a .NET app) is responsible for beginning and committing (or rolling back) the transactions. The rest of the SQL is handled by a call to a stored procedure. Since the client library always returns the SQL error (even if you handle the error internally in the SQL code), there was no benefit in handling the errors on the SQL end. The SQL developer has to decide if the SP requires a transaction or not, but the .NET developer is responsible for handling it.
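On the server side, that division of labor can be made explicit: the procedure does no transaction management of its own, but refuses to run unless the caller has opened one. A sketch, with hypothetical procedure and table names:

```sql
CREATE PROCEDURE InsertOrderLine
    @OrderID INT, @ProductID INT
AS
    -- This procedure deliberately opens no transaction of its own;
    -- the calling application is expected to own one.
    IF @@TRANCOUNT = 0 BEGIN
        RAISERROR ('InsertOrderLine must be called inside a transaction.', 16, 1)
        RETURN 1
    END

    INSERT OrderDetails (OrderID, ProductID) VALUES (@OrderID, @ProductID)
    IF @@ERROR <> 0 RETURN 1   -- caller decides whether to roll back

    RETURN 0
```

The @@TRANCOUNT guard catches the case where a developer calls the procedure directly without the client-side transaction it depends on.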
---------------------------------------------
[font="Verdana"]Nothing is impossible.
It is just a matter of time and money.[/font]
June 5, 2007 at 2:01 pm
If you perform such an insert through an INSTEAD OF trigger on a view, it effectively becomes an automatic transaction.
If anything fails inside the trigger, everything rolls back: the whole insert.
No need to declare, commit or roll back transactions.
No chance to mess up other transactions.
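The idea above can be sketched as follows. The view and table names are hypothetical; the point is that both inserts run as part of the triggering statement, so a failure in either undoes the whole INSERT against the view.

```sql
-- Hypothetical view over two tables
CREATE VIEW OrderEntry AS
    SELECT o.OrderID, o.CustomerID, d.ProductID, d.Qty
    FROM Orders o JOIN OrderDetails d ON d.OrderID = o.OrderID
GO

CREATE TRIGGER trg_OrderEntry_Insert ON OrderEntry
INSTEAD OF INSERT
AS
    -- Both inserts execute within the transaction of the
    -- triggering statement: if either fails, the entire
    -- INSERT against the view is rolled back.
    INSERT Orders (OrderID, CustomerID)
        SELECT DISTINCT OrderID, CustomerID FROM inserted
    INSERT OrderDetails (OrderID, ProductID, Qty)
        SELECT OrderID, ProductID, Qty FROM inserted
GO
```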
_____________
Code for TallyGenerator
June 5, 2007 at 2:36 pm
Be careful not to generalize client-side transaction handling, because it will deteriorate the concurrency of the system due to longer transaction times (keeping transactions short is the advantage of doing it on the server). There are cases in which it makes sense to do it on the client, but there are others in which doing it on the server is what you want!
* Noel