February 2, 2006 at 1:34 pm
We have a stored procedure (sp1) that calls another stored procedure (sp2). sp2 contains an insert statement followed by standard error checking, i.e. "if @@error <> 0 ... rollback". However, when a fatal error occurs in the insert statement, sp2 crashes and never reaches the error-checking portion. I've read that it is almost impossible to trap a fatal error. But I'm just wondering if there is a way to record the fatal error in the database or in a server-level log.
Thanks,
--Allan
February 2, 2006 at 3:52 pm
I don't think you can catch fatal errors (high severity levels) using @@ERROR, etc... Actually, even the new try/catch blocks in 2005 don't let you catch above severity 10 (see the article on the front page of SSC)...
February 2, 2006 at 7:01 pm
You can try to use the return value to let sp1 know there was a fatal error in sp2:

CREATE PROCEDURE sp2
AS
    IF OK          -- pseudocode: whatever success check applies
        RETURN 1
    ELSE
        RETURN 2

Then in sp1:

DECLARE @ReturnValue INT
EXEC @ReturnValue = sp2
IF @ReturnValue = 2
    PRINT 'SP2 ERROR'
February 3, 2006 at 12:30 am
Correction. TRY...CATCH only catches errors with severity > 10 (anything less than that is a warning or informational message, not an error), and it will only catch errors that don't close the database connection (severity 20 and above terminate the connection).
Excerpt from Books Online (2005)
A RAISERROR run with a severity of 11 to 19 in a TRY block transfers control to the associated CATCH block. Specify a severity of 10 or lower to use RAISERROR to return a message from a TRY block without invoking the CATCH block.
As for the original question, any error with a severity of 20 or higher is written into the SQL error log, and possibly into the server's event log as well. Again from BOL:
Severity levels from 20 through 25 are considered fatal. If a fatal severity level is encountered, the client connection is terminated after receiving the message, and the error is logged in the error and application logs.
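In 2005, the catchable range Books Online describes can be seen with a small sketch (this assumes a SQL Server 2005 instance; severity 16 falls in the catchable 11-19 band):

```sql
BEGIN TRY
    -- Severity 16 is between 11 and 19, so control transfers to the CATCH block
    RAISERROR('Demonstration error', 16, 1)
END TRY
BEGIN CATCH
    -- ERROR_MESSAGE() is one of the error functions new in 2005
    PRINT 'Caught: ' + ERROR_MESSAGE()
END CATCH
```

Raise the severity to 20 or higher (which requires WITH LOG and sysadmin rights) and the CATCH block never runs; the connection is terminated instead.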
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 3, 2006 at 8:15 am
Thank you all for the useful information!
Now I can inform the developer that we can't trap or log fatal errors, especially ones with severity between 11 and 18.
Thanks again!
--Allan
February 5, 2006 at 10:50 pm
Yes you can. You can't trap the ones at severity 20 and above, but those are written into the error log. 11 through 19 are standard errors; less than 11 are warnings or informational messages.

SELECT 1/0
IF @@Error != 0
    PRINT 'Error occurred'
If you're having problems catching errors in a piece of code, please post it and we can look for the problems.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 9, 2006 at 2:46 pm
Okay folks, here's an example that's been causing me some problems. SQL Server 2000 environment: I have a piece of code that executes some dynamic SQL to delete database objects that are not needed after a restore from another environment.
If for some reason the sp_executesql call fails on a fatal error, I am not able to trap the error. Processing just stops and I am left with a problem, because this activity takes place inside a transaction, which is now left open. The code looks like this:
exec @returncode = sp_executesql @sql_string
select @error = @@error
if @error <> 0 or @returncode <> 0
begin
    rollback
    select 'error message goes here'
    goto procfinish
end
I would have expected the error happening inside sp_executesql to cause a return of 1, which would send me into my error routine upon return to the calling script. Instead, execution stops ungracefully, leaving my transaction open and forcing me to roll back manually. The error happened because of a brain fart on my part, which caused the dynamic SQL string to be constructed as a DROP TABLE statement for a table that didn't exist.
Any way I can avoid this issue going forward?
Thanks
February 9, 2006 at 11:22 pm
Can you give an example of @sql_string?
There is an error avoidance technique that you could use:
IF EXISTS (SELECT 1 FROM sysobjects WHERE name = 'TableToBeDropped')
    DROP TABLE TableToBeDropped
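An equivalent guard (a common idiom rather than something from the post) uses OBJECT_ID, which returns NULL when the named object does not exist:

```sql
-- OBJECT_ID returns NULL if dbo.TableToBeDropped is absent,
-- so the DROP only runs when the table actually exists
IF OBJECT_ID('dbo.TableToBeDropped') IS NOT NULL
    DROP TABLE dbo.TableToBeDropped
```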
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 10, 2006 at 8:12 am
Here's a little more detail. I do have code in place at the top of my while loop that checks the sysobjects table to make sure the object exists before I attempt to drop it. The specific scenario (which I have since put code in place to avoid) was that the proc accidentally deleted a log table I had created to store status messages generated as it deletes each unwanted object from the database. A subsequent step then attempted an update to that table (which no longer existed), causing the code to throw a fatal error. The step in question would have been:
exec @returnCode = sp_executesql @sql_string
where in this case
@sql_string = 'Update ' + @table_name + ' set status_val = ' + quotename('table dropped successfully','''') + ' where object_name = ' + @table_name
The problem was that in this particular iteration of the loop, the code had just deleted the table that @table_name was set to, and was now attempting to update it. Like I said, brain fart by me. So I fixed that. I'm just trying to find a way to trap any other fatal error scenarios using exec @returncode = sp_executesql... so that I can gracefully exit and roll back my open transaction.
Thanks!
February 10, 2006 at 9:41 am
I'm not sure if you can do this for dynamic SQL as in your example, but you can certainly do it for stored procedures. Interestingly, I was just putting together a demo of this for a colleague this afternoon to illustrate exactly this point. Here's my test code:
CREATE PROCEDURE philproc
AS
DECLARE @error int
SELECT * FROM #fred        -- Induces an untrappable error (208)
                           -- as #fred does not exist
                           -- Execution flow returns to the calling code
SELECT @error = @@error    -- this line is never executed, and @error is left undefined
PRINT 'Point 1'            -- this line is never executed
IF @error <> 0             -- These lines are never executed; RETURN status is undefined
    RETURN @error
ELSE
    RETURN 0
go
DECLARE @error int, @ret_stat int
EXEC @ret_stat = philproc   -- SP fails with untrappable error; @ret_stat is undefined.
                            -- Batch does not terminate, but continues to the next line.
SELECT @error = @@error     -- @error is set to the value of the untrappable error within the SP.
SELECT @ret_stat
SELECT @error
RESULTS:
Server: Msg 208, Level 16, State 1, Procedure philproc, Line 5
Invalid object name '#fred'.
-----------
NULL
(1 row(s) affected)
-----------
208
(1 row(s) affected)
By testing both the RETURN status from the SP (for trappable errors) and also for @@ERROR immediately after the proc call (for untrappable errors), you should be able to cater for both.
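Combining the two checks from the test above might look like this (a sketch based on that test; philproc is the demo procedure, and the ISNULL is there because the return status is left undefined when the proc aborts):

```sql
DECLARE @error int, @ret_stat int
EXEC @ret_stat = philproc
SELECT @error = @@ERROR   -- capture immediately; every statement resets @@ERROR
-- @ret_stat stays NULL when the proc aborts, so treat NULL as a failure too
IF @error <> 0 OR ISNULL(@ret_stat, -1) <> 0
    PRINT 'philproc failed'
```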
February 10, 2006 at 9:45 am
Just modified this slightly to use dynamic SQL. It now appears that both the return status and @@error are populated (which did surprise me slightly):
declare @sql nvarchar(2000)
declare @error int, @ret_stat int
select @sql = 'exec philproc'
exec @ret_stat = sp_executesql @sql
select @error = @@error
select @ret_stat
select @error
Server: Msg 208, Level 16, State 1, Procedure philproc, Line 6
Invalid object name '#fred'.
-----------
208
(1 row(s) affected)
-----------
208
(1 row(s) affected)
February 10, 2006 at 9:51 am
Only just noticed - this seems to be EXACTLY what you are doing in your example. I have no idea why your code doesn't catch the non-existent table error, since it does in my example, and this is a technique I have used for some time now.
February 10, 2006 at 9:57 am
Thanks for the response Philip. Only thing I can think of is that right after my call to sp_executesql, I have this line:
select @error = @@error
I do this to capture the initial @@error value from the proc call for use in the error trapping routine, but in looking at the code, I think it's unnecessary. I am going to try removing that line, and substitute @@error for @error in my error trapping routine. Maybe that will help. There are no intervening statements between the proc call and the error trap, so @@error should still contain the error code returned by the failed exec of sp_executesql.
February 10, 2006 at 10:01 am
No, that shouldn't be necessary. Look at my example - I do exactly the same as you and it works perfectly. In fact, it's precisely what you SHOULD do in order to preserve the value of @@ERROR at that point.
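The reason the immediate capture matters is that @@ERROR is reset after every statement, not just after the one that failed. A minimal illustration (a sketch, not from the thread):

```sql
SELECT 1/0                -- raises a divide-by-zero error
IF @@ERROR <> 0           -- the IF still sees the nonzero error number here...
    PRINT 'Error detected'
PRINT @@ERROR             -- ...but by now @@ERROR has been reset to 0
```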
February 10, 2006 at 10:18 am
Okay - I've duplicated your problem.
In my first example, my SP code was using straight SQL to access the non-existent table.
If I change this to also use dynamic SQL, then in the code that calls the SP both @ret_stat and @@ERROR end up being zero:
CREATE PROCEDURE philproc
AS
DECLARE @error int, @ret_code int
DECLARE @sql nvarchar(2000)
SET @sql = 'select * from #fred'
EXEC @ret_code = sp_executesql @sql   -- Raises error 208 in the inner batch,
                                      -- as #fred does not exist
SELECT @error = @@error               -- this time execution continues, and @error comes back 0
PRINT 'Point 1'                       -- this line IS executed (see the results below)
IF @error <> 0                        -- @error is 0, so the proc falls through to RETURN 0
    RETURN @error
ELSE
    RETURN 0
go
DECLARE @error int, @ret_stat int
EXEC @ret_stat = philproc   -- this time the SP runs to completion and returns 0
SELECT @error = @@error     -- and @@error in the caller is also 0 (see results below)
SELECT @ret_stat
SELECT @error
Server: Msg 208, Level 16, State 1, Line 1
Invalid object name '#fred'.
Point 1
-----------
0
(1 row(s) affected)
-----------
0
(1 row(s) affected)
-----------
0
(1 row(s) affected)
Even more crucially, the SP code does NOT terminate after the "fatal" error but carries on to complete the remaining steps in the SP.
Viewing 15 posts - 1 through 15 (of 19 total)