December 2, 2002 at 3:54 am
I've got a stored procedure that basically transfers data from one database to another. It reads each row of data and executes it against the destination database.
When an error occurs, the stored proc logs the error in a table, but it continues on to the next row.
SET @intError = @@ERROR   -- capture immediately; @@ERROR resets after every statement
IF @intError <> 0
BEGIN
    INSERT INTO dbo.RECOVERY_ERROR_LOG
    VALUES (@vchDML, @dtTimestamp, @l_error_message, @intError)
END
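For context, here is a minimal sketch of the row-by-row pattern described above. The queue table, its columns, and the cursor name are illustrative assumptions, not the original stored procedure's objects:

```sql
-- Illustrative sketch only: dbo.RECOVERY_QUEUE and its columns are assumptions.
DECLARE @vchDML varchar(4000), @dtTimestamp datetime,
        @l_error_message varchar(255), @intError int

DECLARE cur_dml CURSOR FOR
    SELECT DMLText, QueuedAt FROM dbo.RECOVERY_QUEUE ORDER BY QueuedAt

OPEN cur_dml
FETCH NEXT FROM cur_dml INTO @vchDML, @dtTimestamp
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC (@vchDML)              -- run the statement on the destination
    SET @intError = @@ERROR     -- capture immediately; @@ERROR resets
    IF @intError <> 0
    BEGIN
        SET @l_error_message = 'Statement failed'
        INSERT INTO dbo.RECOVERY_ERROR_LOG
        VALUES (@vchDML, @dtTimestamp, @l_error_message, @intError)
    END
    FETCH NEXT FROM cur_dml INTO @vchDML, @dtTimestamp  -- continue regardless
END
CLOSE cur_dml
DEALLOCATE cur_dml
```

Note that a loop like this only continues past errors if the batch itself is not aborted, which is where the behaviour described later in this thread comes in.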
I use a job to run this stored procedure every hour. But what happens is, if an error occurs due to a duplicate unique key, the job quits and does not process the subsequent rows.
How do I set the job to not quit?
December 2, 2002 at 4:03 am
If you are using a unique index (not a unique constraint) you can check the 'ignore duplicate key' check box on the indexes tab of the table properties in enterprise manager. This will not insert duplicates but also won't cause a duplicate to stop processing.
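The equivalent in T-SQL is the IGNORE_DUP_KEY index option. The table and column names below are placeholders, not objects from the original poster's database:

```sql
-- Placeholder object names; IGNORE_DUP_KEY only applies to unique indexes.
CREATE UNIQUE INDEX IX_Destination_Key
    ON dbo.DestinationTable (KeyColumn)
    WITH IGNORE_DUP_KEY
-- A duplicate insert now raises the warning "Duplicate key was ignored."
-- and discards the row, instead of raising an error that stops the batch.
```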
Regards,
Andy Jones
December 2, 2002 at 6:45 pm
quote:
If you are using a unique index (not a unique constraint) you can check the 'ignore duplicate key' check box on the indexes tab of the table properties in enterprise manager. This will not insert duplicates but also won't cause a duplicate to stop processing.
Regards,
Andy Jones
December 2, 2002 at 7:30 pm
Sorry about that, just hit the wrong button.
Thanks, Andy, for your quick reply. But that doesn't solve my problem, as there may be other errors such as field size, syntax, etc.
December 3, 2002 at 5:14 am
As SQL Server has no real error handling, you may need to create a job step with ActiveX Scripting and use VBScript. That way you have a better set of error-handling options and can do exactly what you are after.
December 3, 2002 at 8:24 pm
hmm.. that is an option. Thanks for the suggestion.
December 5, 2002 at 8:33 pm
If you have some way to deal with the row causing the problem, you can set your job up to "quit on success" in that step and "go to the next step" on error. The error step starts a second job that restarts your first job. Crude, but it works.
You still have to deal with the error row, though.
December 8, 2002 at 9:14 pm
I've tried that... it worked the first time but didn't work after that. Not too sure why.
But anyway, I've finally found the problem. It was a stored proc issue: the stored proc had the statement SET XACT_ABORT ON. When I took out the statement, it worked fine.
🙂 Oops. Thanks for all your help anyway.
December 9, 2002 at 6:38 am
Just to clarify: SET XACT_ABORT ON is designed to do exactly this. If one insert fails, the whole batch is aborted and the transaction rolled back.
Often it is not needed, because the changes are held in a transaction that is rolled back explicitly if an error occurs, which achieves a similar result.
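A small demonstration of the behaviour Simon describes, assuming a placeholder table dbo.T with a unique key on Id:

```sql
-- Placeholder table; assumes a unique key on dbo.T(Id).
SET XACT_ABORT ON
BEGIN TRAN
    INSERT INTO dbo.T (Id) VALUES (1)
    INSERT INTO dbo.T (Id) VALUES (1)  -- duplicate key: batch aborts, tran rolls back
    INSERT INTO dbo.T (Id) VALUES (2)  -- never executed
COMMIT TRAN
```

With SET XACT_ABORT OFF (the default), the duplicate-key error is a statement-level error: only the failing INSERT is skipped and the batch carries on, which is why removing the setting let the original poster's loop keep processing.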
Simon Sabin
SQL Server MVP
Co-author of SQL Server 2000 XML Distilled
http://www.amazon.co.uk/exec/obidos/ASIN/1904347088
http://sqlblogcasts.com/blogs/simons
December 11, 2002 at 7:04 pm
The reason I put SET XACT_ABORT ON in is that the stored proc uses distributed transactions. Initially we couldn't get SQL Server to allow us to use distributed transactions, so we used XACT_ABORT to roll back when an error occurred. Now that we've managed to fix the problem and are able to do distributed transactions, the XACT_ABORT is not needed anymore.