Error Handling with Multiple Stored Procedures

  • I've created a process to delete all of a client's information from our database based on the client's CustomerID. The top-level stored procedure finds all IDs related to that client and fills an indexed table-valued parameter with them. It then executes a series of stored procedures, passing that table of IDs to each one. Each of those procedures contains a batch of DELETE statements for the tables in its schema (not really a schema, but a naming convention we use). This runs across nearly every table in the database (200 or so). The deletes run in order from child tables up to parent tables so that I break as few constraints as possible. For smaller customers it takes about 20-40 minutes (roughly 4,000 rows run through every table that holds a value for them), but for larger customers it can take a few hours. I've done my error handling as follows in the top-level procedure:

    BEGIN TRY
        EXEC sp_X @CustID;
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE()   AS StrMessage,
               ERROR_PROCEDURE() AS ErrProc,
               ERROR_LINE()      AS ErrLine;
        RETURN;
    END CATCH;

    BEGIN TRY
        EXEC sp_X @CustID;
    END TRY
    BEGIN CATCH
        SELECT ERROR_MESSAGE()   AS StrMessage,
               ERROR_PROCEDURE() AS ErrProc,
               ERROR_LINE()      AS ErrLine;
        RETURN;
    END CATCH;

    I'm worried that my error handling might be slowing this process down. Any suggestions on improving speed? I don't really care whether the transactions get rolled back, with the exception of the final statement that removes the list of IDs; I'll probably add an OUTPUT clause there so I retain the list of IDs. Everything else I don't want rolled back if there is an error. Am I approaching my error handling correctly, and in the most efficient way possible given my requirements? Any comments or advice greatly appreciated.
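
    For reference, a minimal sketch of that OUTPUT idea might look like the following; dbo.CustomerIdList and the column names are hypothetical placeholders rather than the actual schema:

    DECLARE @CustID INT = 42;                    -- parameter of the top-level procedure in practice
    DECLARE @DeletedIds TABLE (Id INT NOT NULL);

    -- Final cleanup: remove the collected IDs for this customer, but keep a copy
    -- of everything that was removed via the OUTPUT clause.
    DELETE FROM dbo.CustomerIdList
    OUTPUT deleted.Id INTO @DeletedIds (Id)
    WHERE CustomerId = @CustID;

    -- The captured list can then be returned, logged, or copied to an audit table.
    SELECT Id FROM @DeletedIds;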

  • For performance problems, please post the execution plan (actual if possible, estimated if not), and the stored proc definition. Given that your error handling consists of a single simple SELECT statement which will only execute if there is an error, it is highly unlikely that the performance is significantly affected by your error handling.

    Drew

    J. Drew Allen
    Business Intelligence Analyst
    Philadelphia, PA

  • As Drew said, error handling is very unlikely to be causing performance issues. You should expect the process to take longer as the volume of data increases; the goal is to keep that increase as small as possible. A couple of things are probably happening here that hurt performance:

    1. Your transaction log is probably growing during this operation, and that is going to slow things down. You can do a couple of things to help with this:

    • Pre-size the log file so it does not have to grow.
    • Use explicit transactions for each table so that log reuse can happen at a checkpoint (simple recovery) or a log backup (full and bulk-logged recovery); see the sketch after this list.

    2. Table variables and table-valued parameters do not have statistics, so the optimizer estimates one row. Once you exceed a certain number of rows in a table variable/TVP, the nested loop execution plan that is most likely generated will NOT be a good choice and performance will degrade sharply. Some options are:

    • Use a real (permanent) table to hold the IDs that need to be deleted and you'll get statistics. You can truncate and reload this table on each run, or you could store the client ID and a date in it to record which client was deleted and when; you'd just need to index the table on the client and date in addition to the ID.
    • Use the TVP to populate a temp table in each procedure; the temp table will get statistics (see the sketch after this list).
    • Use a global temporary table instead of a TVP to hold the IDs to be deleted.
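
    To make the temp-table and explicit-transaction suggestions concrete, here is a minimal sketch of what one of the per-schema delete procedures might look like. All object names (the dbo.IdList type, dbo.OrderDetail, dbo.OrderHeader) are hypothetical placeholders, and pre-sizing the log file itself is a one-time ALTER DATABASE ... MODIFY FILE change that is not shown:

    -- Hypothetical child procedure; the pattern, not the names, is the point.
    CREATE PROCEDURE dbo.DeleteSalesData
        @Ids dbo.IdList READONLY                 -- TVP of customer-related IDs
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Copy the TVP into a temp table so the optimizer gets statistics
        -- instead of a fixed one-row estimate.
        SELECT Id
        INTO #Ids
        FROM @Ids;

        CREATE CLUSTERED INDEX IX_Ids ON #Ids (Id);

        -- One explicit transaction per table keeps each unit of work short,
        -- so committed log space can be reused after a checkpoint or log backup.
        BEGIN TRANSACTION;
        DELETE d
        FROM dbo.OrderDetail AS d
        JOIN #Ids AS i ON i.Id = d.CustomerId;
        COMMIT TRANSACTION;

        BEGIN TRANSACTION;
        DELETE h
        FROM dbo.OrderHeader AS h
        JOIN #Ids AS i ON i.Id = h.CustomerId;
        COMMIT TRANSACTION;
    END;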

