May 16, 2016 at 7:15 am
I've created a process to delete all of a client's information from our database based on the client's CustomerID. The top-level SP finds all IDs related to that client and fills a table-valued parameter (indexed) with those IDs. It then executes a series of SPs, passing that table of IDs to each one. Each SP has a set of DELETE statements for every table in that schema (not really a schema, just a naming convention we use). This runs across nearly every table in the database (200 or so), working from child tables up to parent tables so that I break as few constraints as possible.
For smaller customers it takes about 20-40 minutes (around 4,000 rows through every table the customer has data in), but for larger customers it can take a few hours. I've done my error handling as follows in the top-level SP:
BEGIN TRY
    EXEC sp_X @CustID;
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE() AS StrMessage,
           ERROR_PROCEDURE() AS ErrProc,
           ERROR_LINE() AS ErrLine;
    RETURN;
END CATCH;

BEGIN TRY
    EXEC sp_X @CustID;
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE() AS StrMessage,
           ERROR_PROCEDURE() AS ErrProc,
           ERROR_LINE() AS ErrLine;
    RETURN;
END CATCH;
I'm worried that my error handling might be slowing this process down. Any suggestions on improving speed? I don't really care whether the transactions get rolled back, with the exception of the final statement that removes the list of IDs; I'll probably add an OUTPUT clause there so I retain that list. Everything else I specifically don't want rolled back if there is an error. Am I approaching my error handling correctly, and in the most efficient way possible given those requirements? Any comments or advice are greatly appreciated.
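(For reference, the overall shape of the process is roughly the sketch below. It's only an illustration; object names like dbo.CustomerIdList, dbo.usp_Delete_OrderDetail, dbo.OrderDetail, dbo.CustomerRecords and dbo.DeletedCustomerIds are placeholders, not the real objects.)

-- Table type holding the IDs gathered by the top-level SP ("indexed" via the primary key)
CREATE TYPE dbo.CustomerIdList AS TABLE
(
    Id INT NOT NULL PRIMARY KEY
);
GO

-- One of the per-"schema" child SPs: deletes from its child tables using the ID list
CREATE PROCEDURE dbo.usp_Delete_OrderDetail
    @Ids dbo.CustomerIdList READONLY
AS
BEGIN
    SET NOCOUNT ON;

    DELETE od
    FROM dbo.OrderDetail AS od
    JOIN @Ids AS i
        ON i.Id = od.CustomerId;
END;
GO

-- Final statement in the top-level SP: remove the ID list itself,
-- using OUTPUT to keep a copy of the IDs that were deleted
DELETE c
OUTPUT deleted.CustomerId INTO dbo.DeletedCustomerIds (CustomerId)
FROM dbo.CustomerRecords AS c
JOIN @Ids AS i
    ON i.Id = c.CustomerId;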
May 16, 2016 at 8:37 am
For performance problems, please post the execution plan (actual if possible, estimated if not), and the stored proc definition. Given that your error handling consists of a single simple SELECT statement which will only execute if there is an error, it is highly unlikely that the performance is significantly affected by your error handling.
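If it helps, something like the query below will pull the cached (estimated) plan for the top-level proc out of the plan cache; the proc name here is a placeholder. For the actual plan, run the proc in SSMS with "Include Actual Execution Plan" turned on.

SELECT qp.query_plan
FROM sys.dm_exec_procedure_stats AS ps
CROSS APPLY sys.dm_exec_query_plan(ps.plan_handle) AS qp
WHERE ps.database_id = DB_ID()
  AND ps.object_id = OBJECT_ID(N'dbo.usp_DeleteCustomer');   -- placeholder proc name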
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA
May 16, 2016 at 12:40 pm
As Drew said, error handling is very unlikely to be the cause of your performance issues. You should expect the process to take longer as the volume of data grows; the goal is to keep that increase as small as possible. A couple of things are probably happening here that hurt performance:
1. Your transaction log is probably growing during this operation, and that is going to slow things down. You can do a couple of things to help with this: delete in smaller batches so each transaction stays small, and make sure log backups run frequently during the purge (see the sketch after this list).
2. Table variables and table-valued parameters do not have statistics, so the optimizer estimates 1 row. Once you hit a certain number of rows in a table variable/TVP, the nested loop plan that most likely gets created will NOT be a good choice and performance gets exponentially worse. Some options are: copy the IDs into a temp table (temp tables do get statistics), or add OPTION (RECOMPILE) to the DELETE statements so the optimizer at least sees the actual row count. A rough sketch covering both points follows this list.
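Here's that sketch. It assumes placeholder names (@Ids for the TVP, dbo.OrderDetail for one of the child tables) rather than your real objects, and the batch size is only an example.

-- Copy the TVP into a temp table: temp tables get statistics, TVPs do not
SELECT Id
INTO #CustomerIds
FROM @Ids;

CREATE CLUSTERED INDEX IX_CustomerIds ON #CustomerIds (Id);

-- Delete in batches so each transaction, and the log it generates, stays small
DECLARE @BatchSize INT = 5000;   -- example value; tune for your system

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize) od
    FROM dbo.OrderDetail AS od
    JOIN #CustomerIds AS i
        ON i.Id = od.CustomerId;

    IF @@ROWCOUNT < @BatchSize
        BREAK;   -- last (partial) batch for this table is done
END;

-- If you keep the TVP, OPTION (RECOMPILE) on each DELETE at least lets the
-- optimizer see the real row count at execution time:
-- DELETE od
-- FROM dbo.OrderDetail AS od
-- JOIN @Ids AS i ON i.Id = od.CustomerId
-- OPTION (RECOMPILE);

The same batched loop would be repeated for each table, so the log only ever needs to hold one small transaction at a time instead of one huge delete.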
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question