May 2, 2017 at 6:58 am
Hi,
I have a very large table (millions of rows) and I am performing a batch operation on it via a SQL proc. I initiate the SQL batch job in a single transaction, which contains delete operations as well.
Pseudo code:
BEGIN TRANSACTION
    SQL1 (some insert);
    SQL2 (some update);
    SQL3 (some delete);
COMMIT TRANSACTION
When I initiate the batch job, it holds all the tables and applies locks on them.
To avoid the locks I partitioned the tables at year granularity, but the application still faces a performance issue: the tables are not available to users until the process finishes.
Can somebody help me here? How can I optimize or avoid the table locks so that the application users' flow stays smooth?
VD
May 2, 2017 at 7:41 am
As with most SQL questions, it depends. First, I would hope that a batch that updates millions of records runs off hours so that application users wouldn't be held up by the processing. If that is not possible, is it possible to break your transaction into three transactions?
BEGIN TRAN Insert_Tran; /* some insert */ COMMIT TRAN Insert_Tran;
BEGIN TRAN Update_Tran; /* some update */ COMMIT TRAN Update_Tran;
BEGIN TRAN Delete_Tran; /* some delete */ COMMIT TRAN Delete_Tran;
You will still experience some locks, as they are inherent in the transactions, but hopefully your application is no longer held up by one major lock.
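To make the pseudo code concrete, here is a minimal runnable sketch of that split; dbo.BigTable and its columns are hypothetical stand-ins for the real DML:

BEGIN TRANSACTION Insert_Tran;
    INSERT INTO dbo.BigTable (Id, Amount) VALUES (1, 100.00); -- stand-in for "some insert"
COMMIT TRANSACTION Insert_Tran;

BEGIN TRANSACTION Update_Tran;
    UPDATE dbo.BigTable SET Amount = 0 WHERE Id = 1;          -- stand-in for "some update"
COMMIT TRANSACTION Update_Tran;

BEGIN TRANSACTION Delete_Tran;
    DELETE FROM dbo.BigTable WHERE Id = 1;                    -- stand-in for "some delete"
COMMIT TRANSACTION Delete_Tran;

Each transaction commits and releases its locks before the next one begins, so readers are never blocked by all three operations at once.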
Regards,
Matt
May 2, 2017 at 7:50 am
Hey Matt,
BEGIN TRY
    BEGIN TRANSACTION;
    -- SQL1 (the insert/update/delete statements)
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH
Will this also help here?
I will try to break the DML into parts. It's a good idea.
Regards,
VD
May 2, 2017 at 8:11 am
VD, that won't help speed things up, but it is certainly proper use of TRY...CATCH logic. Below is what we use in the CATCH portion:

BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000) = ERROR_MESSAGE(),
            @ErrorSeverity INT = ERROR_SEVERITY(),
            @ErrorState INT = ERROR_STATE();

    PRINT '';
    PRINT @ErrorMessage;

    ROLLBACK TRANSACTION;
END CATCH
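Since @ErrorSeverity and @ErrorState are captured but not used by the PRINT, a common companion pattern (shown purely as an illustration, not necessarily part of the proc above) is to re-raise the error after the rollback so the caller still sees the failure:

    -- Re-raise with the captured message, severity, and state.
    RAISERROR(@ErrorMessage, @ErrorSeverity, @ErrorState);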
Regards,
Matt
May 3, 2017 at 5:58 am
I tried to keep the transactions as separate pieces, but the business rule does not allow me to; I can only have a single transaction. Equally, I cannot perform this activity in off hours :crying:
VD
May 3, 2017 at 11:46 am
vineet_dubey1975 - Tuesday, May 2, 2017 6:58 AM...
When I initiate the batch job, it holds all the tables and applies locks on them.
To avoid the locks I partitioned the tables at year granularity, but the application still faces a performance issue: the tables are not available to users until the process finishes.
Can somebody help me here? How can I optimize or avoid the table locks so that the application users' flow stays smooth?
VD
Table partitioning won't really help performance here unless you are able to utilize partition switching:
https://technet.microsoft.com/en-us/library/ms191160(v=sql.105).aspx
but it sounds like you won't be able to do that, since you are required to do an INSERT, UPDATE, and DELETE in the same transaction. How many rows is each of the INSERT, UPDATE, and DELETE statements processing? Is it possible to break the work those three statements do into smaller chunks of maybe 5,000-10,000 rows, as in the sketch below?
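A minimal sketch of that chunking idea, assuming a hypothetical dbo.BigTable whose delete predicate is an indexed BatchDate column; the names and WHERE clause are illustrative only:

DECLARE @BatchSize INT = 5000;
DECLARE @Rows INT = @BatchSize;

WHILE @Rows = @BatchSize
BEGIN
    BEGIN TRANSACTION;

    -- Delete one small chunk per short transaction so locks are
    -- released quickly and lock escalation is far less likely.
    DELETE TOP (@BatchSize)
    FROM dbo.BigTable
    WHERE BatchDate < '20160101';

    SET @Rows = @@ROWCOUNT; -- capture before COMMIT resets @@ROWCOUNT

    COMMIT TRANSACTION;
END;

The same loop shape works for the INSERT and UPDATE steps; each short transaction releases its locks before the next chunk starts.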
May 8, 2017 at 7:07 pm
You need to have a look at those "some insert; some update; some delete" statements.
They must be very inefficient, with lots of table/index scans, which cause total locking.
Rectify the code and make sure you have appropriate indexing in place (adding an index does not always make things better), and your problem will go away.
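As a purely hypothetical illustration of "appropriate indexing": if the batch's DELETE filters on a BatchDate column with no supporting index, every delete scans the whole table and blocks everyone; a narrow index turns that scan into a seek:

-- Illustrative only; test first, since every extra index also slows
-- down the INSERT step.
CREATE NONCLUSTERED INDEX IX_BigTable_BatchDate
    ON dbo.BigTable (BatchDate);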
_____________
Code for TallyGenerator