December 15, 2008 at 7:20 am
My delete query, which runs against 8 tables, is consuming all the resources on the server. We have 8 CPUs and all of them are at 100%, and the query is still stuck on one table that has about 70k records to delete.
December 15, 2008 at 7:30 am
Can you please post the query you use? It's not advisable to delete 70K rows from a production server in one go. Is this a one-off thing, or a job that runs on a schedule?
December 15, 2008 at 7:35 am
This is the delete code, which loops through 10 tables to delete duplicates.
set @sql = 'delete C
from ' + @PhiscalDB + '.dbo.RevCode' + @Phiscalyr + ' as C
inner join ' + @StageDB + '.dbo.state_' + @Phiscalyr + '_Dups as A
    on (C.RevNo = A.RevNo
    and C.servcdate = A.servcdate
    and C.RevCode = A.RevCode
    and C.RevType = A.RevType)
where adddate in (select max(a.adddate)
    from ' + @PhiscalDB + '.dbo.RevCode' + @Phiscalyr + ' as a
    inner join ' + @StageDB + '.dbo.state_' + @Phiscalyr + '_Dups as b
        on a.RevNo = b.RevNo
        and a.servcdate = b.servcdate
        and a.RevCode = b.RevCode
        and a.RevType = b.RevType
    group by a.RevNo, a.servcdate, a.RevCode, a.RevType)'
exec(@sql)
December 15, 2008 at 7:45 am
The weird thing is that it is just hanging on this one table. The 5 tables before it in the job went through fine, and it got stuck on only this one.
December 15, 2008 at 3:06 pm
Please check for the presence of indexes (on the columns mentioned in the where clause) on that one table.
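For example, something along these lines (the table names here are just illustrations based on the dynamic SQL above; the actual fiscal-year tables will differ):

-- index on the join/filter columns of the fiscal-year table,
-- covering the adddate column used for the MAX() lookup
CREATE NONCLUSTERED INDEX IX_RevCode_DupKey
    ON dbo.RevCode2008 (RevNo, servcdate, RevCode, RevType)
    INCLUDE (adddate);

-- a matching index on the staging dups table helps the join as well
CREATE NONCLUSTERED INDEX IX_state_Dups_DupKey
    ON dbo.state_2008_Dups (RevNo, servcdate, RevCode, RevType);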
MJ
December 16, 2008 at 9:24 am
Try copying the records which you don't want to delete into another temporary table, then truncate the original table, and then either copy the records from the temporary table back into the original table or rename it to the original table.
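Roughly like this (the table name, the work table, and the keep-condition are placeholders, not from the original post):

-- copy the rows to keep into a work table
SELECT *
INTO dbo.RevCode_Keep
FROM dbo.RevCode2008
WHERE <condition that keeps the non-duplicate rows>;

-- remove everything (fast and minimally logged)
TRUNCATE TABLE dbo.RevCode2008;

-- put the kept rows back
INSERT INTO dbo.RevCode2008 WITH (TABLOCK)
SELECT * FROM dbo.RevCode_Keep;

DROP TABLE dbo.RevCode_Keep;

Note that TRUNCATE will fail if the table is referenced by foreign keys, and identity columns, constraints and triggers have to be taken into account before going this route.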
December 16, 2008 at 12:44 pm
Firstly, the deletes on Production should happen off-hours; this will avoid any blocking.
Secondly, the deletes should be done in batches, say 3000 to 5000 rows at a time, in a WHILE loop with each batch in its own transaction. By doing this you avoid putting a heavy load on the transaction log, with the added advantage of making recovery easier in case the SQL Server crashes.
Example.
DECLARE @RowCount SMALLINT
DECLARE @errorcode INT

SET @RowCount = 1

WHILE (@RowCount > 0)
BEGIN
    BEGIN TRAN

    -- delete one batch of 3000 rows at a time
    DELETE TOP (3000) FROM .....

    SELECT @RowCount = @@ROWCOUNT,
           @errorcode = @@ERROR

    IF @errorcode <> 0
    BEGIN
        ROLLBACK TRAN
        BREAK   -- stop looping if the batch failed
    END
    ELSE
        COMMIT TRAN
END
Amol Naik
December 16, 2008 at 12:49 pm
Amol.Naik (12/16/2008)
Firstly, the deletes on Production should happen off-hours; this will avoid any blocking. Secondly, the deletes should be done in batches, say 3000 to 5000 rows at a time, in a WHILE loop with each batch in its own transaction. By doing this you avoid putting a heavy load on the transaction log, with the added advantage of making recovery easier in case the SQL Server crashes.
You may also want to add a BACKUP LOG after the delete to help control the growth of the transaction log if the database is using the BULK_LOGGED or FULL recovery model.
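For example, something like this after (or periodically during) the batched delete; the database name and backup path are placeholders:

-- back up the log so the space can be reused instead of the log file growing
BACKUP LOG MyDatabase
    TO DISK = N'D:\SQLBackups\MyDatabase_log.trn';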
December 16, 2008 at 12:54 pm
A transaction log backup is assumed to be running every 10 or 15 minutes in a production environment. The BACKUP LOG command could be used in scenarios where the transaction log backups are taken at longer intervals, say 30 minutes to 1 hour, which I think would be very rare.
Amol Naik
December 16, 2008 at 1:01 pm
Amol.Naik (12/16/2008)
A transaction log backup is assumed to be running every 10 or 15 minutes in a production environment. The BACKUP LOG command could be used in scenarios where the transaction log backups are taken at longer intervals, say 30 minutes to 1 hour, which I think would be very rare.
Bad assumption to make. Not everyone runs t-log backups that frequently. The frequency depends on what the users have determined to be an acceptable loss of data in the event of a failure. If the users have decided that 2 hours of lost data is acceptable, then the t-log backups aren't running every 10 to 15 minutes. Also, if you are deleting large amounts of data, your logs are going to grow at a faster rate than normal, and more frequent t-log backups may be warranted during that time.
December 16, 2008 at 1:11 pm
If you had read my post completely, I mentioned that a BACKUP LOG does need to be taken in case the t-log backup frequency is set to 1 hour. All I am saying is that if the backup frequency is 10 to 15 minutes, then a t-log backup after the delete is not really required.
Amol
Amol Naik
December 16, 2008 at 1:13 pm
Last but not least: run the index defrag commands after the delete.
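For example (the table name is a placeholder; on SQL 2000 the equivalent would be DBCC INDEXDEFRAG / DBCC DBREINDEX):

-- REORGANIZE is online and lighter-weight; use REBUILD for heavy fragmentation
ALTER INDEX ALL ON dbo.RevCode2008 REORGANIZE;
-- ALTER INDEX ALL ON dbo.RevCode2008 REBUILD;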
Amol Naik
December 16, 2008 at 1:20 pm
Lynn has a great point in that you need to know what your recovery intervals are and what your tolerance is for lost data. Based on that, you have to watch what the log backup size is to ensure you have enough space.
If you are worried about the batched deletes impacting things, perhaps you should schedule the deletes in between the log backups so there isn't a collision of both running at the same time. You could even add multiple schedules to the deletes, so something like
12:05 - log backup
12:10 - delete batch
12:15 - log backup
12:20 - delete batch
...
You could set your batch size to be sure it fit in the interval between log backups and balance that against the additional logging being done.
No matter what you choose, you can't "assume" log backups are being done on any interval. I've had as many places use 5 minute intervals as 1 hour intervals. If anything, write a routine that gets the info from the server and then schedules the deletes appropriately.
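As a rough sketch of that idea, you can pull the recent log backup times from msdb and see what interval you actually have to work with (the database name is a placeholder):

-- last 20 log backups for the database, newest first
SELECT TOP (20)
       bs.backup_start_date,
       bs.backup_finish_date
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'MyDatabase'
  AND bs.type = 'L'          -- 'L' = transaction log backup
ORDER BY bs.backup_start_date DESC;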
December 16, 2008 at 1:39 pm
Thanks for the explanation, Steve. I agree with you. I think I was trying to say the same thing. By saying "assuming", I was trying to give a particular scenario and the transaction log backup frequency in such a scenario. Nevertheless it's always good to enhance knowledge, especially from senior members.
Regards,
Amol
Amol Naik