November 4, 2013 at 7:14 am
I have an audit table that is written to, and has records deleted from, all the time throughout the day.
Sometimes the volume of inserts is very large while, at the same time, other records are being deleted, so I am getting lots of table locks. (Now don't yell at me.) We do not have any keys on this table; it is really a heap. We do have 3 indexes that help, but they only account for about 25% of the records.
Am I to understand that, by default, deletes trigger a table lock?
If so, can I force row locking or use hints to help reduce the locking issues?
November 4, 2013 at 7:30 am
Deletes do not automatically become table locks. What you are seeing is lock escalation: locks go from row/page to partition (if enabled) to table. So with all of the activity on this heap, SQL Server is deciding that there are too many locks and that it would be better to take a higher-level lock to save memory. Remember, SQL Server is not being mean; if you start using lock hints you may impact your system in other ways (think memory).
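If you want to confirm that escalation is what is actually happening, a quick check (just a sketch; substitute your real table name for dbo.audit) is to look at the lock promotion counters for the heap:

SELECT OBJECT_NAME(ios.object_id) AS table_name,
       ios.index_id,
       ios.index_lock_promotion_attempt_count,
       ios.index_lock_promotion_count
FROM sys.dm_db_index_operational_stats(DB_ID(), OBJECT_ID('dbo.audit'), NULL, NULL) AS ios;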
I would not start using lock hints as a first measure. First question: is there a reason why the table has to remain a heap? Is this an OLTP-type system, inserting new records and deleting old records?
November 4, 2013 at 7:55 am
With the current design, we see no other option than a heap.
Let me add some additional info:
we have 15 columns that represent the keys of the tables being entered:
3 char columns, 2 int columns, 5 decimal(14) columns, and 5 GUIDs, plus 4 others that will always be populated,
i.e. (site, destination, table).
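Roughly, the shape of the table is something like this (not our exact DDL; the char lengths and decimal scale below are just placeholders):

CREATE TABLE dbo.audit
(
    site        char(10)         NOT NULL,  -- length assumed
    destination char(10)         NOT NULL,  -- length assumed
    [table]     char(10)         NOT NULL,  -- length assumed
    char1       char(20)         NULL,      -- 3 char key columns
    char2       char(20)         NULL,
    char3       char(20)         NULL,
    int1        int              NULL,      -- 2 int key columns
    int2        int              NULL,
    dec1        decimal(14, 0)   NULL,      -- 5 decimal(14) key columns
    -- dec2 ... dec5
    guid1       uniqueidentifier NULL,      -- 5 guid key columns
    -- guid2 ... guid5
    -- plus the remaining always-populated columns
);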
We read into a temp table from the XML, then we do a delete along these lines:

DELETE a
FROM audit AS a
INNER JOIN xmltemp AS x
    ON  a.site        = x.site
    AND a.destination = x.destination
    AND a.[table]     = x.[table]
    AND a.char1       = x.char1
    AND a.char2       = x.char2
    AND a.char3       = x.char3
    AND a.int1        = x.int1
    AND a.int2        = x.int2
    AND a.dec1        = x.dec1
    ....
We are doing only one table at a time.
So let's say tables 1, 3, 4, 6, and 7 only use site, destination, table, char1, and dec1,
and table 2 only uses site, destination, table, and char1.
Would it be better to do an IF statement in my proc, something like this (not correct format, just typing shorthand):

IF @table IN (1, 3, 4, 6, 7)
    DELETE a
    FROM audit AS a
    INNER JOIN xmltemp AS x
        ON  a.site        = x.site
        AND a.destination = x.destination
        AND a.[table]     = x.[table]
        AND a.char1       = x.char1
        AND a.dec1        = x.dec1
ELSE IF @table = 2
    DELETE a
    FROM audit AS a
    INNER JOIN xmltemp AS x
        ON  a.site        = x.site
        AND a.destination = x.destination
        AND a.[table]     = x.[table]
        AND a.char1       = x.char1
ELSE
    DELETE a
    FROM audit AS a
    INNER JOIN xmltemp AS x
        ON  a.site        = x.site
        AND a.destination = x.destination
        AND a.[table]     = x.[table]
        AND a.char1       = x.char1
        AND a.char2       = x.char2
        AND a.char3       = x.char3
        AND a.int1        = x.int1
        AND a.int2        = x.int2
        AND a.dec1        = x.dec1
        ....

Would this be a better approach?
November 4, 2013 at 8:39 am
Trying to understand your table structure. You have a single audit table that handles audit information for multiple tables? Do the base tables have a PK and/or clustered index? I want to make sure I understand your environment before I make any suggestions.
November 4, 2013 at 10:01 am
Yes, the individual tables do have their own keys and indexes.
When a record is added or changed, a proc is called to insert the base keys into the audit table.
It first checks whether a row already exists and, if so, deletes the old audit record and inserts the new one (this is not common, as a given table record is rarely updated before it is replicated and marked as sent in the audit table).
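For reference, a rough sketch of that check-and-replace pattern (the parameter names and the trailing key columns here are just shorthand, not our actual proc):

IF EXISTS (SELECT 1 FROM dbo.audit
           WHERE site = @site AND destination = @destination
             AND [table] = @table AND char1 = @char1 /* ...remaining keys */)
BEGIN
    -- remove the earlier audit row for the same key
    DELETE FROM dbo.audit
    WHERE site = @site AND destination = @destination
      AND [table] = @table AND char1 = @char1 /* ...remaining keys */;
END;

INSERT INTO dbo.audit (site, destination, [table], char1 /* , ...remaining keys */)
VALUES (@site, @destination, @table, @char1 /* , ... */);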
The problem comes in when a site sends back data that is written to the xmltemp table and joined to the audit table; if they match (confirmed), it deletes from the audit table. But since the audit table is being written to by the users, in some cases thousands of records, while the confirmation is trying to delete records at the same time, I get locking (blocking).
November 4, 2013 at 11:35 am
So, let's get back to your issue, which is that you are experiencing blocking (I'm guessing) and you want to remove table locks to prevent it? You will still have row-level blocking, and you may now get memory issues that were prevented before by the table locks. Yes, you can try to prevent SQL Server from escalating (or at least make it wait until absolutely necessary) by using the following code:
ALTER TABLE tablename SET (LOCK_ESCALATION = DISABLE)
But I still recommend you try to resolve the root cause first, which seems to me to be the single audit table. Have you considered using a separate audit table for each base table? I would also look at creating a clustered index on the table (TableName + base table clustered key). What about loading your XML data into a staging table and doing a single-pass delete before you do the insert into the base table?
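For example, such a clustered index might look something like this (a sketch only; the table and column names are taken from your earlier posts, and the exact key order would need testing):

CREATE CLUSTERED INDEX CIX_audit
    ON dbo.audit ([table], site, destination, char1);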
November 4, 2013 at 1:30 pm
The XML is being loaded into a staging (temp) table for use in the join.
A separate audit table for each of the tables is not viable at this time, as we have 211 tables,
and for replication it would cause a major rewrite of that process.
What about my idea of multiple deletes based on only the needed fields, not joining on all columns (see previous post)?
November 4, 2013 at 5:10 pm
Lee Hopkins (11/4/2013)
The XML is being loaded into a staging (temp) table for use in the join. A separate audit table for each of the tables is not viable at this time, as we have 211 tables, and for replication it would cause a major rewrite of that process.
What about my idea of multiple deletes based on only the needed fields, not joining on all columns (see previous post)?
The normal purpose of an audit table is to provide a long-term record of what has been done. Of course, you already knew that. I guess my question would be, why are you deleting from the audit table at all? I can understand such deletions only if you have a legal rolloff period, but then such an audit table should have a clustered index based on a temporal column recording when the rows were created. That will typically keep enough separation between the inserts and the deletes that you won't actually get table locks.
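For example (a sketch only, assuming a datetime column such as created_on is added to capture when each row is written):

ALTER TABLE dbo.audit
    ADD created_on datetime NOT NULL
        CONSTRAINT DF_audit_created_on DEFAULT (GETDATE());

CREATE CLUSTERED INDEX CIX_audit_created_on
    ON dbo.audit (created_on);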
--Jeff Moden
Change is inevitable... Change for the better is not.
November 4, 2013 at 6:02 pm
I guess the name of my table is confusing; it is not so much an audit table as a replication table.
It keeps track of what needs to be moved to the different sites. These changes are recorded here by their keys. When a replication cycle kicks off, it creates an XML of the complete record, by table, and sends it to the different sites. When a record is sent it is flagged as sent and awaits a confirmation from the receiving site; when that confirmation comes in, the record is deleted.
Does that help?
We receive from over 80 sites all day long, and we send twice a day.