July 27, 2006 at 2:16 am
I have a table in my database. Each row has a programID, and there are about 1,000 rows per programID.
When I run something like:
UPDATE
tblLicRebateStatement
SET
ContactName = c.ContactName
FROM
tblLicRebateStatement l, Customer c
WHERE
l.CustomerID = c.CustomerID
and
l.programID = 319
it will update all the rows with programID 319, but if I try to update the whole table (leaving out the WHERE programID = 319) it just never finishes. I have killed off most sessions using sp_who2 and KILL spid, but that has not helped. Any ideas?
Just to note, there are 44,000 rows in the table. I can see all the rows (SELECT * FROM table) but I can't update.
July 27, 2006 at 3:09 am
Chances are one or more of your tables is locked.
Try running sp_lock or looking at the current activity in Enterprise Manager.
One possibility is to break the update down into small chunks:
DECLARE @UpdateCount INT
SET @UpdateCount = 1
SET ROWCOUNT 1000
WHILE @UpdateCount > 0
BEGIN
    UPDATE tblLicRebateStatement
    SET ContactName = c.ContactName
    FROM tblLicRebateStatement l
    INNER JOIN Customer c ON l.CustomerID = c.CustomerID
    WHERE l.ContactName <> c.ContactName
      AND l.programID = 319
    SET @UpdateCount = @@ROWCOUNT
END
SET ROWCOUNT 0
This performs the update in batches of 1,000 rows.
The WHERE l.ContactName <> c.ContactName condition ensures that a different set of records is updated on each pass.
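On later SQL Server versions the same batching idea can be written without SET ROWCOUNT (which is deprecated for INSERT/UPDATE/DELETE statements), using UPDATE TOP instead; a sketch, reusing the table and column names from this thread:

```sql
-- Batch the update 1,000 rows at a time with UPDATE TOP
-- (SET ROWCOUNT for DML is deprecated in newer SQL Server versions).
DECLARE @UpdateCount INT
SET @UpdateCount = 1

WHILE @UpdateCount > 0
BEGIN
    UPDATE TOP (1000) l
    SET ContactName = c.ContactName
    FROM tblLicRebateStatement l
    INNER JOIN Customer c ON l.CustomerID = c.CustomerID
    WHERE l.ContactName <> c.ContactName  -- skip rows already updated
      AND l.programID = 319

    SET @UpdateCount = @@ROWCOUNT
END
```

Each iteration updates at most 1,000 rows, keeping transactions (and lock durations) short.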
July 27, 2006 at 3:35 am
The problem is that you are updating the table from another instance of the same table.
That creates a deadlock.
Use this:
UPDATE L
SET ContactName = c.ContactName
FROM tblLicRebateStatement L
INNER JOIN Customer c ON L.CustomerID = c.CustomerID
_____________
Code for TallyGenerator
July 27, 2006 at 4:00 am
This job has run fine for the past year, so I don't think that's the problem, Sergiy. Thank you for the suggestion, though. I will check to see what is locked.
July 27, 2006 at 6:07 am
That's a typical problem with badly designed queries.
They seem OK because they happen to work for some time. But one day, after the next portion of data is inserted into the database, they suddenly become very slow or stop working completely.
Your query is wrong. Definitely wrong. And it does not matter that it used to work before; probably you did not have the kind of data variation in the tables that you have now.
So, first fix the query, and then you may spend some time looking into locks, if you're still interested.
_____________
Code for TallyGenerator
July 27, 2006 at 6:11 am
My job has finally started moving! One stage of the job takes a lot of locks on a table all at once. Could this be slowing down the job? (That particular sproc took 2 hours to run; fair enough, though, as it has a number of cursors in it and calls a lot of sprocs itself.) If I used
SELECT ..... WITH (NOLOCK), would that speed things up? The job only fills a table with data for a report, so if changes were made they wouldn't be too critical. Would that speed things up, in your opinions?
July 27, 2006 at 6:26 am
I will indeed look into sorting out this query. The queries were written in 1997-1998 and have worked since then, hence my belief that they were not at fault. (I only inherited the application and database this year.) It would be quite a large task to search through them all and change them, but I will try to make time to do it.
If I use (NOLOCK) after each SELECT statement, would that sort out some of my locking issues?
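For reference, the hint goes after each table reference; a minimal sketch, using the tables from this thread (bear in mind that NOLOCK permits dirty reads, so uncommitted or rolled-back data may appear in the results):

```sql
-- Read without taking shared locks (dirty reads are possible).
SELECT l.ContactName, l.programID
FROM tblLicRebateStatement l WITH (NOLOCK)
INNER JOIN Customer c WITH (NOLOCK)
    ON l.CustomerID = c.CustomerID
WHERE l.programID = 319
```

For a report-staging job where slight inaccuracy is acceptable, this trades consistency for reduced blocking.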
July 27, 2006 at 6:28 am
I tell you, it's not about speed!
It's not about locks!
Your UPDATE statement is wrong!
You update the whole table tblLicRebateStatement, all rows with any programID, not only those where programID = 319.
It was OK while you had only one programID in this table, but it's not OK anymore.
So, fix it!
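The point here is that because the UPDATE target is not aliased, SQL Server treats the tblLicRebateStatement in the FROM clause as a second, independent instance of the table, so the WHERE clause never restricts the rows being updated. If the intent is to keep the programID filter, the aliased form posted earlier can simply carry it; a sketch:

```sql
-- Updating through the alias ties the WHERE clause to the target rows.
UPDATE l
SET ContactName = c.ContactName
FROM tblLicRebateStatement l
INNER JOIN Customer c ON l.CustomerID = c.CustomerID
WHERE l.programID = 319   -- now only programID 319 rows are touched
```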
_____________
Code for TallyGenerator
July 27, 2006 at 7:39 am
But from day one there have always been up to 50 programIDs in the table, since 1997! I agree with you that the update is wrong, but why did it work great up until last night?
July 27, 2006 at 8:14 pm
The query does not make its own deadlocks... the query is not wrong. Take a look at Books Online... something else is wrong. It may be that someone added a trigger or left a transaction open on one of the two tables, but the query is not wrong. I will admit, though, that Sergiy's method is "more" correct and a good habit to get into, because there are cases where, if you are updating a column that also appears in the WHERE clause, you may get a naming conflict.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 27, 2006 at 8:24 pm
It may also be that the Customer table is actually some (synonym) view that someone changed, or whose underlying table was changed. My recommendation is to look at an execution plan for the query and check each icon for the expected number of rows... you may be surprised.
Another thing that can cause this type of problem is that maybe someone added a whacko index, or changed/deleted an existing index. It could also be that someone added a trigger to the target table.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 28, 2006 at 1:14 am
It may also be that the indexes are fragmented. Performance degrades as indexes get more fragmented and it may be that recently the fragmentation reached a point where the query optimiser picked a different plan because of the fragmentation.
You can run a contig check (dbcc showcontig ('<table name>')) though bear in mind it will take quite a while on a very large table and it's quite IO intensive. Not something to do during peak processing hours.
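A sketch of that check, assuming the table name from earlier in the thread; the figures worth watching are Scan Density (ideally close to 100%) and Logical Scan Fragmentation (ideally low):

```sql
-- Report index fragmentation (SQL Server 2000-era syntax).
DBCC SHOWCONTIG ('tblLicRebateStatement')

-- If fragmentation turns out to be high, rebuilding the table's
-- indexes is one remedy, e.g.:
-- DBCC DBREINDEX ('tblLicRebateStatement')
```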
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability