July 10, 2008 at 7:37 am
Hi,
I've tried this on two installations of SQL Server 2005 Service Pack 2. If I delete the rows of a table, an outer join against the now-empty table is extremely slow; if I truncate it instead, it's fine, and if I delete with the TABLOCK hint it's also fine. Any ideas why? The code to reproduce this follows: the outer join takes under 1 second after TRUNCATE or with TABLOCK, but 64 seconds with a plain DELETE.
Thanks.
CREATE TABLE [testtab1](
[col1] [bigint] NULL,
[col2] [bigint] NULL
) ON [PRIMARY]
CREATE TABLE [testtab2](
[col1] [bigint] NULL,
[col2] [bigint] NULL
) ON [PRIMARY]
declare @i int
set @i = 1
while @i <= 20000
begin
insert into testtab1 values(@i,floor(10*rand()))
set @i = @i + 1
end
DECLARE @date datetime
SET @date = getdate()
INSERT INTO testtab2 (col1, col2)
SELECT b.col1,b.col2
FROM testtab1 b
LEFT OUTER JOIN testtab2 a
ON a.col1 = b.col1 AND a.col2 = b.col2
WHERE a.col1 IS NULL OR a.col2 IS NULL
print 'Elapsed time for join/insert after table creation:'+CONVERT(CHAR(8), GETDATE()-@date, 108)
truncate table testtab2
SET @date = getdate()
INSERT INTO testtab2 (col1, col2)
SELECT b.col1,b.col2
FROM testtab1 b
LEFT OUTER JOIN testtab2 a
ON a.col1 = b.col1 AND a.col2 = b.col2
WHERE a.col1 IS NULL OR a.col2 IS NULL
print 'Elapsed time for join/insert after truncate table:'+CONVERT(CHAR(8), GETDATE()-@date, 108)
delete from testtab2 -- with (tablock)
SET @date = getdate()
INSERT INTO testtab2 (col1, col2)
SELECT b.col1,b.col2
FROM testtab1 b
LEFT OUTER JOIN testtab2 a
ON a.col1 = b.col1 AND a.col2 = b.col2
WHERE a.col1 IS NULL OR a.col2 IS NULL
print 'Elapsed time for join/insert after delete:'+CONVERT(CHAR(8), GETDATE()-@date, 108)
Output:
(20000 row(s) affected)
Elapsed time for join/insert after table creation:00:00:00
(20000 row(s) affected)
Elapsed time for join/insert after truncate table:00:00:00
(20000 row(s) affected)
(20000 row(s) affected)
Elapsed time for join/insert after delete:00:01:04
Output if the tablock hint is uncommented:
(20000 row(s) affected)
Elapsed time for join/insert after table creation:00:00:00
(20000 row(s) affected)
Elapsed time for join/insert after truncate table:00:00:00
(20000 row(s) affected)
(20000 row(s) affected)
Elapsed time for join/insert after delete:00:00:00
July 10, 2008 at 8:47 am
Hmmm... I just ran the test and saw no difference between truncate and delete.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
July 10, 2008 at 9:45 am
Do you have automatic statistics updating turned off?
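(For reference, a quick way to check that setting; 'YourDatabase' is a placeholder for the actual database name:)

```sql
-- Returns 1 if AUTO_UPDATE_STATISTICS is ON for the database, 0 if OFF
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoUpdateStatistics') AS auto_update_stats;
```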
July 10, 2008 at 10:27 am
I ran the test on SQL 2K5 SP2 EE and, surprisingly, got results similar to the original poster's.
I also checked the execution plans: they are identical.
The only differences I found were in the I/O statistics and the locking (which was expected)...
With tablock:
Table 'testtab2'. Scan count 1, logical reads 20061, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Locks Held: OBJECT(Exclusive)
Without tablock:
Table 'testtab2'. Scan count 1, logical reads 400061, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Locks Held: OBJECT(Exclusive), PAGE(Exclusive)
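(For anyone wanting to reproduce these figures: they look like the output of SET STATISTICS IO, which can be enabled around the insert from the original repro script like so:)

```sql
-- Report logical/physical reads per table touched by the statement
SET STATISTICS IO ON;

INSERT INTO testtab2 (col1, col2)
SELECT b.col1, b.col2
FROM testtab1 b
LEFT OUTER JOIN testtab2 a
  ON a.col1 = b.col1 AND a.col2 = b.col2
WHERE a.col1 IS NULL OR a.col2 IS NULL;

SET STATISTICS IO OFF;
```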
Do you have automatic statistics updating turned off?
Set to ON
--Ramesh
July 10, 2008 at 2:56 pm
The database has automatic statistic ON
I ran a few more tests, this time on a table with 100,000 rows. The slow behaviour only seems to manifest itself if I delete all rows, or if I delete all but the last row or all but the first row i.e:
delete from testtab2
delete from testtab2 where col1 < 100000
delete from testtab2 where col1 > 1
Any other boundary values I tried for col1 in the where condition, e.g.
delete from testtab2 where col1 > 0 and col1 < 99999
delete from testtab2 where col1 > 2
delete from testtab2 where col1 > 20000 and col1 < 80000
etc., don't seem to exhibit the same problem.
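(One way to see whether a statistics update is implicated is to check the last-updated time of the statistics on testtab2 immediately before and after the slow insert; a sketch using the catalog views:)

```sql
-- Last update time of each statistics object on testtab2;
-- if the timestamp changes across the slow insert, an auto-update fired
SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('testtab2');
```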
July 11, 2008 at 4:02 am
I ran the test with the same experience. This is due to the effect of Auto Update Statistics, as per BOL:
"AUTO_UPDATE_STATISTICS - When set to ON, any out-of-date statistics required by a query for optimization are automatically updated."
And the query optimizer decides that the statistics are out of date when you delete all the rows of a table.
Change the DB option Auto Update Statistics Asynchronously to On and your query will run in 00:00:00.
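(The equivalent T-SQL, with 'YourDatabase' as a placeholder for the actual database name:)

```sql
-- Let queries compile with the stale statistics while the
-- statistics update runs in the background
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;
```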
/Rasmus Glibstrup
July 11, 2008 at 3:51 pm
Thanks for everyone's assistance. Considering the conditions required to reproduce this, I'm not too worried about it...I'll just try to remember to avoid using DELETE when getting rid of all the rows of a table.
July 11, 2008 at 4:46 pm
Heh... good idea... just remember that TRUNCATE won't work in the presence of Foreign Keys... 😉
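(A minimal illustration of that restriction, with hypothetical parent/child tables:)

```sql
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child (
    id int,
    parent_id int CONSTRAINT fk_child_parent FOREIGN KEY REFERENCES parent(id)
);

TRUNCATE TABLE parent; -- fails: Msg 4712, cannot truncate a table
                       -- referenced by a FOREIGN KEY constraint
DELETE FROM parent;    -- allowed, as long as no child rows reference it
```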
--Jeff Moden
Change is inevitable... Change for the better is not.