April 15, 2013 at 4:31 am
Using sp_spaceused I get:
Database size - 10011.13 MB
Unallocated space - 288.22 MB
Reserved - 9832608 KB
Data - 3779736 KB
Index - 5912608 KB
Unused - 140264 KB
(Note: I am doing performance testing in my DB, where I have only 2 tables.)
April 15, 2013 at 4:55 am
Can you run this please?
EXEC sp_spaceused 'mytable'
- change the table name to the name of the table you are deleting from.
Also, run this:
DBCC SQLPERF ( LOGSPACE )
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
April 15, 2013 at 5:05 am
Result of EXEC sp_spaceused 'tablename':
Rows = 12309864
Reserved = 6976112 KB
Data = 5038728 KB
Index = 1795848 KB
Unused = 141536 KB
Result of DBCC SQLPERF ( LOGSPACE ):
Log Size (MB) = 7584.117
Log Space Used (%) = 97.98335
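A log that is ~98% full usually means something is preventing log truncation. A minimal sketch of how to check and relieve this, assuming the database name 'Experimental' from a later post and the FULL recovery model (both are assumptions; adjust to your environment):

```sql
-- See what is preventing log reuse (e.g. LOG_BACKUP, ACTIVE_TRANSACTION).
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'Experimental';

-- Under the FULL recovery model, taking log backups between delete batches
-- lets the log space be reused instead of growing (path is illustrative).
BACKUP LOG Experimental TO DISK = N'D:\Backups\Experimental_log.trn';
```

If log_reuse_wait_desc shows LOG_BACKUP, scheduling log backups during the purge should keep the log from filling.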
April 15, 2013 at 5:22 am
Last of all, can you post the actual batch please? Change the table name if you need to.
April 15, 2013 at 5:40 am
BEGIN TRY
DECLARE @filterDate DATETIME; -- set to the cut-off date before running
DECLARE @Id BIGINT, @DeleteRowcount INT;
CREATE TABLE #aaa
(
Id BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
xyzmasterCode VARCHAR(15) NOT NULL,
NumberOfRows INT NOT NULL
)
INSERT INTO #aaa (xyzmasterCode, NumberOfRows)
SELECT xyzmasterCode, COUNT(*)
FROM xyz
WHERE xyzmasterCode BETWEEN '0' AND 'Z'
AND filterDate <= @filterDate
GROUP BY xyzmasterCode
SET @Id = 0
WHILE 1 = 1
BEGIN
BEGIN TRAN
DELETE a
FROM xyz a
INNER JOIN #aaa ON a.xyzmasterCode = #aaa.xyzmasterCode
WHERE #aaa.Id BETWEEN @Id AND @Id + 100
AND a.filterDate <= @filterDate
SET @DeleteRowcount = @@ROWCOUNT
DELETE J
FROM xyzmaster J
INNER JOIN #aaa ON J.Code = #aaa.xyzmasterCode
WHERE #aaa.Id BETWEEN @Id AND @Id + 100
COMMIT TRANSACTION;
CHECKPOINT
IF @DeleteRowcount <= 0
BREAK
SET @Id = @Id + 100
END
END TRY
BEGIN CATCH
IF @@TRANCOUNT > 0
ROLLBACK TRANSACTION;
END CATCH
April 15, 2013 at 6:26 am
That looks ok to me, other than the joins to the temp table not being optimized. You don't really need the temp table anyway:
DECLARE @RowsDeleted INT = 1000;
WHILE @RowsDeleted > 0
BEGIN
BEGIN TRAN;
DELETE TOP(1000) s
FROM [Experimental].[dbo].[SampleData] s
WHERE xyzmasterCode BETWEEN '0' AND 'Z'
AND filterDate <= @filterDate;
SET @RowsDeleted = @@ROWCOUNT;
COMMIT TRAN;
CHECKPOINT;
BEGIN TRAN;
DELETE TOP(1000) s
FROM [Experimental].[dbo].[SampleData] s
WHERE xyzmasterCode BETWEEN '0' AND 'Z';
SET @RowsDeleted = @RowsDeleted + @@ROWCOUNT;
COMMIT TRAN;
CHECKPOINT;
END
You could run it as one transaction spanning both deletes or as two as shown - I'd try both.
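The single-transaction variant mentioned above could look like this (a sketch only; table and variable names are carried over from the snippet, and @filterDate is assumed to be set by the caller):

```sql
DECLARE @RowsDeleted INT = 1000;
DECLARE @filterDate DATETIME; -- assumed to be supplied by the caller

WHILE @RowsDeleted > 0
BEGIN
BEGIN TRAN;
-- Both deletes commit or roll back together.
DELETE TOP(1000) s
FROM [Experimental].[dbo].[SampleData] s
WHERE xyzmasterCode BETWEEN '0' AND 'Z'
AND filterDate <= @filterDate;
SET @RowsDeleted = @@ROWCOUNT;
DELETE TOP(1000) s
FROM [Experimental].[dbo].[SampleData] s
WHERE xyzmasterCode BETWEEN '0' AND 'Z';
SET @RowsDeleted = @RowsDeleted + @@ROWCOUNT;
COMMIT TRAN;
CHECKPOINT;
END
```

The trade-off is that the single transaction holds locks and pins the log for longer per iteration, which is why trying both is worthwhile.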
April 17, 2013 at 2:22 am
ChrisM@Work (4/15/2013)
That looks ok to me, other than the joins to the temp table not being optimized. You don't really need the temp table anyway [...]
Add a WAITFOR DELAY statement after each COMMIT. This will help to reduce blocking.
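The suggestion could be applied like this inside the delete loop (a sketch; the one-second delay is an arbitrary value to tune for your workload):

```sql
COMMIT TRAN;
CHECKPOINT;

-- Pause between batches so concurrent sessions can acquire their locks
-- and log clearing can keep pace (the delay value is illustrative).
WAITFOR DELAY '00:00:01';
```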
April 17, 2013 at 3:20 am
baabhu (4/17/2013)
ChrisM@Work (4/15/2013)
That looks ok to me, other than the joins to the temp table not being optimized. You don't really need the temp table anyway [...]
Add a waitfor delay statement after the commit. This will help to reduce the blocking.
Can you explain what you mean by this, baabhu? I can understand why a delay might be useful - removing obsolete log data isn't instantaneous - but I don't understand what this has to do with blocking. Cheers.
April 21, 2013 at 2:18 pm
Hi
From your results it seems you have many schema locks.
I just had a similar scenario yesterday, without locks but with high CPU. Some update scripts had been running 'forever'. We stopped them, rebuilt the indexes and ran them again, and they completed fine. Maybe this will help you.
Regards
IgorMi
Igor Micev,My blog: www.igormicev.com
April 21, 2013 at 2:50 pm
IgorMi (4/21/2013)
I just had a similar scenario yesterday, without locks but with high CPU. Some update scripts had been running 'forever'. We stopped them, rebuilt the indexes and ran them again, and they completed fine.
That was stale statistics resulting in a sub-optimal exec plan (probably estimating one row and getting thousands or millions). Next time just update stats, don't waste time doing a full index rebuild.
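Updating statistics as suggested could look like this (the table name dbo.xyz follows the hypothetical names used earlier in the thread):

```sql
-- Refresh statistics on the affected table only; FULLSCAN is optional
-- but gives the most accurate histograms after large data changes.
UPDATE STATISTICS dbo.xyz WITH FULLSCAN;

-- Or refresh database-wide; sp_updatestats only touches stats
-- that have had row modifications since the last update.
EXEC sp_updatestats;
```

Either is far cheaper than an index rebuild, which rewrites the whole index just to get the statistics refresh as a side effect.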
p.s. All queries take shared schema locks when they run, this is normal behaviour. Hence having lots of schema locks is not a problem (unless you have a bunch of schema mod locks too)
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 22, 2013 at 1:57 am
GilaMonster (4/21/2013)
That was stale statistics resulting in a sub-optimal exec plan (probably estimating one row and getting thousands or millions). Next time just update stats, don't waste time doing a full index rebuild.
You're right. We did that because there were no other options at the moment, and we needed it immediately.
p.s. All queries take shared schema locks when they run, this is normal behaviour. Hence having lots of schema locks is not a problem (unless you have a bunch of schema mod locks too)
Igor Micev,My blog: www.igormicev.com
April 23, 2013 at 8:39 am
Below is what I believe to be a simplified version of what you're attempting to accomplish. It makes a single indexed pass to delete from the transactional and master tables, so I expect it would perform better.
-- For purpose of this example, create xyzmaster table.
if object_id('dbo.xyzmaster') is not null drop table xyzmaster;
create table xyzmaster (xyzmasterCode char(1) not null primary key);
-- For purpose of this example, create xyz table.
if object_id('dbo.xyz') is not null drop table xyz;
create table xyz (xyzmasterCode char(1) not null, filterDate datetime not null);
-- Also, it's important to have an index on xyzmasterCode:
create index ix_xyzmasterCode on xyz (xyzmasterCode);
-- Create a temp table to hold deleted codes. It's important to have
-- an index on xyzmasterCode. Also, we're making xyzmasterCode unique,
-- because that's all we need and will keep this recordset small.
-- The ignore_dup_key option ensures xyzmasterCode stays unique.
if object_id('tempdb..#DeletedCodes') is not null drop table #DeletedCodes;
create table #DeletedCodes (xyzmasterCode char(1) not null );
create unique nonclustered index [uk_xyzmasterCode]
on #DeletedCodes (xyzmasterCode)
with (IGNORE_DUP_KEY = on);
-- Supply the date parameter:
declare @filterDate datetime = getdate ();
-- Delete transactional records from xyz.
-- Note that a distinct list of deleted codes will be inserted into #DeletedCodes.
delete
from xyz
output deleted.xyzmasterCode
into #DeletedCodes(xyzmasterCode)
where (xyzmasterCode between '0' and 'Z')
and filterDate <= @filterDate;
-- Next, delete from xyzmaster all inactive codes:
delete xyzmaster
from xyzmaster
join #DeletedCodes
on #DeletedCodes.xyzmasterCode = xyzmaster.xyzmasterCode;
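Since the master-table delete depends on the codes captured by the OUTPUT clause, the two statements above could be wrapped in one transaction so they succeed or fail together (a sketch under the same table names; style kept consistent with the example):

```sql
set xact_abort on;  -- roll back the whole transaction on any runtime error

begin tran;

delete
from xyz
output deleted.xyzmasterCode
into #DeletedCodes(xyzmasterCode)
where (xyzmasterCode between '0' and 'Z')
and filterDate <= @filterDate;

delete xyzmaster
from xyzmaster
join #DeletedCodes
on #DeletedCodes.xyzmasterCode = xyzmaster.xyzmasterCode;

commit tran;
```

Without this, a failure in the second delete would leave transactional rows gone but their now-orphaned master codes still in place.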
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho