October 15, 2010 at 8:59 am
Hello everyone. I got a fun one I could use some help with.
I have 3 production servers. All 3 have the exact same hardware setup, and all 3 are running SQL Server 2005 SP3 64-bit Standard Edition. Note that when I say same hardware, I mean the same manufacturers on disks, motherboards, etc. Exact duplicates (our network admin is kind of a nazi that way 🙂 ). I set up the servers myself, and the default instance on each is configured the same way.
Now, the code below generates the WRITELOG wait type. When I run it, I find that on DB1 and DB2 it takes 1.75 - 2 minutes to complete. On DB3, however, it takes 16 minutes.
Taking it a step further, I checked each server's total wait time on WRITELOG wait types for today only. HUGE DIFFERENCE!
DB1 – 18 min 24 sec
DB2 – 36 min 46 sec
DB3 – 6 hours 34 minutes 26 seconds
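For reference, a per-server wait-time check like the one above can be sketched with a query against sys.dm_os_wait_stats. One caveat on the "today only" numbers: this DMV accumulates since the last service restart (or a manual clear), not per day, so a daily figure assumes the counters were cleared or snapshotted at the start of the day.

```sql
-- Cumulative WRITELOG wait time since the last restart (or manual clear
-- with DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)).
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms / 1000 / 60 AS wait_time_minutes,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG'
```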
Now, from what I have read, WRITELOG normally indicates a hardware issue: http://www.confio.com/English/Tips/WRITELOG.php However, I have checked Disk Queue Length, Disk Reads/sec, Disk Writes/sec, and Checkpoint pages/sec. All of them are below the Microsoft recommended baselines and in line with our own baselines for the system. I verified the counters while running the code below, and nothing looks wrong.
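As a complement to the perf counters, a sketch like the one below uses sys.dm_io_virtual_file_stats to measure write stalls against the log file itself, which would show whether log writes on DB3 are actually slower at the file level even when the disk counters look healthy:

```sql
-- Per-log-file write latency since the last restart.
-- io_stall_write_ms / num_of_writes approximates the average
-- time each log write spent waiting on the I/O subsystem.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.name AS file_name,
       vfs.num_of_writes,
       vfs.io_stall_write_ms,
       CASE WHEN vfs.num_of_writes = 0 THEN 0
            ELSE vfs.io_stall_write_ms / vfs.num_of_writes
       END AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
WHERE mf.type_desc = 'LOG'
```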
So now we reach my questions. What else could cause this wait type to happen? Could this be an issue with memory and not disk? Is there a setting that might affect this?
Thanks,
Fraggle
if not exists (select * from sys.tables where name = 'tproduct')
begin
    create table tproduct
    (
        productid int,
        Category int,
        name nvarchar(200),
        descn nvarchar(200)
    )
end
else
begin
    truncate table tproduct
end
declare @starttime datetime
set @starttime = getdate()

DECLARE @i INT
SET @i = 1

-- Run 1: one transaction per insert, so every COMMIT forces a log flush
WHILE @i < 100000
BEGIN
    BEGIN TRANSACTION
    INSERT INTO [tproduct] ([productid], [category], [name], [descn])
    VALUES (@i,
            floor(@i / 1000),
            'PROD' + REPLACE(str(@i), ' ', ''),
            'PROD' + REPLACE(str(@i), ' ', ''))
    SET @i = @i + 1
    COMMIT
END

SELECT 'run 1 took ', DATEDIFF(second, @starttime, getdate()) AS elapsed_seconds
truncate table tproduct
set @starttime = getdate()
SET @i = 1

-- Run 2: one transaction wrapping all inserts, committed once at the end
BEGIN TRANSACTION
WHILE @i < 100000
BEGIN
    INSERT INTO [tproduct] ([productid], [category], [name], [descn])
    VALUES (@i,
            floor(@i / 1000),
            'PROD' + REPLACE(str(@i), ' ', ''),
            'PROD' + REPLACE(str(@i), ' ', ''))
    SET @i = @i + 1
END
COMMIT

SELECT 'run 2 took ', DATEDIFF(second, @starttime, getdate()) AS elapsed_seconds
December 23, 2010 at 10:20 am
Doesn't the site you linked state that WRITELOG waits will take much longer if you commit each transaction inside the loop, versus committing them all as a single transaction at the end of the loop? Which example are you using?
Also, you might want to look at that database and see what the starting size and current size of the log file are. Perhaps the log file needs to be shrunk.
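A quick way to make that check is a sketch along these lines, which shows the current log size, how full it is, and the autogrow setting (run the SELECT in the database in question):

```sql
-- Log size and percent used for every database on the instance
DBCC SQLPERF(LOGSPACE)

-- Size and growth settings for the current database's log file(s);
-- size is stored in 8 KB pages, hence the conversion to MB
SELECT name,
       size * 8 / 1024 AS size_mb,
       growth,
       is_percent_growth
FROM sys.database_files
WHERE type_desc = 'LOG'
```

If the log is growing repeatedly in small increments during the test, the autogrow events themselves can add to the WRITELOG time, so the growth settings are worth a look as well as the raw size.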