January 29, 2015 at 10:09 pm
Hi,
DB1 has a 13 GB mdf file, with the full recovery model and a log of about 250 MB.
I have a job that runs once a week and takes about 5 minutes.
But the day after, the log file has grown by about 9 GB, exactly after that job (I traced it in the .trn backup files).
Then I quickly shrink it again (I have repeated this for about 4 weeks now).
Why does this happen?
----------------------------------------------------
These are our job steps:
step1 : exec dbo.Admin_reindex
step2 : exec dbo.admin_updateStat
---------------------------------------------------
-- Admin_Reindex
USE [DB1]
GO
/****** Object: StoredProcedure [dbo].[Admin_reindex] Script Date: 1/30/2015 8:26:39 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[Admin_reindex]
as
if OBJECT_ID('tempdb..#temptable') is not null -- temp tables live in tempdb; master.dbo.sysdevices never lists them
drop table #temptable
SELECT 'alter index '+ quotename(i.name)+' on '+quotename(schema_Name(o.schema_id))+'.'+quotename(o.name)+' rebuild' as d1
into #temptable
FROM sys.indexes as i
inner join sys.objects as o on i.object_id = o.object_id
where o.is_ms_shipped = 0 -- skip system objects (replaces the object_id > 45 magic number)
and i.name is not null -- heaps have a NULL index name; <> 'NULL' only compared against the string 'NULL'
and o.name not like 'queue_messages%' and o.name not like 'filestream_tomb%'
and o.name not like 'sys%'
and schema_Name(o.schema_id) not like 'sys%'
declare @execStr nvarchar(max) -- varchar(256) could truncate commands with long table/index names
declare DB_Cursor Cursor For
select D1 from #temptable
open DB_Cursor
FETCH NEXT FROM DB_Cursor into @execStr
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
EXECUTE (@execStr)
END TRY
BEGIN CATCH
insert into Adm_ErrorIndexStat Values(@execStr, 'Index' , GETDATE())
END CATCH;
--print @execStr
FETCH NEXT FROM DB_Cursor into @execStr
END
CLOSE DB_Cursor
DEALLOCATE DB_Cursor
GO
-------------------------------------------------------------------
-- Admin_UpdateStat
USE [DB1]
GO
/****** Object: StoredProcedure [dbo].[Admin_UpdateStat] Script Date: 1/30/2015 8:27:23 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[Admin_UpdateStat]
as
if OBJECT_ID('tempdb..#temptable2') is not null -- temp tables live in tempdb; master.dbo.sysdevices never lists them
drop table #temptable2
SELECT 'UPDATE STATISTICS '+ quotename(schema_Name(o.schema_id))+'.'+quotename(o.name) + ' '+quotename(i.name) as d1
into #temptable2
FROM sys.indexes as i
inner join sys.objects as o on i.object_id = o.object_id
where o.is_ms_shipped = 0 -- skip system objects (replaces the object_id > 45 magic number)
and i.name is not null -- heaps have a NULL index name; <> 'NULL' only compared against the string 'NULL'
and o.name not like 'queue_messages%' and o.name not like 'filestream_tomb%'
and o.name not like 'sys%'
and schema_Name(o.schema_id) not like 'sys%'
declare @execStr nvarchar(max) -- varchar(256) could truncate commands with long table/index names
declare DB_Cursor Cursor For
select D1 from #temptable2
open DB_Cursor
FETCH NEXT FROM DB_Cursor into @execStr
WHILE @@FETCH_STATUS = 0
BEGIN
BEGIN TRY
EXECUTE (@execStr)
END TRY
BEGIN CATCH
insert into Adm_ErrorIndexStat Values(@execStr, 'Statistics' , GETDATE())
END CATCH;
--print @execStr
FETCH NEXT FROM DB_Cursor into @execStr
END
CLOSE DB_Cursor
DEALLOCATE DB_Cursor
GO
January 30, 2015 at 12:44 am
When you rebuild indexes, it generates a lot of transaction log records.
How often are your transaction log backups running? If you don't run them often enough, the log records accumulate and force the log file to grow. I usually recommend every 15 minutes, 24x7, as a starting point. If you still get substantial log growth, you may need to run transaction log backups more often while the re-indexing is running, such as every 5 minutes.
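The backup itself is a single BACKUP LOG statement scheduled through SQL Server Agent; a minimal sketch, where the backup path and naming convention are only placeholder assumptions:

```sql
-- Back up the transaction log of DB1 so the inactive portion can be reused.
-- Build a timestamped file name (the X:\SQLBackups path is an assumption).
DECLARE @ts nvarchar(20) = REPLACE(REPLACE(REPLACE(
    CONVERT(nvarchar(19), GETDATE(), 120), '-', ''), ':', ''), ' ', '_');
DECLARE @file nvarchar(260) = N'X:\SQLBackups\DB1_log_' + @ts + N'.trn';

BACKUP LOG [DB1] TO DISK = @file WITH CHECKSUM;
```

Scheduling this every 5-15 minutes during the maintenance window lets the log space be reused between index rebuilds instead of forcing the file to grow.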
No matter how often you run transaction log backups, the size of the largest table or index in the database will determine the lower limit for the transaction log file, since a re-index is done as a single transaction.
250 MB is probably too small for the transaction log file. I usually set it at about 25% of the size of the database as a starting point, so that would be about 4 GB. Once the transaction log file size is stabilized, do not shrink it, because that will cause performance problems.
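Pre-sizing the log means one deliberate ALTER DATABASE rather than many small autogrowth events. A sketch, assuming the logical log file name is 'DB1_log' (check sys.database_files for the real name):

```sql
-- Grow the log to ~4 GB up front and use a fixed-size growth increment.
-- The logical name 'DB1_log' is an assumption; verify it first with:
--   SELECT name, type_desc FROM DB1.sys.database_files;
USE [master];
ALTER DATABASE [DB1]
MODIFY FILE (NAME = N'DB1_log', SIZE = 4096MB, FILEGROWTH = 512MB);
```

A fixed FILEGROWTH in MB is generally preferred over percentage growth, which produces ever-larger growth events as the file gets bigger.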
You may need a better solution to avoid unnecessary re-indexing, like the free Minion Reindex:
Grant Fritchey Reviews Minion Reindex
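The idea behind such tools is to rebuild only indexes that are actually fragmented. A minimal hand-rolled check might look like this; the 30% and 1000-page thresholds are common rules of thumb, not the tool's actual logic:

```sql
-- List indexes in DB1 that are usually worth rebuilding:
-- more than 30% fragmented and at least 1000 pages
-- (small or lightly fragmented indexes rarely justify the log volume).
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(N'DB1'), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
  AND ips.page_count >= 1000
  AND i.name IS NOT NULL;
```

Driving the rebuild cursor from a query like this, instead of rebuilding every index every week, would directly cut the 9 GB of log generated by the job.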
January 31, 2015 at 8:10 pm
I like what Michael has said. I would add that it might be good to go over to sqlskills.com and search the blogs there for transaction log maintenance; there are several good articles. Brentozar.com, and probably other places as well, will have good info on transaction log management too.
If you don't have auto-growth set up properly and size the log properly up front, as Michael has suggested, then the log will be structured in such a way as to cause performance problems of various kinds.
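One concrete symptom of a log that grew through many small autogrowth events is an excessive number of virtual log files (VLFs). On SQL Server versions of this era you can count them with DBCC LOGINFO, which returns one row per VLF:

```sql
-- Each result row is one VLF; thousands of rows indicate autogrowth churn
-- and can slow recovery, log backups, and replication.
USE [DB1];
DBCC LOGINFO;
```

If the count is in the thousands, the usual remedy is to shrink the log once and then re-grow it to its target size in a few large increments, as described in the articles mentioned above.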