July 25, 2010 at 9:37 am
Hi,
We have SQL Server 2005 EE x64 with SP3. For our production database, the indexes' avg_fragmentation_in_percent is always below 30 (mostly 20 to 25).
This database is configured with log shipping. When I defragment the indexes, the next log backup generated is 30 GB, larger than the database itself (27 GB), and the copy job takes 20 hours to get it to the secondary. The network team is asking us to send log files of a fixed small size (<100 MB), because copying a 30 GB file over the network is affecting the production users.
Is there any concept in SQL Server for generating fixed-size log backups and sending them to the secondary?
In Oracle Data Guard we do it that way: when the log file reaches 100 MB, a log backup is generated and applied to the standby. I'm looking for a similar concept in SQL Server. Is that possible?
Also, my index fragmentation never goes above 30%. Even if I don't defragment for a month, avg_fragmentation_in_percent for all the indexes stays below 30.
So if I do an index rebuild instead, will the generated log backup be smaller than with an index reorganize?
Please advise.
July 25, 2010 at 9:40 am
Why don't you use a custom index rebuild script that only rebuilds indexes that need it? That should reduce the log impact.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 25, 2010 at 12:58 pm
Thanks Gail,
I'm using the script below, which defragments the indexes based on avg_fragmentation_in_percent:
<30: reorganize
>=30: rebuild
In our case all the indexes have avg_fragmentation_in_percent < 30, so it only ever does an index reorganize, and a lot of log is generated.
How can I tell which indexes actually need defragmenting?
Here is the script I'm using:
-- Ensure a USE <databasename> statement has been executed first.
SET NOCOUNT ON;

DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);

-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.
SELECT
    object_id AS objectid,
    index_id AS indexid,
    partition_number AS partitionnum,
    avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0
  AND index_id > 0
  AND page_count > 500;

-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;

-- Open the cursor.
OPEN partitions;

-- Loop through the partitions.
WHILE (1 = 1)
BEGIN
    FETCH NEXT FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag;

    IF @@FETCH_STATUS < 0 BREAK;

    SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;

    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;

    SELECT @partitioncount = COUNT(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;

    -- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
    IF @frag < 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
    IF @frag >= 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
    IF @partitioncount > 1
        SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));

    EXEC (@command);
    PRINT N'Executed: ' + @command;
END;

-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;

-- Drop the temporary table.
DROP TABLE #work_to_do;
GO
Thanks
July 25, 2010 at 1:57 pm
You could always edit that script and raise the thresholds, unless you know that you need to rebuild/reorganise at those levels for performance.
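To see which indexes would actually qualify under a higher threshold before changing the maintenance script, a quick check against the same DMV the script uses can help. A minimal sketch (the 30%/500-page cut-offs are illustrative, not a recommendation; run in the target database):

```sql
-- List indexes that would qualify for maintenance at a 30% fragmentation
-- threshold; LIMITED mode avoids a heavy full scan of the index pages.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.partition_number,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30.0
  AND ips.index_id > 0
  AND ips.page_count > 500
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

If this returns nothing, the maintenance script (with the same thresholds) would do no work at all, and no extra log would be generated.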
If you're rebuilding or reorganising, you're going to generate a lot of log. Reorganise has a smaller log impact, but if there are lots of indexes and your log backups aren't frequent enough, you'll still get large log backups.
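On the fixed-size question: SQL Server has no built-in way to cap the size of a log backup, but taking log backups more often splits the same total log into many small files instead of one huge one. With log shipping already configured, the simplest route is to shorten the backup job's schedule in the log shipping settings; a manual equivalent might look like this sketch (the database name [YourDB] and path D:\LogShip are hypothetical placeholders):

```sql
-- Schedule something like this every few minutes via a SQL Agent job.
-- Note: more frequent backups don't reduce total log volume, only file size.
DECLARE @file nvarchar(260);
SET @file = N'D:\LogShip\YourDB_'
    + CONVERT(nvarchar(8), GETDATE(), 112)                            -- yyyymmdd
    + N'_' + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), N':', N'') -- hhmmss
    + N'.trn';
BACKUP LOG [YourDB] TO DISK = @file;
```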