April 4, 2007 at 9:58 am
Hi everyone!
I was wondering if anyone can give me advice on how often I should defrag my SQL Server databases. I am running SQL Server 2005. I'd also like the same info for rebuilding indexes. Any input or advice is greatly appreciated. I found the top 10 best practices on the Microsoft web site, but it doesn't go into detail about how often to do either process.
Thanks!
MSB
April 4, 2007 at 10:04 am
Depends on your update/insert frequency. The more often you do those the more often you might want to defrag/rebuild.
Lots of people do it weekly, some daily, some monthly.
and some never. Not sure I'd go that way.
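If you want a number rather than a gut feel, on 2005 you can measure fragmentation first and only act when it's actually bad. A quick sketch (the thresholds people use are only rules of thumb; run it in the database you care about):

```sql
-- Average fragmentation per index in the current database (SQL 2005).
-- 'LIMITED' is the cheapest scan mode; index_id > 0 skips heaps.
SELECT  OBJECT_NAME(ps.object_id)       AS table_name,
        i.name                          AS index_name,
        ps.avg_fragmentation_in_percent,
        ps.page_count
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') ps
JOIN    sys.indexes i
        ON  i.object_id = ps.object_id
        AND i.index_id  = ps.index_id
WHERE   ps.index_id > 0
ORDER BY ps.avg_fragmentation_in_percent DESC;
```

Indexes with only a handful of pages will always look fragmented, so many people also ignore anything under a few hundred pages.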
April 4, 2007 at 10:19 am
Thanks Steve! Basically, everything that I have read and researched agrees with what you said.
April 5, 2007 at 10:54 am
We took a script from SQL Server Central that checked fragmentation and then decided whether or not to reindex or defragment (great script, used it for years, I'm not sure which one it is anymore or I'd give you the link). I recently rewrote it for 2005 replacing the old 2000 functionality and making it work on all databases on a server instead of on a single database. It's hardly original work, but I can post it if you think it'll be useful.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 5, 2007 at 12:50 pm
I for one would love to have the script; I am up to my ears in this: multiple databases, multiple servers.
I've set up jobs on busy databases to check statistics, check integrity, reorganize, and then rebuild indexes, in that order, nightly.
The only problem is that doing this seems to create a huge transaction log after the first full backup the next morning. I have tried changing the recovery model to bulk_logged before running the jobs and then immediately switching it back to full, but that does not reduce the size of the transaction log. Any ideas here would be helpful.
I am thinking that running a full backup immediately after the index rebuilding job would reduce the inflation of the transaction log that was caused by the index rebuild.
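For reference, the switch I've been trying looks like this (MyDB is a placeholder name; I'm also wondering whether a log backup right after the rebuilds, rather than the full backup, is what actually frees the space):

```sql
-- Switch to bulk_logged for the rebuild window, then back to full.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
-- ... index rebuild jobs run here ...
ALTER DATABASE MyDB SET RECOVERY FULL;

-- A log backup afterwards marks the logged rebuild work as reusable
-- log space (path is a placeholder):
BACKUP LOG MyDB TO DISK = 'D:\backup\MyDB_log.bak';
```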
Also, where did you find the best practices? I am building a collection of best practice references, would like to see if this is one i already found.
Thanks, Mike
April 5, 2007 at 1:43 pm
I've mainly grown my best practices over time. We record them locally as we discover new ones that we didn't know we needed. MS has some help in this area: there's a Best Practices section on the SQL Server web site, and you can use the SQL Server Best Practices Analyzer to get the easy stuff. After that it's work.
Rebuilding indexes will grow the log. That's why it's best to only rebuild the ones you absolutely have to.
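For what it's worth, the usual rule of thumb looks something like this; the thresholds are judgment calls, and the table and index names below are placeholders:

```sql
-- Moderate fragmentation (roughly 10-30%): reorganize.
-- Always online, and much lighter on the transaction log.
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REORGANIZE;

-- Heavy fragmentation (over ~30%): rebuild, which is fully logged
-- under the full recovery model.
ALTER INDEX [IX_Example] ON [dbo].[ExampleTable] REBUILD;
```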
Check out the script at http://www.sqlservercentral.com/scripts/viewscript.asp?scriptid=1911
It's nothing special. I reworked better scripts from better people to get what I wanted out of it. Hopefully it'll prove helpful.
April 6, 2007 at 2:51 pm
Greetings all;
I have been on the same project here at my new position. I have a 2005 database that has over 400 tables, and who knows how many indexes. This database is the core of a .com application that gets a lot of activity. I have a maintenance window between 3 and 5 am to do whatever I can.
Coming out of a 2000 shop, I first had to identify what I had to do to manage this DB. I have been using sqlservercentral.com as a primary source of information, and it has been a great help. I've looked at the script Grant identified and actually found one I like better.
Check out this link:
http://www.sqlservercentral.com/scripts/viewscript.asp?scriptid=1475
I am in the process of retooling it to:
a) be 2005 compliant
b) run within a time window between 3-5 am. This requires checking the current system time to avoid running outside my maint. window.
c) provide a simple reporting feature to show what activity is taking place
d) provide a history to get a better understanding of the dynamics of the production database activity
e) standardize the process so I can feed stats to a central repository so I can have the process running on several other production DBs.
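For (b), what I have in mind is just a guard inside the index-processing loop; a sketch (5 am cutoff assumed, adjust to taste):

```sql
-- Item (b): bail out of the maintenance loop once the window closes.
-- This would sit at the top of the WHILE loop, before the next FETCH.
IF DATEPART(hh, GETDATE()) >= 5
    BREAK;  -- past the window; remaining indexes wait for tomorrow
```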
I've learned a lot over the past week since I've taken on this project. My boss is excited to see something get rolled out to better manage the production environment.
Kurt Zimmerman
DBA
RHWI
Poughkeepsie, NY
April 9, 2007 at 1:34 pm
I'm going to hog up a few posts with this, but here is the strategy I worked out. It's extremely flexible; in our case we went from 100% offline maintenance in SQL 2000 to 90% online in one database and 100% online in most of the others. It's also very nice since it dynamically selects the indexes to be rebuilt online and offline.
April 9, 2007 at 1:36 pm
First create a database on each server called index_stats.
Next create the following tables.
USE [Index_Stats]
GO
/****** Object: Table [dbo].[physical_stats] Script Date: 04/09/2007 15:35:17 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[physical_stats](
[server_name] [varchar](25) NULL,
[database_id] [smallint] NULL,
[database_name] [varchar](25) NULL,
[object_id] [int] NULL,
[table_name] [varchar](128) NULL,
[index_id] [int] NULL,
[partition_number] [int] NULL,
[index_type_desc] [nvarchar](60) NULL,
[alloc_unit_type_desc] [nvarchar](60) NULL,
[index_depth] [tinyint] NULL,
[index_level] [tinyint] NULL,
[avg_fragmentation_in_percent] [float] NULL,
[fragment_count] [bigint] NULL,
[avg_fragment_size_in_pages] [float] NULL,
[page_count] [bigint] NULL,
[avg_page_space_used_in_percent] [float] NULL,
[record_count] [bigint] NULL,
[ghost_record_count] [bigint] NULL,
[version_ghost_record_count] [bigint] NULL,
[min_record_size_in_bytes] [int] NULL,
[max_record_size_in_bytes] [int] NULL,
[avg_record_size_in_bytes] [float] NULL,
[forwarded_record_count] [bigint] NULL,
[date] [datetime] NULL,
[index_name] [varchar](50) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
USE [Index_Stats]
GO
/****** Object: Table [dbo].[command] Script Date: 04/09/2007 15:35:33 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[command](
[command] [varchar](1000) NULL,
[date] [datetime] NULL,
[database_name] [varchar](25) NULL,
[server_name] [varchar](25) NULL
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
April 9, 2007 at 1:38 pm
Next create a default. Make sure to change 'Server Name' to the local server's name; I'll explain why later.
USE [Index_Stats]
GO
/****** Object: Default [dbo].[server_name] Script Date: 04/09/2007 15:36:46 ******/
create default [dbo].[server_name] as 'Server Name'
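I assume the binding step is the part to be explained later; by itself the default does nothing until it's attached to the columns. Something like this should wire it up to the two tables above (on 2005, sp_bindefault is the way to bind an old-style default object):

```sql
USE [Index_Stats]
GO
-- Bind the server_name default to both tables, so inserts that omit
-- the column pick up the local server's name automatically.
EXEC sp_bindefault 'dbo.server_name', 'physical_stats.server_name';
EXEC sp_bindefault 'dbo.server_name', 'command.server_name';
```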
April 9, 2007 at 1:42 pm
Next create a job that will gather all your index frag statistics. Create a step for each database. Make sure to change the database id numbers for each DB, and change the database names to each of your database names. I run the detailed scan because my rule is to gather as much info as possible, even if you don't understand most of it.
-- Run system view to gather fragmentation data and insert into table
INSERT into index_stats.dbo.physical_stats ( database_id,
object_id,
index_id,
partition_number,
index_type_desc,
alloc_unit_type_desc,
index_depth,
index_level,
avg_fragmentation_in_percent,
fragment_count,
avg_fragment_size_in_pages,
page_count,
avg_page_space_used_in_percent,
record_count,
ghost_record_count,
version_ghost_record_count,
min_record_size_in_bytes,
max_record_size_in_bytes,
avg_record_size_in_bytes,
forwarded_record_count
)
select * from sys.dm_db_index_physical_stats (7, null, null, null, 'detailed')
update index_stats.dbo.physical_stats
set date = getdate() where date is null and database_id = 7
update index_stats.dbo.physical_stats
set database_name = 'SCS' where database_name is null and database_id = 7
update physical_stats
set table_name = name from scs..sysobjects b join physical_stats a on a.object_id = b.id where table_name is null and database_id = 7
update physical_stats
set index_name = name FROM scs.sys.indexes b inner join physical_stats a on a.object_id = b.object_id AND a.index_id = b.index_id
where database_id = 7
and index_name is null
and datepart(d, date) = datepart(d, getdate())
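One tweak worth considering (a sketch, not part of the original job): resolve the database id by name with DB_ID instead of hardcoding 7, so the step copies cleanly between servers where the id differs:

```sql
-- Resolve the id of the SCS database by name rather than hardcoding 7.
DECLARE @db_id smallint;
SET @db_id = DB_ID('SCS');

-- The gather step can then use the variable:
SELECT *
FROM sys.dm_db_index_physical_stats (@db_id, NULL, NULL, NULL, 'detailed');
```

The same @db_id would replace the literal 7 in the UPDATE statements that follow.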
April 9, 2007 at 1:44 pm
Here is my offline script.
USE [SCS]
GO
/****** Object: StoredProcedure [dbo].[usp_alter_index_offline] Script Date: 04/09/2007 15:43:17 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_alter_index_offline]
AS
/* Declare Variables */
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
/* Prepare list of tables to run maintenance on */
/* Get list of fragmented indexes*/
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag,
index_type_desc as index_type,
alloc_unit_type_desc as data_type
INTO index_stats..work_to_do_scs3
FROM index_stats..physical_stats where
datepart(d, date) = datepart(d, getdate()-1) and
datepart(yy, date) = datepart(yy, getdate()) and
datepart(m, date) = datepart(m, getdate()) and
avg_fragmentation_in_percent > 10.0
AND index_id > 0
and index_level = 0
and database_id = 7
/*Get list of indexes with large object data*/
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag,
index_type_desc as index_type,
alloc_unit_type_desc as data_type
INTO index_stats..work_to_do_scs4
FROM index_stats..physical_stats where
datepart(d, date) = datepart(d, getdate()-1) and
datepart(yy, date) = datepart(yy, getdate()) and
datepart(m, date) = datepart(m, getdate())
--avg_fragmentation_in_percent > 10.0
AND index_id > 0
and index_level = 0
and database_id = 7
and alloc_unit_type_desc = 'LOB_DATA'
/*delete list of indexes that cannot be maintained online*/
--delete from index_stats..work_to_do3 where objectid in (select objectid from index_stats..work_to_do4) and index_type = 'CLUSTERED INDEX'
/*Get data for cursor because i'm too lazy to add 2 more variables that won't be used in the cursor*/
/* I'm even lazier to change this last comment from the online sp */
SELECT
objectid,
indexid,
partitionnum,
frag
INTO #work_to_do_scs2
FROM index_stats..work_to_do_scs3
where objectid in
(select objectid from index_stats..work_to_do_scs4)
and index_type = 'CLUSTERED INDEX'
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do_scs2;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
FETCH NEXT
FROM partitions
INTO @objectid, @indexid, @partitionnum, @frag;
IF @@FETCH_STATUS < 0 BREAK;
SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
FROM sys.objects AS o
JOIN sys.schemas as s ON s.schema_id = o.schema_id
WHERE o.object_id = @objectid;
SELECT @indexname = QUOTENAME(name)
FROM sys.indexes
WHERE object_id = @objectid AND index_id = @indexid;
SELECT @partitioncount = count (*)
FROM sys.partitions
WHERE object_id = @objectid AND index_id = @indexid;
-- 10 is the arbitrary threshold; everything captured above it gets rebuilt here.
IF @frag > 10.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD' ;
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
insert index_stats..command (database_name, command, date) values ('SCS', @command, getdate());
EXEC (@command);
PRINT N'Executed: ' + @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
DROP TABLE index_stats..work_to_do_scs3;
DROP TABLE index_stats..work_to_do_scs4;
DROP TABLE #work_to_do_scs2;
April 9, 2007 at 1:45 pm
And here is my online script.
USE [SCS]
GO
/****** Object: StoredProcedure [dbo].[usp_alter_index_online] Script Date: 04/09/2007 15:44:21 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_alter_index_online]
AS
/* Declare Variables */
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
/* Prepare list of tables to run maintenance on */
/* Get list of fragmented indexes*/
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag,
index_type_desc as index_type,
alloc_unit_type_desc as data_type
INTO index_stats..work_to_do_scs1
FROM index_stats..physical_stats where
datepart(d, date) = datepart(d, getdate()-1) and
datepart(yy, date) = datepart(yy, getdate()) and
datepart(m, date) = datepart(m, getdate()) and
avg_fragmentation_in_percent > 10.0
AND index_id > 0
and index_level = 0
and database_id = 7
/*Get list of indexes with large object data*/
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag,
index_type_desc as index_type,
alloc_unit_type_desc as data_type
INTO index_stats..work_to_do_scs2
FROM index_stats..physical_stats where
datepart(d, date) = datepart(d, getdate()-1) and
datepart(yy, date) = datepart(yy, getdate()) and
datepart(m, date) = datepart(m, getdate())
--avg_fragmentation_in_percent > 10.0
AND index_id > 0
and database_id = 7
and alloc_unit_type_desc = 'LOB_DATA'
and index_level = 0
/*delete list of indexes that cannot be maintained online*/
delete from index_stats..work_to_do_scs1 where objectid in (select objectid from index_stats..work_to_do_scs2) and index_type = 'CLUSTERED INDEX'
/*Get data for cursor because i'm too lazy to add 2 more variables that won't be used in the cursor*/
SELECT
objectid,
indexid,
partitionnum,
frag
INTO #work_to_do_scs1
FROM index_stats..work_to_do_scs1
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do_scs1;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
FETCH NEXT
FROM partitions
INTO @objectid, @indexid, @partitionnum, @frag;-- @index_type, @data_type;
IF @@FETCH_STATUS < 0 BREAK;
SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
FROM sys.objects AS o
JOIN sys.schemas as s ON s.schema_id = o.schema_id
WHERE o.object_id = @objectid;
SELECT @indexname = QUOTENAME(name)
FROM sys.indexes
WHERE object_id = @objectid AND index_id = @indexid;
SELECT @partitioncount = count (*)
FROM sys.partitions
WHERE object_id = @objectid AND index_id = @indexid;
-- 10 is the arbitrary threshold; everything captured above it gets rebuilt here.
IF @frag > 10.0
SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD with (online = on)' ;
IF @partitioncount > 1
SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
insert index_stats..command (database_name, command, date) values ('SCS', @command, getdate());
EXEC (@command);
PRINT N'Executed: ' + @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
drop table index_stats..work_to_do_scs1;
drop table index_stats..work_to_do_scs2;
DROP TABLE #work_to_do_scs1;
April 9, 2007 at 1:46 pm
And here is my update statistics script.
USE [SCS]
GO
/****** Object: StoredProcedure [dbo].[usp_update_all_stats] Script Date: 04/09/2007 15:45:56 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER PROCEDURE [dbo].[usp_update_all_stats]
AS
/* Declare Variables */
Declare @table_name nvarchar(4000);
DECLARE @command nvarchar(4000);
/* get distinct list of tables in SCS database*/
SELECT distinct table_name
INTO index_stats..work_to_do_scs_stats
FROM index_stats..physical_stats where
database_id = 7
and datepart(d, date) = datepart(d, getdate()-1) and
datepart(yy, date) = datepart(yy, getdate()) and
datepart(m, date) = datepart(m, getdate())
--and object_id != 921119118
-- Declare the cursor for the list of tables to be processed.
DECLARE stats CURSOR FOR SELECT * FROM index_stats..work_to_do_scs_stats;
-- Open the cursor.
OPEN stats;
-- Loop through the tables.
FETCH NEXT
FROM stats
INTO @table_name;
WHILE @@fetch_status = 0
BEGIN;
-- Build the update statistics command for this table.
SET @command = 'update statistics ' + @table_name;
insert index_stats..command (database_name, command, date) values ('SCS', @command, getdate());
EXEC (@command);
PRINT N'Executed: ' + @command;
FETCH NEXT FROM stats INTO @table_name;
end;
-- Close and deallocate the cursor.
CLOSE stats;
DEALLOCATE stats;
-- Drop the temporary table.
drop table index_stats..work_to_do_scs_stats;
SET ANSI_NULLS OFF
April 9, 2007 at 1:56 pm
I posted earlier versions here a few months ago, and I've spent some time cleaning them up a bit. This whole system is at around version 0.8, and I still have a long way to go to finish everything.
And here is an explanation for my madness.
The command table is to keep a record of how long it takes to run each command, as well as what time it was run. This is mostly for user-complaint purposes, to see when something was offline.
The reason for the server and database names is that we are dumping the records daily into a master physical_stats table on another server, with Reporting Services producing a report on row counts, frag statistics, etc.
The scripts will work on only those indexes that need maintenance, and depending on your schema the amount of offline maintenance will be kept to a minimum. We have seen reductions from 4 hours down to 30 minutes or so. The reason for the index_level = 0 filter is that we found the same index being rebuilt 2-3 times in a row; it looks like level 0 is the actual leaf level of the index, and index levels 1 and 2 are the intermediate index pages.
The original script was taken from Books Online, and I added a bunch of other parts since it had some bugs. I had the most trouble with the online part, so I decided to dump everything into temp tables, work on the data until I had only what I needed, and then dump that into the table for the cursor. I also had trouble with the stats cursor, since it would go into a loop; people here helped me fix it.
My next step is to build reports; we want to track maintenance by following index frag levels over a month or so. I'm also going to write a few queries to pull the indexes that need maintenance, and then join the data from the command table to see if they were worked on.
We have different schedules for different servers, but I'm also testing it on a daily basis on one of the servers to see how it goes. The idea is that it will work on only a small number of indexes once it gets going after a few days, and the entire process will take maybe 30 minutes per night.
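The query I have in mind for that join looks roughly like this (a sketch, untested; note the command table stores the bracketed index name from QUOTENAME, which is why the match uses LIKE):

```sql
-- Yesterday's fragmented indexes, joined to the commands actually run.
-- A NULL command column flags an index that needed work but got none.
SELECT  ps.database_name,
        ps.table_name,
        ps.index_name,
        ps.avg_fragmentation_in_percent,
        c.command,
        c.date AS command_date
FROM    index_stats.dbo.physical_stats ps
LEFT JOIN index_stats.dbo.command c
        ON  c.database_name = ps.database_name
        AND c.command LIKE '%' + ps.index_name + '%'
        AND datepart(d, c.date) = datepart(d, ps.date)
WHERE   ps.avg_fragmentation_in_percent > 10.0
  AND   ps.index_id > 0
  AND   ps.index_level = 0;
```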