December 7, 2009 at 9:52 am
-- =======================================================
-- QUESTIONS
-- What is the optimum cut-off for page_count?
-- Do statistics need to be redone before or after this?
-- =======================================================
-- This job reorganizes or rebuilds all indexes in a specific database, depending
-- on their current level of fragmentation:
-- x <= 5% = No action
-- 5% < x <= 30% = Reorganize index
-- x > 30% = Rebuild index
-- Ensure the job name matches the database name and the USE statement. Set up
-- the job so it appends to a file following the naming convention used in the
-- email at the bottom of the script. Ensure statistics are updated after this
-- job has run.
-- =======================================================
USE Cabinet
GO
SET NOCOUNT ON
-- Declare variables
DECLARE @database_name NVARCHAR(128)
DECLARE @schema_name NVARCHAR(128)
DECLARE @table_or_view_name NVARCHAR(128)
DECLARE @index_name NVARCHAR(128)
DECLARE @avg_fragmentation_in_percent FLOAT
DECLARE @start_time CHAR(23)
DECLARE @finish_time CHAR(23)
DECLARE @command NVARCHAR(4000)
DECLARE @table_variable TABLE
([database_name] NVARCHAR(128)
,[schema_name] NVARCHAR(128)
,[table_or_view_name] NVARCHAR(128)
,[index_name] NVARCHAR(128)
,[avg_fragmentation_in_percent] FLOAT
)
-- Log time stamp for the beginning of the script
PRINT 'START: ' + CONVERT(CHAR(23), CURRENT_TIMESTAMP, 126)
INSERT INTO
@table_variable
SELECT
DB_NAME(DDIPS.[database_id]) AS [database_name]
,S.[name] AS [schema_name]
,O.[name] AS [table_or_view_name]
,I.[name] AS [index_name]
,DDIPS.[avg_fragmentation_in_percent]
FROM
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'DETAILED') DDIPS
INNER JOIN sys.sysdatabases SD
ON DDIPS.[database_id] = SD.[dbid]
INNER JOIN sys.objects O
ON DDIPS.[object_id] = O.[object_id]
INNER JOIN sys.indexes I
ON DDIPS.[index_id] = I.[index_id]
AND I.[object_id] = O.[object_id]
INNER JOIN sys.schemas S
ON S.[schema_id] = O.[schema_id]
INNER JOIN sys.partitions P
ON DDIPS.[object_id] = P.[object_id]
AND I.[index_id] = P.[index_id]
WHERE
DDIPS.[page_count] > 8 -- Ignore indexes of one extent (8 pages = 64KB) or less
AND DDIPS.[avg_fragmentation_in_percent] > 5 -- Ignore indexes with <= 5% fragmentation
AND DDIPS.[index_type_desc] IN ('CLUSTERED INDEX','NONCLUSTERED INDEX') -- Only get clustered and nonclustered indexes
AND I.[is_hypothetical] = 0 -- Only real indexes
AND O.[type_desc] = 'USER_TABLE' -- Restrict to user tables
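-- Prime the loop: grab one unprocessed row; the WHILE below runs until the table variable is empty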
SELECT TOP 1
@database_name = [database_name]
,@schema_name = [schema_name]
,@table_or_view_name = [table_or_view_name]
,@index_name = [index_name]
,@avg_fragmentation_in_percent = [avg_fragmentation_in_percent]
FROM
@table_variable
WHILE( @@rowcount <> 0 )
BEGIN
-- Grab time stamp for the beginning of this index
SET @start_time=CONVERT(CHAR(23), CURRENT_TIMESTAMP, 126)
IF @avg_fragmentation_in_percent <= 30 -- REORGANIZE
SET @command='ALTER INDEX [' + @index_name + '] ON [' + @database_name + '].[' + @schema_name + '].[' + @table_or_view_name + '] REORGANIZE;'
ELSE -- REBUILD
SET @command='ALTER INDEX [' + @index_name + '] ON [' + @database_name + '].[' + @schema_name + '].[' + @table_or_view_name + '] REBUILD WITH (SORT_IN_TEMPDB = ON, STATISTICS_NORECOMPUTE = ON, ONLINE = ON, MAXDOP = 0);'
EXECUTE (@command);
-- Grab time stamp for the ending of this index
SET @finish_time=CONVERT(CHAR(23), CURRENT_TIMESTAMP, 126)
PRINT @start_time + ' | ' + @finish_time + ' | ' + CAST(@avg_fragmentation_in_percent AS CHAR(7)) + ' | ' + @command
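-- Remove the row just processed; matching on every column stands in for a key on the table variable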
DELETE FROM
@table_variable
WHERE
[database_name] = @database_name
AND [schema_name] = @schema_name
AND [table_or_view_name] = @table_or_view_name
AND [index_name] = @index_name
AND [avg_fragmentation_in_percent] = @avg_fragmentation_in_percent
SELECT TOP 1
@database_name = [database_name]
,@schema_name = [schema_name]
,@table_or_view_name = [table_or_view_name]
,@index_name = [index_name]
,@avg_fragmentation_in_percent = [avg_fragmentation_in_percent]
FROM
@table_variable
END
-- Log time stamp for the ending of the script
PRINT 'STOP: ' + CONVERT(CHAR(23), CURRENT_TIMESTAMP, 126)
-- Send email to the SQL gods!
DECLARE @profile_name VARCHAR(5)
DECLARE @recipients VARCHAR(200)
DECLARE @subject VARCHAR(200)
DECLARE @body VARCHAR(200)
SET @profile_name = 'Jason'
SET @recipients = 'jason.stephens@anmedhealth.org'
SET @subject = @@SERVERNAME + ' - Index rebuild of ' + DB_NAME(DB_ID())
SET @body = '\\' + @@SERVERNAME + '\c$\anmed\logs\' + DB_NAME(DB_ID()) + '\' + DB_NAME(DB_ID()) + '_index_maintenance.txt'
EXEC msdb.dbo.sp_send_dbmail @profile_name=@profile_name, @recipients=@recipients, @subject=@subject, @body=@body;
The email works just fine. The PRINT statements work just fine. The EXECUTE statement appears to work, but we don't see anything in Profiler when we fire off the script. The indexes don't appear to get smaller; the fragmentation stays the same (most of the time). There were a few times the fragmentation changed, but I am beginning to wonder whether it was really other operations happening on the server that caused the change. Is there anything wrong with the above?
I am going to put this in a job to automatically reorg/rebuild indexes as needed. It just doesn't appear to do much :unsure:
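One way to narrow that down is to take the loop and the dynamic SQL out of the picture: rebuild a single index by hand and compare the fragmentation before and after. A minimal sketch, where dbo.SomeTable and IX_SomeTable are hypothetical placeholders for one of the larger tables in Cabinet:
USE Cabinet
GO
-- Fragmentation before (dbo.SomeTable and IX_SomeTable are placeholder names)
SELECT
I.[name] AS [index_name]
,DDIPS.[avg_fragmentation_in_percent]
,DDIPS.[page_count]
FROM
sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'DETAILED') DDIPS
INNER JOIN sys.indexes I
ON DDIPS.[object_id] = I.[object_id]
AND DDIPS.[index_id] = I.[index_id]
-- Rebuild one index directly, with no dynamic SQL involved
ALTER INDEX [IX_SomeTable] ON [dbo].[SomeTable] REBUILD;
-- Fragmentation after; if it doesn't drop, a small page_count is the usual explanation
SELECT
I.[name] AS [index_name]
,DDIPS.[avg_fragmentation_in_percent]
,DDIPS.[page_count]
FROM
sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.SomeTable'), NULL, NULL, 'DETAILED') DDIPS
INNER JOIN sys.indexes I
ON DDIPS.[object_id] = I.[object_id]
AND DDIPS.[index_id] = I.[index_id]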
December 7, 2009 at 10:20 am
Something that would be helpful in answering this would be to know page counts for your tables. It would also be helpful to see what kinds of indexes you are trying to defrag (heap, clustered, nonclustered).
8 pages as your minimum seems rather small. If you have an index with 9 pages, you may get no result at all from trying to defrag it. As a matter of personal preference, I only look for indexes >= 50 pages on the scheduled runs, and then go back and check for smaller page counts on a manual run to see if I might be able to get some performance out of them.
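For reference, a quick way to get those numbers is something like the sketch below, assuming the database being maintained is the current one; it lists every index with its type, page count, and fragmentation:
SELECT
OBJECT_NAME(DDIPS.[object_id]) AS [table_name]
,DDIPS.[index_type_desc]
,DDIPS.[page_count]
,DDIPS.[avg_fragmentation_in_percent]
FROM
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') DDIPS
ORDER BY
DDIPS.[page_count] DESC
The 'LIMITED' mode is the fastest scan option and is enough for a quick survey like this.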
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
December 7, 2009 at 10:41 am
CirquedeSQLeil (12/7/2009)
Something that would be helpful in answering this would be to know page counts for your tables. It would also be helpful to see what kinds of indexes you are trying to defrag (heap, clustered, nonclustered).
The tables I am playing with show page counts of just over 200 on some, but most are a lot lower. This is being run on a test system, so there isn't as much data in it as there will be on the live system. I would imagine that 200 pages would be enough to show some difference in the fragmentation level.
It is only pulling the indexes that are clustered and nonclustered.
CirquedeSQLeil (12/7/2009)
8 pages as your minimum seems rather small. If you have an index with 9 pages, you may get no result at all from trying to defrag it. As a matter of personal preference, I only look for indexes >= 50 pages on the scheduled runs, and then go back and check for smaller page counts on a manual run to see if I might be able to get some performance out of them.
The page count is actually a note I have at the top of the script. I couldn't find any specific documentation published by Microsoft on a good cut-off.
By the way...why have you named your pain after me? 🙂
December 7, 2009 at 10:47 am
jason.stephens (12/7/2009)
I couldn't find any specific documentation published by Microsoft on a good cut-off.
A rough rule-of-thumb, given by a former program manager of the storage engine, is around 1000 pages. It's not an absolute hard-and-fast number, but it's a decent guideline.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 7, 2009 at 11:33 am
GilaMonster (12/7/2009)
jason.stephens (12/7/2009)
I couldn't find any specific documentation published by Microsoft on a good cut-off.
A rough rule-of-thumb, given by a former program manager of the storage engine, is around 1000 pages. It's not an absolute hard-and-fast number, but it's a decent guideline.
This Technet Article talks about how small indexes might not show any improvement. I just thought that some of my indexes were large enough to notice. It appears that my guess as to their idea of small wasn't very close. 1000 pages is larger than I would have thought.
December 7, 2009 at 11:40 am
Above 24 pages you may see a change in the fragmentation. The 1000 pages is, more or less, the point at which there's a measurable performance impact from defragmenting.
If you want to set the threshold lower, it's not a problem. If you do see a performance difference when rebuilding 200-page indexes, then rebuild from that level.
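Applied to the script above, that mostly means keeping the cut-off in one easy-to-change place rather than hard-coding it. A sketch, where @min_page_count is a variable introduced here for illustration, seeded with the 1000-page guideline:
DECLARE @min_page_count INT
SET @min_page_count = 1000 -- rough guideline from above; lower it if smaller indexes show a measurable difference
SELECT
OBJECT_NAME(DDIPS.[object_id]) AS [table_name]
,DDIPS.[page_count]
,DDIPS.[avg_fragmentation_in_percent]
FROM
sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') DDIPS
WHERE
DDIPS.[page_count] >= @min_page_count
AND DDIPS.[avg_fragmentation_in_percent] > 5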
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 7, 2009 at 11:41 am
jason.stephens (12/7/2009)
GilaMonster (12/7/2009)
jason.stephens (12/7/2009)
I couldn't find any specific documentation published by Microsoft on a good cut-off.
A rough rule-of-thumb, given by a former program manager of the storage engine, is around 1000 pages. It's not an absolute hard-and-fast number, but it's a decent guideline.
This Technet Article talks about how small indexes might not show any improvement. I just thought that some of my indexes were large enough to notice. It appears that my guess as to their idea of small wasn't very close. 1000 pages is larger than I would have thought.
Different from what I use as well. As Gail said, this is a rough recommendation; for your environment, you may find that you need to increase or decrease that number for best results.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events