March 17, 2009 at 10:04 am
Hi
I started tuning my company database (it's a must), but I have some problems.
I started with indexes, and I wanted to rebuild them all, though there are quite a few.
So as a test I decided to try it on a specific index first to see if the result is OK.
So in SSMS I right-clicked on the index and chose Rebuild. To my surprise, avg_fragmentation_in_percent is still the same: 60. I tried drop and create, but the result is the same. Is it me, or am I missing something?
Please help
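Roughly what I did, with placeholder names (dbo.MyTable and IX_MyIndex stand in for the real ones), checking with sys.dm_db_index_physical_stats before and after the rebuild:

-- Check fragmentation for one index (placeholder table/index names)
SELECT i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.MyTable'), NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id;

ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD;

-- Re-running the SELECT above still shows avg_fragmentation_in_percent = 60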
March 17, 2009 at 10:13 am
I'm trying a script for all indexes in the database; it's been running for almost half an hour and it's still going. Is that normal? Or should I take them one by one?
March 17, 2009 at 10:14 am
If the table is very small, you will see that effect.
How many pages does your table have?
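A quick way to check (a sketch; dbo.MyTable is a placeholder for your table):

-- Pages used per index of one table
SELECT i.name AS index_name,
       SUM(ps.used_page_count) AS pages
FROM sys.dm_db_partition_stats AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.object_id = OBJECT_ID('dbo.MyTable')
GROUP BY i.name;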
* Noel
March 17, 2009 at 10:29 am
If your indexes are badly fragmented, it can take a while. You're not experimenting on a live production database, are you? That's a recipe for disaster.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 17, 2009 at 11:06 am
17 pages, but I have some with 2 or 34 pages that "respond" the same way to my script.
And no, it's not on production; I've restored the database on my computer.
I started SQL Profiler, and while executing my script I received:
Lock:Timeout | 1:125488 | Microsoft SQL Server Management Studio - Query | sa | 0508082 | 2009-03-17 18:08:26.397 | 2009-03-17 18:08:26.397
Does that mean my script is "dead"? :)
If you say it takes a while I will run it tonight, but I'm afraid it will time out again.
Anyway, I'll try this later.
The script looks like this:
DECLARE @Table VARCHAR(255)
DECLARE @cmd NVARCHAR(500)
DECLARE @fillfactor INT
SET @fillfactor = 90
SET @cmd = 'DECLARE TableCursor CURSOR FOR SELECT QUOTENAME(table_catalog) + ''.'' + QUOTENAME(table_schema) + ''.'' + QUOTENAME(table_name) as tableName
FROM Database2.INFORMATION_SCHEMA.TABLES WHERE table_type = ''BASE TABLE'''
-- QUOTENAME protects table names that contain spaces or reserved words
-- create table cursor
EXEC (@cmd)
OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @Table
WHILE @@FETCH_STATUS = 0
BEGIN
-- SQL 2000 command
--DBCC DBREINDEX(@Table,' ',@fillfactor)
-- SQL 2005 command
SET @cmd = 'ALTER INDEX ALL ON ' + @Table + ' REBUILD WITH (FILLFACTOR = ' + CONVERT(VARCHAR(3),@fillfactor) + ')'
EXEC (@cmd)
FETCH NEXT FROM TableCursor INTO @Table
END
CLOSE TableCursor
DEALLOCATE TableCursor
I hope this will work tonight. :)
March 17, 2009 at 2:02 pm
shnex (3/17/2009)
17 pages, but I have some with 2 or 34 pages that "respond" the same way to my script
Don't worry about fragmentation on tables that small. Fragmentation's only an issue when doing large scans of tables from disk into memory. Tables with small numbers of pages are often in memory anyway, and even if on disk, aren't big enough to need such scans.
The usual rule of thumb is 1000 pages in a table/index before you need to worry about fragmentation.
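If you want to see which indexes actually cross that line, something like this will do (a sketch; run it in the database concerned, and LIMITED mode keeps the scan cheap):

-- Indexes of 1000+ pages, most fragmented first
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.page_count,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000
  AND i.index_id > 0  -- ignore heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;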
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 17, 2009 at 3:34 pm
Hi Gail,
quick question - I have a doubt.. so you said only indexes having 1000 pages or more should be considered..
along with this, don't we have to take avg_fragmentation_in_percent and avg_fragment_size_in_pages into account..
please advise
March 17, 2009 at 3:47 pm
Of course. No point in rebuilding an index that isn't fragmented.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 17, 2009 at 3:55 pm
Joseph (3/17/2009)
Hi Gail, quick question - I have a doubt.. so you said only indexes having 1000 pages or more should be considered..
along with this, don't we have to take avg_fragmentation_in_percent and avg_fragment_size_in_pages into account..
please advise
As Gail advised, no, you don't have to unless it is a large table. If you have a proper index created, SQL Server can find the rows efficiently in your case.
March 17, 2009 at 4:12 pm
so that means all these three factors should be true together, I mean:
avg_fragment_size_in_pages >= 8 AND fragmentation > 10 AND page_count >= 1000
please correct me if I am wrong?
March 17, 2009 at 4:19 pm
Joseph (3/17/2009)
so that means all these three factors should be true together, I mean: avg_fragment_size_in_pages >= 8 AND fragmentation > 10 AND page_count >= 1000
please correct me if I am wrong?
Yep, ideally for a large table. I do have a concern about the fragmentation threshold, though: I would not defrag until it's 20% or so. But I take it that depends on your requirements.
March 17, 2009 at 5:35 pm
GilaMonster (3/17/2009)
The usual rule of thumb is 1000 pages in a table/index before you need to worry about fragmentation.
This is a common view but I don't agree; I think 64-ish pages is a sensible cutoff.
Why? Because if I have a reference table (e.g. for Products or Organisational Units) that is used by lots of queries, has grown to 999 pages because of the way I insert into it, but could occupy only 99 pages optimally, I think it should be defragmented.
What I typically do is look for greater than 10% fragmentation on tables larger than 64 pages, and if I find it, ALTER INDEX with REBUILD.
I also reduce the fill factor if this happens too quickly.
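In code the approach looks something like this (a sketch; the thresholds are the ones above, and FILLFACTOR = 80 is only an illustrative value, not a recommendation):

-- Build and run rebuild statements for fragmented indexes over 64 pages
DECLARE @sql NVARCHAR(MAX)
SET @sql = N''

SELECT @sql = @sql
    + N'ALTER INDEX ' + QUOTENAME(i.name)
    + N' ON ' + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id))
    + N'.' + QUOTENAME(OBJECT_NAME(ips.object_id))
    + N' REBUILD WITH (FILLFACTOR = 80);' + NCHAR(13)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count > 64
  AND ips.avg_fragmentation_in_percent > 10
  AND i.index_id > 0  -- skip heaps

EXEC sp_executesql @sql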
March 17, 2009 at 8:57 pm
sorry, I think there is a small correction:-
index defrag or rebuild comes into the picture when:-
avg_fragment_size_in_pages <= 8 AND fragmentation > 10 AND page_count >= 1000
avg_fragment_size_in_pages should be less than or equal to 8, right?
please advise
March 18, 2009 at 12:08 am
Joseph (3/17/2009)
avg_fragment_size_in_pages >= 8 AND fragmentation > 10 AND page_count >= 1000
Regarding the fragment size, larger is better. The ideal is that the fragment size equals the number of pages in the index. Generally, I'd worry about fragmentation from 30%, not 10%.
index defrag or rebuild comes into the picture when:-
avg_fragment_size_in_pages <= 8 AND fragmentation > 10 AND page_count >= 1000
No. 1000 pages in the index.
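Put together, a first pass could look like this (a sketch; it only lists candidates, using the 1000-page and 30% figures above, and shows avg_fragment_size_in_pages since larger is better there):

-- Rebuild candidates: 1000+ pages and 30%+ fragmentation
SELECT QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(ips.object_id)) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_fragment_size_in_pages,
       'ALTER INDEX ' + QUOTENAME(i.name) + ' ON '
       + QUOTENAME(OBJECT_SCHEMA_NAME(ips.object_id)) + '.'
       + QUOTENAME(OBJECT_NAME(ips.object_id)) + ' REBUILD;' AS rebuild_stmt
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.page_count >= 1000
  AND ips.avg_fragmentation_in_percent >= 30
  AND i.index_id > 0;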
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 18, 2009 at 12:29 am
Tim Walker (3/17/2009)
GilaMonster (3/17/2009)
The usual rule of thumb is 1000 pages in a table/index before you need to worry about fragmentation.
This is a common view but I don't agree; I think 64-ish pages is a sensible cutoff.
The 1000 page rule-of-thumb comes from Paul Randal. Apparently (he talked about this during one of his recent presentations) people kept on asking him what size tables should be defragmented. He guessed, based on what he knew of the storage engine, then went back and tested later. It turned out in testing that around 1000 pages was the point where the effects of fragmentation on IO speed became noticeable.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability