March 12, 2009 at 10:17 am
My attempts at reindexing a table are not working. Follow along and tell me what is wrong, or what I did wrong.
Starting out:
DBCC SHOWCONTIG ('POLLDATATEMPLATE') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
Index Name               Logical Fragmentation
PollDataTempl_PK         50
POLLDATATEMPLATE_IDX1    50
After executing:
DBCC DBREINDEX ('POLLDATATEMPLATE', PollDataTempl_PK)
DBCC SHOWCONTIG ('POLLDATATEMPLATE') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
Index Name               Logical Fragmentation
PollDataTempl_PK         80
POLLDATATEMPLATE_IDX1    50
So I ran this to drop and recreate the index:
IF EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[POLLDATATEMPLATE]') AND name = N'PollDataTempl_PK')
    ALTER TABLE [dbo].[POLLDATATEMPLATE] DROP CONSTRAINT [PollDataTempl_PK]
GO
IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[dbo].[POLLDATATEMPLATE]') AND name = N'PollDataTempl_PK')
    ALTER TABLE [dbo].[POLLDATATEMPLATE] ADD CONSTRAINT [PollDataTempl_PK] PRIMARY KEY CLUSTERED
    (
        [GRAPHID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
Now:
DBCC SHOWCONTIG ('POLLDATATEMPLATE') WITH FAST, TABLERESULTS, ALL_INDEXES, NO_INFOMSGS
Index Name               Logical Fragmentation
PollDataTempl_PK         80
POLLDATATEMPLATE_IDX1    50
Why am I not getting this index to defrag, even when rebuilding it?
I have done several runs of UpdateUsage and UpdateStats, to make sure all of the data is current, but to no avail.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 12, 2009 at 10:19 am
How many pages does the index have?
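If you're not sure, something along these lines will show it (a rough sketch using sys.dm_db_index_physical_stats on SQL 2005 and later; substitute your own database and table):

-- Page count and fragmentation per index for the table.
SELECT  i.name AS index_name,
        ps.index_type_desc,
        ps.page_count,
        ps.avg_fragmentation_in_percent
FROM    sys.dm_db_index_physical_stats(
            DB_ID(), OBJECT_ID(N'dbo.POLLDATATEMPLATE'), NULL, NULL, 'LIMITED') AS ps
JOIN    sys.indexes AS i
        ON i.object_id = ps.object_id AND i.index_id = ps.index_id;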
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2009 at 12:03 pm
5 pages in that index.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 12, 2009 at 12:08 pm
Too small to defragment. The way pages are allocated in smaller indexes makes it impossible to defrag them. Besides, it's unnecessary; fragmentation only starts to be an issue around 1,000 pages.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 12, 2009 at 1:15 pm
Yeah, so I have been reading, since you mentioned pages in your last post. I vaguely remember reading something about that many moons ago, but I seldom run into an index that small, so it completely slipped my mind.
Then again, if there are so few pages, it seems like it would be almost impossible for it to get fragmented in the first place. It must be something in the way they calculate that number for it to be so high with so few pages.
Anyway, thanks for putting up with me.
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 12, 2009 at 1:34 pm
Stamey (3/12/2009)
It must be something in the way they calculate that number for it to be able to be so high with so few pages.
Fragmentation (for small and large indexes alike) is the % of pages that are 'out of order' (1). The number of pages does not factor into the calculation. 'Out of order' means a page that's higher in the index order has a lower pageID. So, for your 5-page index it can look like this (numbers 1-5 indicating the logical order based on the index key):
1 2 3 4 5 (0% fragmented)
2 3 4 5 1 (20% fragmented, only page 2 is out of order)
5 4 3 2 1 (80% fragmented. All but page 1 are out of order)
The reason that it's unimportant with smaller indexes is that fragmentation only causes an issue with large range scans from disk. If the data's in memory (as is likely with a small index) then the fragmentation's immaterial. If only singleton seeks are ever done, then fragmentation's immaterial.
(1) From what I gather the actual algorithm's a bit more complex, but it's the same idea.
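If it helps to see the arithmetic, here's a rough model of that calculation in T-SQL. It only illustrates the 'out of order' idea from the examples above, not SQL Server's actual algorithm, and the table variable and its columns are made up purely for the demo.

-- Simplified model: logical_order = page position by index key,
-- page_id = the physical page number that page was allocated.
DECLARE @pages TABLE (logical_order int, page_id int);

-- The "5 4 3 2 1" layout from above: logical page 1 sits on page 5, and so on.
INSERT INTO @pages (logical_order, page_id)
SELECT 1, 5 UNION ALL SELECT 2, 4 UNION ALL SELECT 3, 3
UNION ALL SELECT 4, 2 UNION ALL SELECT 5, 1;

-- A page counts as "out of order" when its pageID is lower than that of
-- the page immediately before it in index-key order.
SELECT 100.0 * SUM(CASE WHEN cur.page_id < prev.page_id THEN 1 ELSE 0 END)
             / COUNT(*) AS approx_fragmentation_percent   -- 80.0 for this layout
FROM @pages AS cur
LEFT JOIN @pages AS prev
       ON prev.logical_order = cur.logical_order - 1;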
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 13, 2009 at 5:23 am
I understand, but it just seems like it would be so simple, with small indexes, to just rewrite the index in the correct order so that there would be no fragmentation. The bigger reason for this would be so that maintenance procedures don't flag it as a fragmented index.
Now, because of this, I am modifying my reindex script to look at the number of pages as well as the logical fragmentation: if an index has fewer than 1,000 pages, don't bother defragging it, regardless of the logical fragmentation.
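Roughly along these lines (just a sketch, not the finished script; the 30% fragmentation threshold and the 'LIMITED' scan mode are starting assumptions I still need to tune):

-- Build ALTER INDEX REBUILD statements only for indexes that are
-- both large enough (>= 1000 pages) and fragmented enough (>= 30%).
DECLARE @sql nvarchar(max);

SELECT @sql = COALESCE(@sql, N'') +
       N'ALTER INDEX ' + QUOTENAME(i.name) +
       N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name) +
       N' REBUILD;' + CHAR(13) + CHAR(10)
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
JOIN sys.objects AS o ON o.object_id = ps.object_id
WHERE ps.page_count >= 1000                       -- skip small indexes
  AND ps.avg_fragmentation_in_percent >= 30       -- only bother when fairly fragmented
  AND ps.index_id > 0;                            -- skip heaps

PRINT @sql;                    -- review the generated commands first
-- EXEC sp_executesql @sql;    -- then execute once they look right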
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 13, 2009 at 8:35 am
Stamey (3/13/2009)
I understand, but it just seems like it would be so simple, with small indexes, to just rewrite the index in the correct order so that there would be no fragmentation.
It's not that simple. It has to do with the way the allocations are done for the first few pages, among other things.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 13, 2009 at 9:13 am
I think the reason that very small tables don't appear to defrag is that normally the first 8 pages are allocated from mixed extents, so they appear in eight different extents.
So if you defrag an 8-page table, it's probably still in 8 extents and it still looks fragmented afterwards.
This is reported better in SQL Server 2005 and up than in 2000, because they 'improved' the accuracy of the reporting.
As per previous posts there's no point in doing such small tables anyway.
March 16, 2009 at 6:09 am
Well, thanks for teaching me something.
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 16, 2009 at 4:17 pm
Also, I want to add to this after looking at the code. For your large tables the fill factor is 90%, so I take it you are defragging them frequently. If you are not, then use a lower fill factor so that fragmentation does not occur as frequently. You also need to know how frequent the data modifications against this database are and set the fill factor accordingly.
March 17, 2009 at 6:38 am
The plan is for busy DBs to get their indexes defragmented once a week, by a scheduled job. Just part of maintenance.
Thanks,
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 17, 2009 at 6:48 am
That sounds good to me, but how heavily is the table getting modified? A 90% fill factor is prone to more page splits; if you have this setting and there is a huge amount of modification against your table, then it's better to think about defragging it frequently.
Poll your table using the DMV sys.dm_db_index_physical_stats and organize your maintenance plan accordingly.
As a general rule of thumb, fill factor should be:
100% for no activity
90% for low activity
70% for medium activity
50% for higher activity or more
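Whichever number you pick, remember the fill factor only takes effect when the index is built or rebuilt, for example (using the index from earlier in the thread, purely as an illustration):

-- Fill factor is applied at (re)build time; it is not maintained afterwards.
ALTER INDEX [PollDataTempl_PK] ON [dbo].[POLLDATATEMPLATE]
    REBUILD WITH (FILLFACTOR = 70);   -- e.g. the "medium activity" figure above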
March 17, 2009 at 6:53 am
Krishna (3/17/2009)
As a general rule of thumb, fill factor should be: 100% for no activity
90% for low activity
70% for medium activity
50% for higher activity or more
I'm not sure you can have a general rule of thumb for fill factors, other than that 100% is fine if the data never changes.
How have you established these figures, Krishna?
March 17, 2009 at 7:00 am
Krishna (3/17/2009)
That sounds good to me, but how heavily is the table getting modified? A 90% fill factor is prone to more page splits; if you have this setting and there is a huge amount of modification against your table, then it's better to think about defragging it frequently. Poll your table using the DMV sys.dm_db_index_physical_stats and organize your maintenance plan accordingly.
As a genral rule of thumb Fillfactor should be:
100% for no activity
90% for low activity
70% for medium activity
50% for higher activity or more
I think that I'll have to disagree with you on this one. In my opinion the fill factor depends on many things and not just the amount of activity. For example, if the clustered index is based on an identity column, then you can use a fill factor of 100%. If there are lots of SQL statements against the table with the clustered index, but almost all of them are selects, then maybe you can use a high fill factor. You also have to take the length of the record into account. In short, I wouldn't work by the rules that you specified.
Adi
--------------------------------------------------------------
To know how to ask questions and increase the chances of getting answers:
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/