July 12, 2004 at 5:17 am
Question: What is better, a fill factor of 0, a fill factor of 100, or something in between? I know what a fill factor is for, but something in BOL blurs the issue for me. Under the fill factor options it is more or less stated that there is no difference between a value of 0 and 100:
A fill factor value of 0 does not mean that pages are 0 percent full. It is treated similarly to a fill factor value of 100 in that SQL Server creates clustered indexes with full data pages and nonclustered indexes with full leaf pages. It is different from 100 in that SQL Server leaves some space within the upper level of the index tree.
So when I take the value 0 (the default value), some space is left in the data pages/leaf pages. How much space? Enough for 1 extra entry? 5, 10, a percentage maybe? Whereas when I enter a value like, say, 80, I know how much free space there is. I would think a value of 0 means 1 entry at creation, with the rest to be filled up when necessary. If 0 is almost like 100, what is the use of a default value of 0?
Any advice is appreciated.
Another question along the same line: I have a table where records are inserted, never updated or deleted. The table is read as well, always searching on the primary key, which is clustered. There are 1,000,000+ entries each day, and reads are a multiple of this. What is the optimal fill factor for this table? Since only inserts are done, I would think a fill factor of 100 would be best for the read operations, since the inserts would not be hindered; is this so?
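To make it concrete, the table is roughly like this (just a sketch; the names are made up and I am assuming the key is an ever-increasing identity):

-- hypothetical shape of the table: insert-only, always read on the clustered primary key
CREATE TABLE dbo.DailyEntries (
    EntryID int IDENTITY(1,1) NOT NULL,
    EntryData varchar(100) NOT NULL,
    CONSTRAINT PK_DailyEntries PRIMARY KEY CLUSTERED (EntryID)
        WITH FILLFACTOR = 100   -- the setting I am asking about
)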
Greetz,
Hans Brouwer
July 13, 2004 at 4:53 am
My approach is to use 100% only if it's static data or I know that I am only ever adding records to the end (e.g. clustered on an identity field). Otherwise, leave it at the default.
If there are lots of inserts throughout the table and you are rebuilding indexes frequently, then a lower fill factor might be justified, but, as with everything, it would need careful monitoring to optimise the trade-off between read and write efficiency.
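For example, something like this when rebuilding (the table and index names are just placeholders):

-- rebuild one index with an explicit fill factor
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerID', 80)

-- or rebuild all indexes on the table; 0 keeps the fill factor each index was created with
DBCC DBREINDEX ('dbo.Orders', '', 0)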
I interpret the zero fill factor as meaning that some (unspecified) amount of space is left in each level of the index tree, but the lowest pages are filled. Offhand, I can't think of a scenario that would benefit from this.
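If you do want to control the space in the intermediate levels yourself instead of relying on whatever 0 does, PAD_INDEX applies the FILLFACTOR percentage to those pages too. A rough sketch (names made up):

-- leave 20% free space in the leaf pages AND the intermediate index pages
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID)
WITH PAD_INDEX, FILLFACTOR = 80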
There was an article here a while ago (late 2003?), 'A methodology for determining fill factors', which I recommend. Sorry, I didn't keep the reference!
There was also a white paper from Quest Software distributed on this site. Again, no reference, sorry! Perhaps someone else can point you to these and other sources.
July 13, 2004 at 6:35 am
Hi,
Here is the link joslyn specified:
http://www.quest-pipelines.com/newsletter-v5/0204_B.htm
--Jeswanth
--------------------------------
July 13, 2004 at 6:50 am
0 and 100% are exactly the same.
If you have any tables that are regularly inserted into, updated and deleted from, you should be looking at the range of 70 - 90%.
Start at the higher end, check the fragmentation of your indexes for a couple of days, and lower it as you go. It's all about trial and error.
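To check the fragmentation, DBCC SHOWCONTIG is the usual tool, e.g. (names are placeholders):

-- scan density / logical fragmentation for one index
DBCC SHOWCONTIG ('dbo.Orders', 'PK_Orders')

-- or a quick pass over all indexes on the table
DBCC SHOWCONTIG ('dbo.Orders') WITH FAST, ALL_INDEXES

Watch Scan Density and Logical Scan Fragmentation over a few days of inserts before deciding to drop the fill factor further.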
July 13, 2004 at 7:27 am
Jonathan,
According to BOL they are not the same:
It is treated similarly to a fill factor value of 100 in that SQL Server creates clustered indexes with full data pages and nonclustered indexes with full leaf pages. It is different from 100 in that SQL Server leaves some space within the upper level of the index tree.
What is the index tree here? The top page(s) above all the index pages? And why is it implemented this way, what is the use?
Further: when installing SQL Server the default value is 0. This is presumably considered the optimum(?) value. Either somebody f****d up when implementing this default value, or it IS a very good default value, because of the space left.
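The server-wide default can be checked with sp_configure; it is an advanced option, so something like this (if I have the option name right):

-- show the server-wide default fill factor (0 out of the box)
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'fill factor (%)'

-- changing it would be: EXEC sp_configure 'fill factor (%)', 90 followed by RECONFIGURE (and, I believe, a restart to take effect)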
On my practical question in the first posting: any answer, somebody?
Greetz,
Hans Brouwer
July 13, 2004 at 9:33 am
Thanks for the correction, Hans. I never knew there was a difference. Have to extend my reading material.