September 6, 2007 at 8:20 am
Hi,
Currently checking the indexing at a new company I've started at, and in one DB half the indexes are set to a fill factor of 10. I understand the advantage of reducing it from 100, but I'm not sure 10% is ever useful, especially on a table that isn't very large (> 1/2 GB).
Any thoughts on why this would be valid? Data is frequently added, but the table is also referenced by queries..
Any thoughts appreciated
September 6, 2007 at 8:31 am
Hi
It depends on the balance of reads, writes, inserts and deletes; obviously different tables will differ. Generally what I look at is:
Read-only / read-intensive tables: 90%+ fill factor
Updates & reads: 60%+
Heavy deletes: less than 50%
This is just to give you a rough idea. Remember, if the fill factor is too low, it will cause page splitting.
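As a rough sketch of how that guidance gets applied (the table and index names here are made up for illustration), a fill factor is specified when the index is built:

```sql
-- Hypothetical example: apply a fill factor when (re)building an index.
-- Note the setting only takes effect at build/rebuild time, not on
-- ordinary inserts afterwards.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    WITH (FILLFACTOR = 90,   -- leave ~10% free space in each leaf page
          PAD_INDEX  = ON);  -- apply the same setting to intermediate pages
```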
Thanks
September 6, 2007 at 9:22 am
Actually, a low fill factor (apart from 0, which is equivalent to 100) will cause fewer page splits. If I had a read-only table, I'd make it 100%. And I can't understand why you'd use a low fill factor for a table with heavy (you mean lots of?) deletes.
To answer the original poster's question, 10% does seem excessively low, unless you're not concerned about read or backup performance or disk space.
John
September 6, 2007 at 2:42 pm
I second John's view on this. 10% seems like a BUG to me
* Noel
September 7, 2007 at 2:13 am
That's what I thought (the person who set it up has left, so I'll presume he was an idiot)
Thanks for the input guys
September 7, 2007 at 8:27 am
One very important issue has gone unmentioned here: what is the distribution of the data for the index? If you have a field that does not get values inserted/updated THROUGHOUT the range of values, it is suboptimal to use a fill factor lower than 100%. Take an identity column (one that doesn't get updated to a different value) as the obvious example. EVERY new value will be greater than the last --> NO page splits will be caused anywhere other than the last page in the chain. Anything other than 100% is wasting space and throughput. Same with a date that is always increasing (such as a log/audit date). There is much more to indexing than just clustered/nonclustered and fill factor. You gotta know your data as well as your data access patterns too! 🙂
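To sketch the ever-increasing-key case (table and column names are invented for the example): every new row lands on the last page of the chain, so leaving free space on earlier pages buys you nothing.

```sql
-- Hypothetical example: a monotonically increasing clustered key.
-- New rows always go to the end of the index, so pages never split
-- in the middle; packing them full wastes no space or throughput.
CREATE TABLE dbo.AuditLog (
    AuditId  INT IDENTITY(1,1) NOT NULL,
    LoggedAt DATETIME     NOT NULL DEFAULT GETDATE(),
    Detail   VARCHAR(400) NULL,
    CONSTRAINT PK_AuditLog PRIMARY KEY CLUSTERED (AuditId)
        WITH (FILLFACTOR = 100)  -- 100% is correct here
);
```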
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 7, 2007 at 11:46 am
Keeping in mind TheSQLGuru's points about data distribution, if you ARE spreading your inserts/updates, then although rare, it's not an entirely impossible number to justify. For example, if you have an extreme number of inserts during the day, your record lengths are small, AND your inserts are spread across the entire table (and you don't care much about read access), then 10% might work. The assumption would be that someone went through and figured out the "typical" number of records being inserted into the typical data page between reorgs, and optimized for inserts (not necessarily updates, and reads would suck).
Now, of course, your assessment of the former owner of the process as an "idiot" might invalidate this, but there are circumstances where this might squeak through.
It's extreme, no doubt, but sometimes extreme solutions are required for extreme circumstances. Think audit trail (insert records before they change somewhere else) on an exceedingly busy app, and it might make more sense.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
September 7, 2007 at 3:34 pm
Did you notice how many "ANDs" you wrote in one sentence?
* Noel
September 7, 2007 at 3:52 pm
I count a max of 3 for any one sentence he wrote, and his average is <1. What's the big deal? 🙂
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
September 7, 2007 at 4:01 pm
That's just an example of how many "ifs" have to happen in order to start considering a 10% fill factor.
* Noel
September 7, 2007 at 9:50 pm
OR... NOT...
--Jeff Moden
Change is inevitable... Change for the better is not.
September 13, 2007 at 9:31 am
Once again thanks for your input.
In response to Matt Miller: there is a lot of data input but also a fair amount of reads (roughly a 2/3 ratio), now that I've set up Profiler monitoring on this table.
And just to clarify, I wasn't calling the previous guy an idiot...prior to finding this, just that the decision seemed poor in this one instance...
September 13, 2007 at 11:44 am
I'm a little tired after writing up a tech design doc, so I may have overlooked something, but consider what has been said and how your data behaves.
Suppose the column is an IDENTITY column defined as (1,1), so it starts at 1 and increments by 1. A common mistake is to think anything lower than 100% is needed. Using 90% will not save you any page splits, because the pages don't ever split (unless you force something into an empty spot after reindexing following a huge delete of various rows). In this case SQL will see that a new page is needed rather than a split, since any new value will be beyond the values on the last page once it's full. So a fill factor of 100% is best.
When deciding, the big things to consider are where the inserts will occur, how often you will be maintaining the system, and how much growth you expect in between. If you use an IDENTITY, as I said, 100% is fine; however, if you use something unpredictable like NEWID and have a lot of transactions where performance is a concern, then 50% or lower might be your best choice. If data changes only occasionally, you will have to make a rough decision and monitor until you get a better idea of what to expect.
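To illustrate the contrast (the table here is hypothetical): a NEWID key scatters inserts across the whole b-tree, so a lower fill factor can defer page splits between rebuilds.

```sql
-- Hypothetical example: a random (NEWID) clustered key. Inserts land
-- anywhere in the index, so free space on every page absorbs them
-- until the next rebuild, at the cost of a larger index.
CREATE TABLE dbo.Sessions (
    SessionId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID(),
    StartedAt DATETIME         NOT NULL,
    CONSTRAINT PK_Sessions PRIMARY KEY CLUSTERED (SessionId)
        WITH (FILLFACTOR = 50)  -- low setting to absorb random inserts
);
```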
As for deletes: even if they're intensive, remember that data movement doesn't happen unless you run something like INDEXDEFRAG or REINDEX; you just end up with free space. So don't let deletes weigh in unless you insert/update just as often.
Updates are kind of hard to judge because of how you may manipulate the data. Consider what happens when the record is deleted from the index and inserted again: if on average it will insert into the same page, you might start with 50%, depending on your transactions, but again, monitor until you understand what you really need.
As for 10%, I would be concerned with retrieval speed, as that means a lot of extraneous pages to dig through (especially on an index scan); depending on the width of the data, this could cause a lot of bloat in your system. I personally would be hard pressed to go below 50% unless I had a lot of transactions per time frame between maintenance.
All said, though, choosing an index fill factor requires you to observe the system for a period before you make a concrete decision.
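One way to do that observation, assuming SQL Server 2005 or later, is to check how fragmented and how full each index's pages actually get between maintenance windows:

```sql
-- Sketch of a monitoring query (SQL 2005+): how fragmented is each index,
-- and how full are its pages on average? Run it before and after a busy
-- period to see whether the chosen fill factor is holding up.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,   -- logical fragmentation
       ips.avg_page_space_used_in_percent  -- actual page fullness
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id;
```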
September 14, 2007 at 6:30 am
In a past life, I had to setup some indexes in a similar way for a database. It was touched on by an earlier post, but this might help explain why it was done that way (because I also agree, 10% is really low).
An application went into production, and the process involved very poorly written code that went back over the data again and again, updating it one "COLUMN" at a time. The initial insert contained only the PK and the rest of the fields were NULL; then it went back and updated all 10 columns one at a time.
Explaining the error of their ways to the developer was useless. Changing the code myself was not an option, and although I was tempted to let it perform like dog droppings, I chose to do what I could.
Just be careful about making a change that extreme. I expect whatever change you make will result in an extreme change in performance. Unfortunately I think it could swing either way.
September 14, 2007 at 1:59 pm
How often does the table get reindexed? Fill factor only applies when indexes are built or rebuilt; if there's no maintenance plan, it has no effect.
I agree that 10% sounds like a mistake, but there may be cases where it is correct. Say you're setting up a table where records are inserted randomly (or with an unpredictable distribution), for instance ordered by last names. You have an initial set of data but expect it to grow by a factor of 5 or more very quickly once the application goes into production. It is a heavily transactional application and data space is not an issue, so you want to create a large sparsely-populated table initially to avoid page splits. If all that is true, a 10% fill factor might be a very intelligent thing to do.
However I'd make sure I redefined it after the application went live, before I set up any reindexing maintenance plan.
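Redefining it after go-live is a one-line rebuild (table name invented for the example); since a rebuild reapplies whatever fill factor is stored on the index, you'd want the new value in place before any reindexing job runs:

```sql
-- Hypothetical follow-up: once the table has grown to its working size,
-- rebuild with a saner fill factor so future maintenance jobs don't keep
-- reapplying the sparse 10% setting.
ALTER INDEX ALL ON dbo.Customers
    REBUILD WITH (FILLFACTOR = 90);
```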