FILLFACTOR value for best performance

  • Hi,

    I have a table created as:

    CREATE TABLE Mytable (Pr_key int PRIMARY KEY IDENTITY....

    This table has very frequent INSERTs (no UPDATEs).

    Which FILLFACTOR value should be used in the CREATE TABLE and ALTER INDEX REBUILD statements for best performance?
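    For context, here's where the value would go (just a sketch; the constraint/index names are placeholders):

    -- Fill factor on the PK at creation time (constraint name is made up)
    CREATE TABLE Mytable (Pr_key int IDENTITY CONSTRAINT PK_Mytable PRIMARY KEY WITH (FILLFACTOR = 100));

    -- Fill factor on a rebuild (same placeholder index name)
    ALTER INDEX PK_Mytable ON Mytable REBUILD WITH (FILLFACTOR = 100);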

    Thanks

  • For what index on that table?

    So you NEVER, NEVER, EVER have updates once the insert is done?

  • The only index is on the first column (Pr_key).

    Only INSERTs. Never any UPDATE statements on the rows.

    Thanks

  • Try 100%.

    See how long it takes for that table (if ever) to get picked up by a smart defrag script like this one: http://sqlfool.com/2011/06/index-defrag-script-v4-1

    Then see if it's smart to change the FF to something else.

    Come back if you need more help then.

  • I forgot to say that rows are periodically deleted according to the date of entry.

    Thanks again

  • gdraghetti (11/17/2011)


    I forgot to say that rows are periodically deleted according to the date of entry.

    Thanks again

    That's the only reason I suggested checking fragmentation. Assuming you can enter data from multiple days in the same insert, you can eventually get fragmentation.

    That being said, I'd be really surprised to see massive fragmentation in that table with the use you're talking about now.

  • What a great post.

    Awesome work, that is.

  • As it's an identity column, the values inserted are always increasing in value so they are inserted at the end of the index. Lowering the fill factor won't improve insert performance, but it will increase the space used by the index and decrease read performance. You've already said the rows are never updated and deletes won't benefit from decreasing the fill factor.

    Note: Pages might become less full as a result of delete operations, but it's likely that you will be deleting contiguous ranges of identity values from the other end of the index. It probably won't need to be defragmented (no need for rebuild), but there is no harm in checking fragmentation levels.

    Use a fill factor of 100.

    If you are unsure, it can often be a good idea to construct your own performance tests. It's always worth searching the internet and asking for advice, but there is also value in constructing tests to verify things.
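    For example, a quick check along these lines (a sketch; run it in the database that holds the table):

    -- Fragmentation and average page fullness for the table's indexes
    SELECT index_id, avg_fragmentation_in_percent, avg_page_space_used_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('Mytable'), NULL, NULL, 'SAMPLED');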

    DBA Dash - Free, open source monitoring for SQL Server

  • I've always been curious about which value should be set as a fill factor and why as well... clearly there are different reasons for it depending on whether it's a clustered or non-clustered index, or whether or not the table will be updated, inserted into, or deleted from on a regular basis... but what exactly are those?

    Can anyone point me to some clear & concise reading on the subject? I'd rather know a little more about it than just set it at 92% because that's what the prior DBA did :hehe:

    ______________________________________________________________________________
    Never argue with an idiot; they'll drag you down to their level and beat you with experience

  • MyDoggieJessie (11/17/2011)


    I've always been curious about which value should be set as a fill factor and why as well... clearly there are different reasons for it depending on whether it's a clustered or non-clustered index, or whether or not the table will be updated, inserted into, or deleted from on a regular basis... but what exactly are those?

    Can anyone point me to some clear & concise reading on the subject? I'd rather know a little more about it than just set it at 92% because that's what the prior DBA did :hehe:

    Here's the reasoning I used. I'm not saying it's good, bad, or best, just what seemed to make sense to me at that point.

    I work with an ERP without any identity columns, for that matter :-D. There are a few tables with ever-increasing PKs, but most are not. And even the ones that are get multiple updates/deletes anywhere in the PK span, so with 100% FF that causes many page splits.

    I have Michelle's index defrag script, which was set up with the MS "optimal" settings (link further down).

    Even running it every day, on average 80% of the indexes were being worked on, which made a 9-10 GB log backup every day (with the full backup at 20 GB).

    That system has 30,000+ indexes (yes, 30K), so going one by one was completely out of the question.

    I checked the average space used per page and it was around 92%.

    This gets you the database_id:

    SELECT database_id, name, * FROM sys.databases ORDER BY 2

    This gets you the percentage, where 9 is your DB id. You don't have to do a DETAILED scan, but I prefer the most accurate info here:

    SELECT * FROM sys.dm_db_index_physical_stats (9, NULL, NULL, NULL, 'DETAILED') ORDER BY avg_fragmentation_in_percent DESC

    I then took an educated guess that using an 85 FF would only cost us 7-10% more on the DB's currently used data. In our case that meant a one-time 1 GB extra and then 1 GB extra per year. This meant no change in the backups, RTOs, RPOs or anything else I could think of.
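    A back-of-envelope version of that guess, using the numbers above:

    -- Same rows spread over pages that are 85% full instead of ~92% full
    SELECT CAST(100.0 * (92.0 / 85.0 - 1) AS decimal(5,1)) AS pct_growth; -- ~8.2%, inside the 7-10% estimate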

    I made the change overnight and, starting the very next day, over 95% of the daily defrag work had been eliminated.

    I kept checking the work history of Michelle's script and kept lowering the fill factor on the indexes that were worked on too often for my taste. This was a gradual move, but I eventually decided that even once a week was still too much, and that became my decision point.

    AFAIK, no FF is under 70% at the moment from that little waltz.

    The added benefit is that I can now keep one full backup and NEVER delete any tlog backups. At our current pace I'll be able to have up to 1 YEAR of 24/7 point-in-time recovery. This is nice for us because every once in a while someone realizes they messed up way before our normal retention period, and we can save the day. I currently test those backups once a week so that if one of them gets corrupted I can save it from our other backups or start a new restore chain. Yes, this is scripted, because at 96 backups / day I don't want to type that in (30K+ tlog backups / year)!

    DB size 25 GB, ERP system with 80 concurrent users.

    I had not taken a reading of the page splits per second before doing that work; however, it now averages under 1 PS/sec during normal business hours.
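    If you want to take that reading yourself, the cumulative counter is in a DMV (it counts since instance startup, so sample it twice and diff for a per-second rate):

    -- Cumulative page splits since the instance started
    SELECT cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page Splits/sec'
    AND object_name LIKE '%Access Methods%';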

    Here's the rest of that story in case I missed a few details:

    http://www.sqlservercentral.com/Forums/Topic1126147-146-1.aspx

    Updated version of Michelle's script:

    http://sqlfool.com/2011/06/index-defrag-script-v4-1

    As always, assume I'm a baboon with an IQ lower than your favorite amoeba and test this for your system. Your mileage WILL vary. >>> I'd never do that on a 50 TB system with 1M transactions / sec. That would most likely be a one-on-one approach.

  • MyDoggieJessie (11/17/2011)


    I've always been curious about which value should be set as a fill factor and why as well... clearly there are different reasons for it depending on whether it's a clustered or non-clustered index, or whether or not the table will be updated, inserted into, or deleted from on a regular basis... but what exactly are those?

    Can anyone point me to some clear & concise reading on the subject? I'd rather know a little more about it than just set it at 92% because that's what the prior DBA did :hehe:

    Fair question. I'll try to give you the short version, but there's no concrete source I know of that's web accessible.

    Fillfactor is simply space left on a page when reindexing occurs. What it does is leave some 'give' for changes to lengths in VARCHAR() fields and for new records that need to be placed on that page according to the index. If you need to go lower than 50% fillfactor, you're basically saving yourself a split operation at the cost of 2x (or more) the data read space.

    The purpose of fill factor is to avoid page splits. There's 8K on a page (roughly). If each row is 250 bytes, that gives you roughly 32 records/page. If you fill it tightly and then simply edit a comments field from 20 to 40 characters, you might cause a page split. If you have to insert a new record into a tightly packed page, you cause a page split. These are expensive operations for the engine.
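    In numbers (a page has roughly 8,060 bytes of usable in-row space; the 90% here is just an example value):

    -- Rows per page packed tight vs. at a 90% fill factor
    SELECT 8060 / 250 AS rows_per_page_full,       -- 32 rows, no slack
           8060 * 90 / 100 / 250 AS rows_at_ff90;  -- 29 rows, room for ~3 more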

    So, general rule of thumb for me: leave enough room for all varchar fields to go from an average of half full to an average of three-quarters full, and, if my clustered index isn't leading-edged on an ever-increasing value (which causes a different issue, hotspotting), room for 4 new records. This is of course taken with a grain of salt about the size of Montana. If my records are 2K apiece there's only going to be room for 1 at 75% (on average). If my records are 30 bytes each I'll default to 10%.

    Fillfactor is an art form. It's as much an analysis of data usage as it is of storage mechanics. So, you will find a ton of explanations of WHAT it is, but very few of how to use it effectively. There are just too many answers.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

  • Evil Kraig F (11/17/2011)


    MyDoggieJessie (11/17/2011)


    I've always been curious about which value should be set as a fill factor and why as well... clearly there are different reasons for it depending on whether it's a clustered or non-clustered index, or whether or not the table will be updated, inserted into, or deleted from on a regular basis... but what exactly are those?

    Can anyone point me to some clear & concise reading on the subject? I'd rather know a little more about it than just set it at 92% because that's what the prior DBA did :hehe:

    Fair question. I'll try to give you the short version, but there's no concrete source I know of that's web accessible.

    Fillfactor is simply space left on a page when reindexing occurs. What it does is leave some 'give' for changes to lengths in VARCHAR() fields and for new records that need to be placed on that page according to the index. If you need to go lower than 50% fillfactor, you're basically saving yourself a split operation at the cost of 2x (or more) the data read space.

    The purpose of fill factor is to avoid page splits. There's 8K on a page (roughly). If each row is 250 bytes, that gives you roughly 32 records/page. If you fill it tightly and then simply edit a comments field from 20 to 40 characters, you might cause a page split. If you have to insert a new record into a tightly packed page, you cause a page split. These are expensive operations for the engine.

    So, general rule of thumb for me: leave enough room for all varchar fields to go from an average of half full to an average of three-quarters full, and, if my clustered index isn't leading-edged on an ever-increasing value (which causes a different issue, hotspotting), room for 4 new records. This is of course taken with a grain of salt about the size of Montana. If my records are 2K apiece there's only going to be room for 1 at 75% (on average). If my records are 30 bytes each I'll default to 10%.

    Fillfactor is an art form. It's as much an analysis of data usage as it is of storage mechanics. So, you will find a ton of explanations of WHAT it is, but very few of how to use it effectively. There are just too many answers.

    Sums it up quite well I'd say. 🙂

    The default value of 100 works well in a lot of cases. If your tables are not heavy on inserts/updates, stick with the default value. Lowering the fill factor will increase the amount of space used by the index and decrease read performance. In a lot of cases indexes will be read from a lot more than they are written to, so having more compact indexes can be beneficial.

    If you have tables with high volumes of inserts/updates, these might be candidates for a lower fill factor. If the index is based on an incremental value like an identity, sequential GUID, GETDATE() etc., a lower fill factor probably won't help. If it's based on a non-sequential GUID or something else that isn't incremental, a lower fill factor can definitely help. You will need to perform your own experiments to see which value to use. Be conservative though. A value of 50 will double the size of your index - twice as much I/O reading from disk, twice as much storage space on disk, and twice the memory used in the buffer pool.
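    For instance, something like this on a random-GUID clustered key (the table and index names are made up for illustration):

    -- Non-sequential GUIDs land all over the index, so leave some slack
    ALTER INDEX PK_Orders ON dbo.Orders REBUILD WITH (FILLFACTOR = 80);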

    DBA Dash - Free, open source monitoring for SQL Server

  • If it's an OLTP transaction table, I would recommend an 85-90% fill factor. There should be some air to breathe when rows move (DML operations).

    For OLAP, it should be close to 100% (I would still prefer 95%). The reason is simple: no DML on the table, so you can pack the rows tight.

    In any case, don't forget to check fragmentation at regular intervals.

  • It's worth noting that even in OLTP systems, the read to write ratio is normally biased towards reads. Also, some tables are likely to have high volumes of inserts/updates, which could make them possible candidates for a lower fill factor if they don't have an incremental value. Other tables might be relatively static, and a low fill factor won't help with performance in any significant way. Ideally it should be looked at on an index by index basis. The tables with the highest volumes of inserts/updates should be looked at first, and anything else should be OK with the default fill factor.
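    One way to find those candidates (a sketch; note that these stats reset when the instance restarts):

    -- Reads vs. writes per index in the current database
    SELECT OBJECT_NAME(s.object_id) AS table_name, i.name AS index_name,
           s.user_seeks + s.user_scans + s.user_lookups AS reads,
           s.user_updates AS writes
    FROM sys.dm_db_index_usage_stats AS s
    JOIN sys.indexes AS i ON i.object_id = s.object_id AND i.index_id = s.index_id
    WHERE s.database_id = DB_ID()
    ORDER BY s.user_updates DESC;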

    For example, you might have a large product table containing a few million products. This might be updated on a weekly basis, but read from quite heavily. In this case it makes sense to have a high fill factor - 100. There might be other tables in this database relating to customer orders that have very high volumes of inserts, but a modest amount of updates. These tables might have an identity column clustered index so a low fill factor on this index might not help much with performance. If data is going to be more evenly distributed throughout the index, using a lower fill factor could help a lot with performance.

    DBA Dash - Free, open source monitoring for SQL Server

  • the read to write ratio is normally biased towards reads.

    Actually "It depends"

    Also, some tables are likely to have high volumes of inserts/updates, which could make them possible candidates for a lower fill factor if they don't have an incremental value.

    i.e. OLTP Transaction tables

    Other tables might be relatively static, and a low fill factor won't help with performance in any significant way.

    i.e. OLTP Master / Type tables. Usually very small: 5-5,000 rows.

    Ideally it should be looked at on an index by index basis.

    Ideally Yes. Practically NO.
