July 20, 2012 at 5:07 pm
Hi there
I have a table with 320 million records. The clustered index on this table includes three int columns. I'm using fn_dblog to get the page split rate on this index, which is 2.5 page splits per second; the server-wide Page Splits/sec counter is around 69. Knowing that the access pattern on this index is heavily weighted toward seeks (about 1000 times more index seeks than index scans), can I assume that reducing the fill factor on this index (currently the default, 100%) will yield some performance benefit? If so, what is an appropriate fill factor value?
Pooyan
July 20, 2012 at 9:53 pm
A lower page fill can lead to improved performance.
On a high-transaction system I might test as low as a 70% fill factor, but I would be very cautious about it. You will need to try different fill factors (90, 80, 70, and values in between) to see which level best fits the activity in your database.
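If it helps, this is the kind of rebuild statement you'd use for each trial value (the table and index names here are placeholders for your own):

```sql
-- Rebuild the index with a trial fill factor of 80%.
-- dbo.MyTable / CIX_MyTable are placeholder names.
ALTER INDEX CIX_MyTable ON dbo.MyTable
REBUILD WITH (FILLFACTOR = 80, ONLINE = ON);  -- ONLINE = ON requires Enterprise Edition
```

Repeat with 90, 70, etc., and measure fragmentation growth and read performance between rebuilds before settling on a value.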
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
July 20, 2012 at 9:56 pm
Page Splits/sec (at least the server metric; I'm not sure how you calculated it using fn_dblog) counts a new page allocation at the end of the index as a page split, so it is not entirely accurate in terms of splits that are detrimental and cause index fragmentation. Index fragmentation levels over time are a much better way to judge whether you should change your fill factor. If you have to reorganize/rebuild the index often because fragmentation exceeds the acceptable threshold for your environment, then consider lowering the fill factor.
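To watch fragmentation over time, you can snapshot the physical stats DMV periodically and log the results (object name below is a placeholder):

```sql
-- Snapshot fragmentation for one table's indexes; capture this
-- on a schedule to see how quickly fragmentation grows between rebuilds.
SELECT ips.index_id,
       ips.index_level,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'SAMPLED') AS ips
WHERE ips.index_level = 0;  -- leaf level only
```

If fragmentation climbs back over your threshold quickly after each rebuild, that's the signal to try a lower fill factor.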
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
July 20, 2012 at 11:47 pm
Reducing the fill factor would reduce page splits and therefore external fragmentation, but on the other hand, if there are a lot of index scans on the index, lowering the fill factor has a negative impact on read performance because more pages need to be read. It's not clear to me how I can determine exactly which fill factor setting is optimal, even by trying different values (90, 80, 70, ...).
Pooyan
July 21, 2012 at 7:25 am
Fragmentation over time.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
July 21, 2012 at 3:44 pm
pooyan_pdm (7/20/2012)
Hi there. I have a table with 320 million records. The clustered index on this table includes three int columns. I'm using fn_dblog to get the page split rate on this index, which is 2.5 page splits per second; the server-wide Page Splits/sec counter is around 69. Knowing that the access pattern on this index is heavily weighted toward seeks (about 1000 times more index seeks than index scans), can I assume that reducing the fill factor on this index (currently the default, 100%) will yield some performance benefit? If so, what is an appropriate fill factor value?
More than anything else, it sounds like you may have selected the wrong columns for a clustered index. Consider changing the clustered index to something that is unique, narrow, and "ever-increasing", like an IDENTITY column, if you have one. If the rows don't suffer many "row size increases" from updates, you might be able to get away with even a 100% fill factor here. Then create a nonclustered index to replace what you currently have as a clustered index.
All of this will virtually eliminate page splits for INSERTs. It will, however, increase "extent splits" for the new index. To help resolve those, consider a FILLFACTOR of 80 or even 70 on the new index. Because the index is narrow compared to the rest of the data, this will take a whole lot less space than using a 70 or 80 FILLFACTOR on a clustered index.
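A rough sketch of that reshaping (all index and column names here are placeholders, and on a 320-million-row table you'd want to script and test this on a copy first):

```sql
-- Drop the old, wide clustered index on the three int columns.
DROP INDEX CIX_Old_ThreeInts ON dbo.MyTable;

-- Re-cluster on a narrow, ever-increasing IDENTITY column;
-- 100% fill is fine since inserts land at the end of the index.
CREATE UNIQUE CLUSTERED INDEX CIX_MyTable_Id
    ON dbo.MyTable (Id) WITH (FILLFACTOR = 100);

-- Replace the old clustered key with a nonclustered index,
-- leaving room for mid-index inserts.
CREATE NONCLUSTERED INDEX IX_MyTable_ThreeInts
    ON dbo.MyTable (Col1, Col2, Col3) WITH (FILLFACTOR = 80);
```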
--Jeff Moden
Change is inevitable... Change for the better is not.
July 21, 2012 at 8:11 pm
Unfortunately, I'm a new DBA at this company and know nothing about what the previous DBA had in mind when creating this index. But just changing the clustered index to, for example, an IDENTITY column and adding a nonclustered index on these three columns might result in bookmark lookups, which have a negative impact on query performance. I should check all the queries on this table first to see if this is a good idea.
Pooyan
July 21, 2012 at 11:20 pm
pooyan_pdm (7/21/2012)
....But just changing the clustered index to, for example, an IDENTITY column and adding a nonclustered index on these three columns might result in bookmark lookups, which have a negative impact on query performance. I should check all the queries on this table first to see if this is a good idea.
Very wise thinking. Kudos to you for that!
July 21, 2012 at 11:35 pm
opc.three (7/20/2012)
not sure how you calculated it using fn_dblog
Page splits have a Transaction Name of 'pagesplit'. It's just a matter of counting them and dividing by the number of seconds in the time range.
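Something along these lines (fn_dblog is undocumented and reads the active log, so run it on a test system; the transaction name is as given above, though it can vary by version):

```sql
-- Count page-split transactions in the active portion of the log,
-- then divide by the number of seconds your log window covers.
SELECT COUNT(DISTINCT [Transaction ID]) AS page_split_count
FROM fn_dblog(NULL, NULL)
WHERE [Transaction Name] = N'pagesplit';  -- appears as 'SplitPage' on some versions
```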
July 22, 2012 at 6:42 am
Robert Davis (7/21/2012)
opc.three (7/20/2012)
not sure how you calculated it using fn_dblog
Page splits have a Transaction Name of 'pagesplit'. It's just a matter of counting them and dividing by the number of seconds in the time range.
Oh, I agree it's available; I'm just not sure how it was calculated by the OP. I glossed over it because it likely doesn't matter. What you mentioned is the same method as the Page Splits/sec counter, where a page split is logged even when a new page is allocated at the end of the index due to lack of space in the last page. In other words, counting splits using fn_dblog is also not guaranteed to tell us accurate information about mid-index page splits, which is really what we care about when talking about adjusting the fill factor.
I suspect that if someone were to get serious about using fn_dblog to count 'middle page splits/sec', they could achieve it by referring to the object to determine whether a split occurred in the middle or at the end of the index, but I have not seen it shared online as a technique, and I'm not even sure it's feasible.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
July 22, 2012 at 7:33 am
You're right. I doubt it's feasible to even try. Though it is possible to do with Extended Events in SQL 2012. Excellent article on that from Jonathan Kehayias: Tracking Problematic Page Splits in SQL Server 2012
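The core of that 2012 approach is an event session on sqlserver.transaction_log filtered to LOP_DELETE_SPLIT log records, which mark genuine mid-index splits. A sketch (the session name, database_id, and map value are assumptions here; see Jonathan's article for the full, tested version):

```sql
-- Capture mid-index page splits per allocation unit in SQL Server 2012.
CREATE EVENT SESSION [TrackPageSplits] ON SERVER
ADD EVENT sqlserver.transaction_log(
    WHERE operation = 11        -- LOP_DELETE_SPLIT
      AND database_id = 5       -- replace with your database_id
)
ADD TARGET package0.histogram(
    SET filtering_event_name = 'sqlserver.transaction_log',
        source = 'alloc_unit_id',
        source_type = 0);       -- 0 = bucket on the event column
```

Querying the histogram target then tells you which indexes are actually splitting mid-page, which is exactly the information Page Splits/sec can't give you.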
July 22, 2012 at 8:07 am
I was aware of Jonathan's work in this area with Extended Events on 2008 but had not kept up with the latest developments for 2012. The "XEvent a Day" series he did is excellent, and day 27 is relevant here; it was as current as I was on the topic, so thanks for sharing the link.
I worked my way through the series some time ago, and on these forums I highly recommend doing the whole thing to those just getting into Extended Events. Although at this point, if I pass along the link, I'll have to note to compare the information against the latest developments for 2012. It would be neat if he were to update the series for 2012, because I found the daily walkthrough format great.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
July 22, 2012 at 9:00 am
opc.three (7/22/2012)
I suspect that if someone were to get serious about using fn_dblog to count 'middle page splits/sec', they could achieve it by referring to the object to determine whether a split occurred in the middle or at the end of the index, but I have not seen it shared online as a technique, and I'm not even sure it's feasible.
I've done it before (SQL 2000) by looking for the log operation LOP_DELETE_SPLIT.
I wouldn't recommend it; it put some serious overhead on the log. It took about 5 minutes to read 15 minutes of transaction log.
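For reference, the fn_dblog version of that lookup is as simple as the following, with the log-reading overhead noted above:

```sql
-- Count mid-index page splits via the log operation Gail mentions.
-- Expensive on a busy log; use only on a test system or a short window.
SELECT COUNT(*) AS mid_page_splits
FROM fn_dblog(NULL, NULL)
WHERE Operation = N'LOP_DELETE_SPLIT';
```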
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 22, 2012 at 9:15 am
pooyan_pdm (7/21/2012)
Unfortunately, I'm a new DBA at this company and know nothing about what the previous DBA had in mind when creating this index. But just changing the clustered index to, for example, an IDENTITY column and adding a nonclustered index on these three columns might result in bookmark lookups, which have a negative impact on query performance. I should check all the queries on this table first to see if this is a good idea.
Agreed. For things like this, the oldest rules in the book always apply. "It Depends", make sure you have viable backups before you try anything, and "Test It." If you have the luxury to make a copy of the table or even a substantial subset of the table to test it, even better.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 22, 2012 at 7:11 pm
GilaMonster (7/22/2012)
opc.three (7/22/2012)
I suspect that if someone were to get serious about using fn_dblog to count 'middle page splits/sec', they could achieve it by referring to the object to determine whether a split occurred in the middle or at the end of the index, but I have not seen it shared online as a technique, and I'm not even sure it's feasible.
I've done it before (SQL 2000) by looking for the log operation LOP_DELETE_SPLIT.
I wouldn't recommend it, put some serious overhead on the log. Took about 5 min to read 15 min of tran log.
Thanks for confirming my suspicion. Good to know it is in fact possible with fn_dblog, and that it's not recommended. In reading the link Robert provided to Jonathan's current efforts in this area, I learned that the same log operation entry is still relevant but is much more accessible in 2012 via the sqlserver.transaction_log xevent. Watching fragmentation over time has been ample for tuning my indexes' fill factors, but it's quite a nice addition to the new version, and I could see it being useful for a lot of things.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato