I was reading Jonathan's article about capturing page splits, which I found very interesting, so I went ahead and tested it on one of my dev servers. Following along with the article, I can see some DBs with page-splitting problems. The second thing I did was create another session to capture objects. So I created an XE session, and watching the live data I can see it capturing object info for the DB I was interested in, but when I run the T-SQL below I get 0 records. I even checked the name of the session I had created, but I'm not sure what I am missing.
1st step
CREATE EVENT SESSION [TrackPageSplits]
ON SERVER
ADD EVENT sqlserver.transaction_log(
WHERE operation = 11 -- LOP_DELETE_SPLIT
AND database_id = 22 -- Capturing splits in the DB I'm interested in.
)
ADD TARGET package0.histogram(
SET filtering_event_name = 'sqlserver.transaction_log',
source_type = 0, -- Event Column
source = 'alloc_unit_id');
GO
-- Start the Event Session
ALTER EVENT SESSION [TrackPageSplits]
ON SERVER
STATE=START;
GO
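Just to make sure the session really is running, here's a quick sanity check of my own (not from the article); sys.dm_xe_sessions only lists started sessions:

SELECT s.name, s.create_time
FROM sys.dm_xe_sessions AS s
WHERE s.name = 'TrackPageSplits';

-- If this returns no rows, the session exists but was never started.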
I watch the live data, see some events, then run the query below and get 0 records...
SELECT o.name AS table_name,
       i.name AS index_name,
       tab.split_count,
       i.fill_factor
FROM (
    SELECT n.value('(value)[1]', 'bigint') AS alloc_unit_id,
           n.value('(@count)[1]', 'bigint') AS split_count
    FROM (
        SELECT CAST(target_data AS XML) AS target_data
        FROM sys.dm_xe_sessions AS s
        JOIN sys.dm_xe_session_targets AS t
            ON s.address = t.event_session_address
        WHERE s.name = 'TrackPageSplits'
          AND t.target_name = 'histogram'
    ) AS tab
    CROSS APPLY target_data.nodes('HistogramTarget/Slot') AS q(n)
) AS tab
JOIN sys.allocation_units AS au
    ON tab.alloc_unit_id = au.allocation_unit_id
JOIN sys.partitions AS p
    ON au.container_id = p.partition_id
JOIN sys.indexes AS i
    ON p.object_id = i.object_id
   AND p.index_id = i.index_id
JOIN sys.objects AS o
    ON p.object_id = o.object_id
WHERE o.is_ms_shipped = 0;
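One debugging step I can think of (my own, not from the article) is to dump the raw histogram XML before any of the catalog joins, to see whether the target itself has data:

SELECT CAST(t.target_data AS XML) AS target_data
FROM sys.dm_xe_sessions AS s
JOIN sys.dm_xe_session_targets AS t
    ON s.address = t.event_session_address
WHERE s.name = 'TrackPageSplits'
  AND t.target_name = 'histogram';

If that XML shows Slot elements but the full query still returns nothing, the problem is in the joins, not the session.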
FYI: this is the article I was reading.
USE <database>..... That's where the problem was...... 🙂
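For anyone who hits the same wall, a minimal sketch of the fix (the database name is hypothetical; use whichever database matches the database_id in the session's predicate):

USE MyTrackedDB;  -- hypothetical name: the database with database_id = 22 above
GO
-- ...now re-run the histogram query. The catalog views it joins to
-- (sys.allocation_units, sys.partitions, sys.indexes, sys.objects) are all
-- database-scoped, so the alloc_unit_ids only match in the right database.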
October 7, 2021 at 2:55 am
It's amazing what people will do. Yep... Jonathan's method for tracking page splits is awesome.
But what are YOU going to do with the information? Ah... use it to identify which indexes are doing page splits so you can lower the FILL FACTOR, right?
Let me ask you a question... how many clustered indexes do you have with an ever-increasing key that are fragmenting anyway?
Let me save you a boatload of time and frustration. You'll find that most people recommend lowering the FILL FACTOR on such indexes to 90 and seeing what happens. That also means you have to keep track of every index you did that to. When it doesn't work, they'll tell you to try 80, and when that doesn't work, to try 70. Some will even tell you to do it in 5% decrements, and some will tell you to go all the way down to 60 or even 50.
Save yourself all that time, tracking, and frustration; your bad-page-split tracking code will very accurately show that lowering the FILL FACTOR probably hasn't made any difference at all in the number of page splits being produced. How? Set the fill factor to 60 or 70 the very first time. If you get the same number of page splits in a day as you always have, then lowering the FILL FACTOR is a waste of time, memory, disk space, disk maintenance, and resource usage, and a source of possibly larger performance issues.
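As a sketch of that one-shot test (the table and index names here are hypothetical):

-- Rebuild once at a deliberately low fill factor...
ALTER INDEX IX_Sales_OrderDate ON dbo.Sales
REBUILD WITH (FILLFACTOR = 70);

-- ...then watch the page-split tracking session for a day. If the split
-- count is unchanged, lowering the fill factor won't help this index.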
The reason for the fragmentation in such ever-increasing indexes is simple... you're suffering from "hot spot fragmentation". This is where you insert rows and they, of course, all go into nice new pages at the logical end of the index. And then someone does an update on those nice new rows that went into the table at a realized "fill factor" of 100%, updating some variable-width column like a Comment column or a Modified_By column, and BOOM!!! Instant fragmentation, because there wasn't enough room left on the new page to hold the larger updated row.
You could lower the FILL FACTOR to 1% and it won't help with this type of fragmentation.
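Here's a minimal repro of that "hot spot" pattern, with all names hypothetical:

-- Ever-increasing clustered key: new rows always land on the last page, ~100% full.
CREATE TABLE dbo.HotSpotDemo
(
    Id      INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    Comment VARCHAR(8000) NULL  -- variable-width column, empty at insert time
);

INSERT INTO dbo.HotSpotDemo (Comment) VALUES (NULL);

-- The later update grows the row; a full page has to split to make room.
UPDATE dbo.HotSpotDemo
SET Comment = REPLICATE('x', 4000)
WHERE Id = 1;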
The same holds true for "Sequential Silo" indexes, but you'll never see bad page splits from them: they don't create bad page splits, they usually have nearly 100% page density, and they fragment overnight. Yeah... no. Lowering the FILL FACTOR on those isn't going to do anything for the fragmentation there, either.
Here's another fun one... find all the indexes that have almost zero fragmentation and then look at their page density (average percent of page fullness). How many do you have that are less than 90% full? And you know there were no page splits to cause such low page densities because you have a tool that couldn't find them. 😉
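A hedged sketch of that check (SAMPLED mode is needed so the page-density column is populated; the thresholds are just illustrative):

SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.avg_page_space_used_in_percent  -- page density
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id
   AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent < 5       -- almost zero fragmentation...
  AND ips.avg_page_space_used_in_percent < 90    -- ...yet pages under 90% full
  AND ips.index_id > 0;                          -- skip heaps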
And now ask yourself... is it really worth lowering the FILL FACTOR if you don't actually know WHY an index is doing page splits? Especially your clustered indexes, because they're almost always your widest and largest indexes.
And stop and think about it... what is the bad page split counter telling you? It's telling you which indexes are fragmenting and you've already got something that tells you which indexes are fragmenting... sys.dm_db_index_physical_stats!
And even knowing that isn't going to help you much, because you don't know WHY a particular index is fragmenting. Worse yet, if you're using what most of the world has adopted as the "Best Practice" of reorganizing between 5 and 30% logical fragmentation and rebuilding when there's more than 30%, or anything even remotely similar, then you're using practices that actually are NOT "Best Practices" and were never meant to be. In fact, because almost no one realizes exactly how REORGANIZE works and doesn't work, any "generic" use of REORGANIZE is the worst thing in the world you can do short of shrinking your database!
So here's the bottom line...
1. Stop following anything that even comes close to operating like the current world-wide accepted mistake called "Best Practices". It's actually perpetuating page splits all day every day and the big reason is that it uses REORGANIZE the wrong way.
2. Stop doing any index maintenance on any indexes that have the default "0" FILL FACTOR, because you're just removing all free space. You have a tool that measures page splits... figure out how many page splits they suffer in a day and then seriously lower the FILL FACTOR to 60 or 70%. If they do ANY page splits in the next 24 hours, then lowering the FILL FACTOR is a waste of time; you need to immediately raise the FILL FACTOR back to 100% and put up with the fragmentation until you can fix the cause of it. If they DON'T fragment in the next day, then raise the FILL FACTOR by 10 and see what happens the next day. You need to find the "sweet spot" on those. The same holds true for all ever-increasing indexes.
3. Another possibility is to stop worrying so much about logical fragmentation. Once an index is in memory, it doesn't matter. If your index is subject to single-row reads, as in an OLTP system, logical fragmentation makes absolutely no difference whatsoever! I went 4 years on my production system without doing any index maintenance except to recover page density: if an index went below 70% page density, I'd do a REBUILD. And there was no performance degradation... in fact, performance actually got better, and I no longer had the massive "morning after" slowness due to massive blocking caused by, you guessed it, page splits. The only thing I did religiously was to rebuild statistics with a FULL SCAN for the things that needed it. I'll also tell you that I don't recommend that method; it was an interesting experiment that I did until I really figured out what was going on and started doing it right. Doing it right is damned complex, and I'm working on a way to have the system figure it all out, but I've got a wee bit more to go on that.
4. Physical fragmentation and page splits are the big problems. If you have logical fragmentation, then you have page splits. If seriously lowering the FILL FACTOR still allows page splits to occur in the next 24-48 hours, then lowering the FILL FACTOR is just a waste. Rebuild such indexes back to 100% and then figure out what is causing the page splits and fix the problem. If it can't be fixed, those might be the place to use REORGANIZE, with the understanding that REORGANIZE is a whole lot worse on the log file than people say it is, so consider doing a minimally logged REBUILD (you need to be in the SIMPLE or BULK_LOGGED recovery model for that) or an ONLINE rebuild (see the sketch below). If you can't do an ONLINE rebuild, then wait until you CAN do an OFFLINE REBUILD.
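To make points 3 and 4 concrete, here's a hedged sketch of those maintenance options (the object names are hypothetical; ONLINE = ON typically requires Enterprise Edition):

-- Minimally logged rebuild: the database must be in the SIMPLE or
-- BULK_LOGGED recovery model for minimal logging to apply.
ALTER INDEX IX_Orders_Status ON dbo.Orders
REBUILD WITH (FILLFACTOR = 100);

-- Or an online rebuild, if your edition supports it:
ALTER INDEX IX_Orders_Status ON dbo.Orders
REBUILD WITH (ONLINE = ON, FILLFACTOR = 100);

-- And the statistics maintenance from point 3:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;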
I have to tell you again that the current "Best Practices" should never be used. They're so bad that they're the actual reason why Random GUID and other "evenly distributed" indexes continue to fragment so badly.
Ok... by now you probably think I'm a madman. So let me prove to you that I'm not... watch the following presentation where, at the very end, I show you how I inserted 100,000 rows a day for 58 days (that's 5.8 MILLION rows, folks!) into a clustered index keyed on random GUIDs with LESS THAN 1% FRAGMENTATION. Here's the link... like I said, watch it to the end, not just to where I'm interrupted by the moderator due to time.
https://www.youtube.com/watch?v=qfQtY17bPQ4
And the VERY BOTTOM LINE IS something that you always need to remember and practice...
"It's better to do NO index maintenance than it is to do it wrong! ... and you're probably doing it wrong!" 😀
--Jeff Moden
Change is inevitable... Change for the better is not.