January 3, 2014 at 4:02 am
Yeah... dropping and reinserting the data and creating the index after that would be good, but if we already have millions of rows in the table, I don't think we should drop and re-insert it. So all I can do is rebuild the index.
January 3, 2014 at 4:05 am
Jeff Moden (1/2/2014)
prakashr.r7 (1/2/2014)
Why are we not able to rebuild indexes having BLOB data? Do we need to drop and recreate as an alternative?
You can rebuild indexes that contain BLOB data as INCLUDEs. You just can't rebuild them in an ONLINE fashion prior to SQL Server 2014.
What error message(s) are you getting?
This is the message that I am getting
Msg 2725, Level 16, State 2, Line 1
An online operation cannot be performed for index 'Ix_Att_PK' because the index contains column 'KpiFile' of data type text, ntext, image or FILESTREAM. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.
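Given the Msg 2725 error above, the rebuild has to be done offline prior to SQL Server 2014. A minimal sketch, assuming the index lives on a hypothetical dbo.Attachment table (the table name is not shown in the thread, so substitute your own):

```sql
-- Offline rebuild of the LOB-containing clustered index.
-- dbo.Attachment is a placeholder; Ix_Att_PK comes from the error message.
ALTER INDEX Ix_Att_PK ON dbo.Attachment
REBUILD WITH (ONLINE = OFF, SORT_IN_TEMPDB = ON);
```

SORT_IN_TEMPDB is optional; it moves the sort work into tempdb, which can reduce pressure on the user database's files during the rebuild.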
January 3, 2014 at 4:14 am
If you're inserting data into a table, then more than likely the insert transaction is locking the table, so the application is not going to be able to access the data.
If you are inserting huge amounts of data into a table, the most efficient way is to remove the index and then re-create it after the load. From previous experience, this has reduced load times by 500%.
You can also look at changing your recovery model to BULK_LOGGED, but again, don't change this unless you know what you are doing.
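The drop-load-recreate pattern, combined with a temporary switch to the BULK_LOGGED recovery model, can be sketched as follows (all object names here are placeholders, not from the thread):

```sql
-- Switch to minimal logging for the bulk load. Only do this if you
-- understand the point-in-time-recovery implications.
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

DROP INDEX IX_BigTable_Col1 ON dbo.BigTable;

-- ... perform the bulk insert here ...

CREATE INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1);

ALTER DATABASE MyDb SET RECOVERY FULL;
-- Take a log backup afterwards so the log chain supports
-- point-in-time restores again.
```

While the database is in BULK_LOGGED, point-in-time restore is not possible for log backups that contain minimally logged operations, which is why the final log backup matters.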
What I'm trying to put across is that you may want to take a holistic view of your process rather than focusing on defragging.
Hope this helps.
--------------------------------------------
Laughing in the face of contention...
January 3, 2014 at 5:14 am
And also, if you drop and recreate the indexes, that also defrags them.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
January 3, 2014 at 8:11 am
prakashr.r7 (1/3/2014)
Jeff Moden (1/2/2014)
prakashr.r7 (1/2/2014)
Why are we not able to rebuild indexes having BLOB data? Do we need to drop and recreate as an alternative?
You can rebuild indexes that contain BLOB data as INCLUDEs. You just can't rebuild them in an ONLINE fashion prior to SQL Server 2014.
What error message(s) are you getting?
This is the message that I am getting
Msg 2725, Level 16, State 2, Line 1
An online operation cannot be performed for index 'Ix_Att_PK' because the index contains column 'KpiFile' of data type text, ntext, image or FILESTREAM. For a non-clustered index, the column could be an include column of the index. For a clustered index, the column could be any column of the table. If DROP_EXISTING is used, the column could be part of a new or old index. The operation must be performed offline.
Like I said, things that contain BLOB data (and a clustered index contains ALL of the data at the leaf level) can't be rebuilt online. One trick you can try is to recreate the index using CREATE INDEX ... WITH DROP_EXISTING, which saves a little space and keeps all of the NCIs from being rebuilt twice, but you're still not going to be able to do this in an online fashion for this one table.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 6, 2014 at 4:15 am
OK, I will drop and recreate the index in either case: huge data inserts as well as BLOB-containing tables.
March 21, 2018 at 6:13 am
Any comments on rebuilding all indexes once a week as a housekeeping task?
Our DBA team does that to all production SQL Server databases.
March 21, 2018 at 7:01 am
paul s-306273 - Wednesday, March 21, 2018 6:13 AM
Any comments on rebuilding all indexes once a week as a housekeeping task?
Our DBA team does that to all production SQL Server databases.
It's overkill, honestly. You need to assess how many inserts and deletes are happening. If there is virtually no data change, then rebuilding the indexes only grows the transaction log and needlessly increases I/O and CPU consumption. There are key high-activity tables with a decent number of new (and sometimes deleted) rows for which I find a maintenance window once a month to rebuild the indexes. There are other key tables with a TON of inserts and deletes daily whose indexes I rebuild more often. You need to understand the activity of the data in your tables.
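One way to make that assessment is to combine fragmentation data with the index usage DMVs. A rough sketch, run in the database in question (note that sys.dm_db_index_usage_stats counters reset on instance restart):

```sql
-- Fragmentation plus write activity per index, most fragmented first.
SELECT OBJECT_NAME(ips.object_id)          AS table_name,
       i.name                              AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count,
       ius.user_updates                    -- inserts/updates/deletes since restart
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
LEFT JOIN sys.dm_db_index_usage_stats AS ius
  ON ius.object_id   = ips.object_id
 AND ius.index_id    = ips.index_id
 AND ius.database_id = DB_ID()
ORDER BY ips.avg_fragmentation_in_percent DESC;
```

An index that shows high fragmentation but near-zero user_updates is not worth rebuilding weekly; the fragmentation is static and the rebuild just burns log, I/O, and CPU.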
March 21, 2018 at 7:10 am
Thanks Summer90 - confirmation of what I assumed.
March 21, 2018 at 8:34 am
Please note: 5 year old thread.
Please post new questions in a new thread. Thanks
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 21, 2018 at 7:35 pm
paul s-306273 - Wednesday, March 21, 2018 6:13 AM
Any comments on rebuilding all indexes once a week as a housekeeping task?
Our DBA team does that to all production SQL Server databases.
Actually, yes... I have comments. Unless you've determined the correct Fill Factor for your indexes and you rebuild (not reorganize) them just as fragmentation starts (around 1%), stop doing index maintenance, because you're actually doing more harm than good. The recommended best practices were responsible for massive blocking for me back in 2016. I've developed (and am writing a new presentation about) a technique for index defragmentation routines that virtually eliminates all fragmentation, all page splits, and all the blocking that occurs when using the supposed "Best Practice" settings (10/30) recommended in the Microsoft documentation. Of course, I've also shown, through the testing I've been doing, that doing no index maintenance at all is just about as effective (Brent Ozar, thank you for the inspiration on that).
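For reference, a determined Fill Factor is applied at rebuild time. A hedged sketch with placeholder names; the 90 here is purely illustrative, not a recommendation (the whole point of the post above is that the right value has to be measured for each index's insert/update pattern):

```sql
-- Rebuild with an explicit Fill Factor, leaving ~10% free space per
-- leaf page to absorb new rows before a page split is needed.
-- dbo.Orders / IX_Orders_CustomerId are placeholders.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
REBUILD WITH (FILLFACTOR = 90, SORT_IN_TEMPDB = ON);
```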
--Jeff Moden
Change is inevitable... Change for the better is not.
March 22, 2018 at 3:38 am
I read this response from Tom Kyte in 2002 on the rebuilding of indexes:
'Ask them for the technical reasons WHY they rebuild.
When they say "performance of course" -- ask them for the performance metrics they took BEFORE the rebuild and AFTER the rebuild. They won't have any (no one ever does, no one seems to think about doing that). They will say "it just goes faster trust me".
When they say "to reclaim space of course" -- ask them "but how long does it take before the index is just as big as it was before the rebuild". You see they have not reclaimed any space -- the index will just grow again, eventually hitting a steady state size that it likes to be.
If you need to rebuild your indexes, you need 2x the space -- you'll have the old and the new index for a period of time. If you do it online, you'll need additional space to hold the changes that are made during the rebuild as well. '
Though things may have moved on since then, and SQL Server <> Oracle, I'd guess that in general terms it applies to SQL Server too. DBAs rebuild indexes because 'that's what they do'.
March 22, 2018 at 8:04 am
paul s-306273 - Thursday, March 22, 2018 3:38 AM
I read this response from Tom Kyte in 2002 on the rebuilding of indexes:
'Ask them for the technical reasons WHY they rebuild.
When they say "performance of course" -- ask them for the performance metrics they took BEFORE the rebuild and AFTER the rebuild. They won't have any (no one ever does, no one seems to think about doing that). They will say "it just goes faster trust me".
When they say "to reclaim space of course" -- ask them "but how long does it take before the index is just as big as it was before the rebuild". You see they have not reclaimed any space -- the index will just grow again, eventually hitting a steady state size that it likes to be.
If you need to rebuild your indexes, you need 2x the space -- you'll have the old and the new index for a period of time. If you do it online, you'll need additional space to hold the changes that are made during the rebuild as well. '
Though things may have moved on since then and SQL Server <> Oracle, I guess in general terms that applies to SQL Server. DBAs rebuild indexes because 'that's what they do'.
Oh, man.... I'm glad I'm not the only one that came to that conclusion based on Brent's suggestion years ago. Here's a teaser of what I'm working on...
The chart shows the first 5 weeks of what a single index goes through (for 4 different Fill Factors) if it's not based on an "ever-increasing" and "immutable" key. Every upswing is an increase in page count due to "bad" page splits. Every sharp decrease is when the "Best Practice" regimen (do nothing until 10% fragmentation, reorganize between 10 and 30%, rebuild at 30% or more) kicks in. The black line is the baseline for an "ever-increasing" and "immutable" key, which produces only "good" page splits. The supposed "Best Practice" method holds back on page splits until the last day or two before the over-10% mark, and then they all happen at once. This was the cause of huge blocking episodes every Monday (index maintenance ran every Sunday night) on my main production box.
Guess what happens if you simply stop rebuilding the index?
The Brown Line with the Yellow "glow" is the "No Defrag" line.
I've not finished my performance testing, nor have I finished with the exception of rows that are first inserted and then expand over the next day or so due to updates, but I can tell you that defragging a fragmented index that has a 100% Fill Factor is just plain stupid, because it causes THOUSANDS of "bad" page splits per day (each of which is a blocking episode and takes a heavy toll on the log file) and, remember, this is just for ONE index!!!
The charts for all of this stuff led me to a method to virtually eliminate all pages splits if you actually are addicted to index maintenance. I'm saving that for the presentation I'm giving in May and the article that will follow that.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 22, 2018 at 9:25 am
Love the
Brown Line with the Yellow "glow" is the "No Defrag" line.
March 22, 2018 at 10:59 am
Hello all,
This was a very interesting exchange of ideas about how often to defrag indexes, or whether to defrag at all... keeping in mind that it was prompted by a performance issue, I am sharing my thoughts:
1. In a perfect world, where there are no performance problems, you do not need to defrag anything.
2. In the real world, performance trouble arises whenever it wants, expected or not.
Under the assumption that the query is OK and the trouble happens after the massive data insert, my best guess is that this is a typical cardinality problem. It presents when a query that normally runs fine starts to run slowly out of nowhere (at least, that's what the users say).
If you look at the execution plan and notice a big difference between the estimated row count and the actual row count, then you have confirmation of the cardinality problem.
You can solve this issue by many ways:
1. Restart the instance. This will force every execution plan to be rebuilt.
2. Clear the server's plan cache. This also forces the rebuild of every execution plan.
3. Update statistics for that specific table. Since options 1 and 2 are last resorts, you can choose this.
4. Index defrag. Of course, this is one of the main factors you must keep in mind. I have put it last because, if the problem is solved with option 3, you don't need to go here.
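A minimal sketch of option 3 (the table name is a placeholder), with the heavier hammers shown as comments:

```sql
-- Option 3: refresh statistics on the affected table only. This also
-- causes cached plans that depend on those statistics to be recompiled
-- on next execution. dbo.BigTable is a placeholder.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- The heavier options, for reference:
-- DBCC FREEPROCCACHE;   -- clears the entire plan cache (last resort;
--                       -- every query on the instance recompiles)
```

FULLSCAN is the most accurate but also the most expensive sampling mode; on very large tables a sampled update (the default) may be good enough to fix the estimates.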
As somebody said before, every environment is different. My recommendation is that your DBA team dig deeper to determine the root cause of this performance issue.
To be practical: if your issue has been solved by the nightly index defrag process, and that process is not causing you any other problems, my guess is that your problem has been solved!
Hope this helps.