March 1, 2013 at 9:23 am
Hi all
What is your opinion about killing ALTER INDEX REBUILD jobs? Can this be done easily, never, or does it depend? I'm asking because recently we had an issue where an ALTER INDEX REBUILD job had to be killed because it had a huge performance impact and the system was performing poorly. After killing this job the system was almost unusable and under heavy load from the rollback. So, should I never kill an ALTER INDEX job again?
Regards, Christian
March 1, 2013 at 9:58 am
Killing a command is not something you plan, but you should be aware that rollback is expensive, depending on how much transaction log has been generated. What recovery model is the db in? Was it an online rebuild?
If you can afford an offline rebuild during off-peak hours, it is faster. You could put the db in the bulk-logged recovery model so the rebuild is minimally logged and therefore significantly faster, sort in tempdb, and grow the log, tempdb and data files beforehand so they are big enough for the operation. It can be parallelized as well, and you can use several equally sized files in the filegroup the table belongs to. You could also split the table into a partitioned view or partitions and rebuild it piece by piece. There are many options to make that operation faster without having to roll it back.
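Just as a rough sketch of the bulk-logged, offline, parallel approach (the database and table names here are made up, and you would pick your own MAXDOP and window):

-- Switch to bulk-logged so the rebuild is minimally logged (take a log backup before and after)
ALTER DATABASE MyDb SET RECOVERY BULK_LOGGED;

-- Offline, parallel rebuild during the off-peak window
ALTER INDEX ALL ON dbo.MyBigTable
REBUILD WITH (ONLINE = OFF, MAXDOP = 4);

-- Back to full recovery, then take a log backup
ALTER DATABASE MyDb SET RECOVERY FULL;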
March 1, 2013 at 10:08 am
The answer is it depends. Killing a rebuild will take some time to roll back and release its locks, and you will continue to suffer while that happens.
Do you run 24/7 or do you have a window where you could perform some maintenance?
If I am running a rebuild during the day, it's because I absolutely have to. Normally, I'd take the reorganize route. My maintenance jobs take care of rebuilds - I'm not even sure I can remember when I last ran a rebuild in production if I am honest.
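For illustration, the reorganize route is a single statement along these lines (index and table names are just placeholders):

-- Reorganize is always online, works page by page, and can be stopped at any point
-- without a long rollback; the work already done is kept.
ALTER INDEX IX_MyBigTable_SomeColumn ON dbo.MyBigTable REORGANIZE;

-- Unlike a rebuild, reorganize does not update statistics, so follow up with:
UPDATE STATISTICS dbo.MyBigTable IX_MyBigTable_SomeColumn;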
March 1, 2013 at 1:15 pm
One option is to lower the fill factor, so future rebuilds will be needed less often. The default fill factor of 0 (0 or 100 means 100% full pages) guarantees fragmentation will occur very fast if your clustering key is not ever-increasing and there are lots of inserts/updates during the day. So besides the fill factor, one more option is to choose a different clustering key which is ever-increasing. Then you will not need to rebuild the table so often, maybe never. Note that the primary key does not have to change; the PK and the clustered index key are not the same thing and can be completely different sets of columns. Non-clustered indexes will still be left to rebuild/reorganize occasionally, but they are much smaller than the table itself, and that frequency can be lowered by using a carefully selected fill factor.
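As an illustration only (hypothetical names, and 90 is just an example value; the right fill factor depends on your insert/update pattern):

-- Leave roughly 10% free space on the leaf pages so future inserts/updates cause fewer page splits
ALTER INDEX ALL ON dbo.MyBigTable
REBUILD WITH (FILLFACTOR = 90);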
March 1, 2013 at 1:36 pm
Thanks for all the replies so far. The affected database is part of a database mirror and is therefore in the full recovery model; also, the edition is Standard, so no online rebuild for me. At the moment there is only a single filegroup with one big file; would it really be better to split it into several filegroups?
At the moment there is no real maintenance window, just two days with fewer transactions. I'd be happy if there were one, as then the maintenance job would not be disturbed by something else. One more question remains: would it be possible just to go the reorg route and do a rebuild only when it's absolutely necessary? At the moment I do reorgs on weekdays and rebuilds at weekends, and to be honest, I'd choose this route because it seemed best to me. I have stopped the rebuild job for this weekend and will check how fragmented the indexes are on Monday, so I'll see how they behave.
As for the point about fill factor and the clustered index, I have to check and read a little bit more about it.
Thanks, Christian
March 1, 2013 at 1:51 pm
Reorgs are definitely better during the week. I think my script is set up to reorg above 10% fragmentation and rebuild at 30%. The fill factors will make a difference. If they are 100% or too high, you'll be fragmenting your indexes far quicker.
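Something along these lines, as a rough sketch of those thresholds (a real maintenance script would also filter on page_count and loop through the results doing the actual reorg/rebuild):

-- List indexes with their fragmentation and the action the 10%/30% thresholds suggest
SELECT  OBJECT_NAME(ips.object_id)            AS table_name,
        i.name                                AS index_name,
        ips.avg_fragmentation_in_percent,
        CASE
            WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
            WHEN ips.avg_fragmentation_in_percent >= 10 THEN 'REORGANIZE'
            ELSE 'leave alone'
        END                                   AS suggested_action
FROM    sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN    sys.indexes AS i
        ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE   ips.index_id > 0   -- skip heaps
ORDER BY ips.avg_fragmentation_in_percent DESC;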
Whilst doing a reorg, you can stop the process pretty much straight away. Should you then find a quieter time in the day or night where you could resume the reorg, it would pick up more or less where you left off (plus any additional fragmentation that may have occurred between you killing the process and when you next run it).
Do you do selective maintenance, i.e. the indexes which need it or do you do everything?
March 1, 2013 at 1:51 pm
mickdunde (3/1/2013)
The affected database is part of a database mirror and is therefore in the full recovery model; also, the edition is Standard, so no online rebuild for me. At the moment there is only a single filegroup with one big file; would it really be better to split it into several filegroups?
If you have multiple drives to move the files to, maybe.
One more question remains: would it be possible just to go the reorg route and do a rebuild only when it's absolutely necessary? At the moment I do reorgs on weekdays and rebuilds at weekends, and to be honest, I'd choose this route because it seemed best to me.
It's certainly an option.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 1, 2013 at 1:53 pm
First, make sure you're checking index fragmentation and only rebuilding indexes that actually need it.
Also, if you have tempdb properly tuned, look into using the "SORT_IN_TEMPDB = ON" option when rebuilding indexes. That can significantly reduce the rebuild activity that has to occur in the main db itself.
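For example, roughly (the index and table names are placeholders):

-- Sort runs for the rebuild go to tempdb instead of the user database's files and log
ALTER INDEX PK_MyBigTable ON dbo.MyBigTable
REBUILD WITH (SORT_IN_TEMPDB = ON);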
Multiple files/filegroups could potentially help I/O not only for rebuilds but for all access, but it depends on the details: which disks the new files/filegroups are placed on, whether it is a separate access path, etc.
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".