May 9, 2011 at 3:00 am
Hi,
So I'm trying to find a good way to do index maintenance.
I started by creating Maintenance Plans: Step 1 was CheckDB, Step 2 then rebuilds all indexes. (I was thinking this would update all statistics as well, and that even if the indexes haven't passed the usual fragmentation threshold to warrant a rebuild, it wouldn't do any harm. Could someone confirm that?)
I'm sure there are better ways to do this! I know there's a script by Michelle Ufford: http://sqlfool.com/2010/04/index-defrag-script-v4-0/ but just looking at all that T-SQL gives me the shivers!
What do other people do to deal with their indexes? The SQL boxes are only used 9-5, 7 days a week, so I have large maintenance windows.
It would be good to hear some advice and real-life experiences.
Thanks
S
May 9, 2011 at 6:31 am
Pretty much use Michelle's scripts. They're as solid as can be. We had reasonably wide maintenance windows and the databases weren't terribly large (100 GB each), so we did full index maintenance nightly because we could.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 9, 2011 at 6:36 am
I use her script and it works perfectly.
Rebuilding the index will update those stats (see the quick check after the script below).
I personally just run this daily (large maintenance window too, and it takes 15 minutes to run on our 18 GB DB):
-- Build one UPDATE STATISTICS ... WITH FULLSCAN statement per user table
DECLARE @Exec VARCHAR(MAX)
SELECT @Exec = ''
SELECT @Exec = 'UPDATE STATISTICS ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + ' WITH FULLSCAN;' + CHAR(13) + CHAR(10) + @Exec
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY t.name DESC
PRINT LEN(@Exec) -- sanity check on the size of the generated batch
PRINT @Exec      -- review the statements before they run
EXEC(@Exec)
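As a quick way to confirm the statistics really were refreshed by a rebuild (or by the script above), you can check the last update date per statistics object; the table name below is just a placeholder for one of your own:

-- Show when each statistics object on a table was last updated
SELECT s.name AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated  -- date of last stats update
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID('dbo.YourTable')  -- placeholder table name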
May 9, 2011 at 7:48 am
Thanks for input!
I will try to get grips with them this week!
S
May 9, 2011 at 8:00 am
The great thing about Michelle's script is the history. With further analysis I could spot about 100 indexes that needed to be moved to an 80 fill factor instead of 100.
That move alone eliminated 95% of the maintenance work.
I then changed the rebuild threshold so that I always rebuild instead of reorganize. That slashed pretty much the rest of the work.
In total I went from 8 GB / day in t-logs to barely 200 MB on average.
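In case it helps anyone, the actual change is just a rebuild with a FILLFACTOR option. This is a minimal sketch with placeholder index and table names, not something taken from Michelle's script:

-- Rebuild a frequently fragmenting index at 80% fill factor
-- (IX_YourIndex and dbo.YourTable are placeholders for your own objects)
ALTER INDEX IX_YourIndex ON dbo.YourTable
REBUILD WITH (FILLFACTOR = 80);

Once set this way, the 80 fill factor is stored with the index and reused by later rebuilds unless you override it.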
May 9, 2011 at 8:04 am
Ninja's_RGR'us (5/9/2011)
The great thing about Michelle's script is the history. With further analysis I could spot about 100 indexes that needed to be moved to an 80 fill factor instead of 100. That move alone eliminated 95% of the maintenance work.
I then changed the rebuild threshold so that I always rebuild instead of reorganize. That slashed pretty much the rest of the work.
In total I went from 8 GB / day in t-logs to barely 200 MB on average.
Well, this is clearly the way forward!
Does anyone know if there's a guide or a how-to blog post out there for us n00bs, by any chance?
S
May 9, 2011 at 8:32 am
No need, just run the script and set up the job to run daily.
I can send you my "tweaked" parameters if you want but I'm not sure you need them.
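If it helps, the daily job is nothing more than a SQL Agent job with one T-SQL step. This is only a sketch; dbo.YourIndexDefragProc and YourDatabase are placeholders for whatever procedure the script installs and the database you keep it in, and the schedule time is arbitrary:

USE msdb;
-- Create the job, its single T-SQL step, a daily schedule, and target the local server
EXEC dbo.sp_add_job @job_name = N'Nightly index defrag';
EXEC dbo.sp_add_jobstep @job_name = N'Nightly index defrag',
     @step_name = N'Run defrag proc',
     @subsystem = N'TSQL',
     @command = N'EXEC dbo.YourIndexDefragProc;',  -- placeholder procedure name
     @database_name = N'YourDatabase';             -- placeholder database
EXEC dbo.sp_add_jobschedule @job_name = N'Nightly index defrag',
     @name = N'Daily 2 AM',
     @freq_type = 4,               -- daily
     @freq_interval = 1,
     @active_start_time = 020000;  -- 02:00:00
EXEC dbo.sp_add_jobserver @job_name = N'Nightly index defrag';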
May 11, 2011 at 2:46 pm
We just started testing Ola Hallengren's maintenance scripts and they're very well thought out and thorough. However, they log to a text file, not a table, which will make it harder to review what's frequently getting fragmented. I may have to modify Ola's scripts to do that.
May 12, 2011 at 5:30 am
Michelle's script sends its data to a table, so that may be easier to set up for you. And AFAIK there's nothing left out of her script.
May 12, 2011 at 10:36 am
Could you please elaborate on how you spotted that the fill factors needed to be changed, and which portion of her code helped you do it?
Thanks.
May 12, 2011 at 10:53 am
RimonTL (5/12/2011)
Could you please elaborate on how you spotted that the fill factors needed to be changed, and which portion of her code helped you do it? Thanks.
I let the code run for 4-5 months (you don't have to wait that long; maybe a couple of weeks).
I then aggregated the data to figure out how many rebuilds / reorgs I had gotten compared to the number of business days the server had been online.
Anything above 20% (or roughly once per week) got rebuilt at 80% fill factor. Our indexes are really small here, so that meant only 1 GB of extra data, or 6% of the system. So I chose to be really aggressive rather than passive about this. I also knew at this point that the system does 550+ reads for every write. With that huge a ratio, I'm optimizing almost exclusively for reads when it comes to indexes... knowing that writes won't ever be an issue. (Now don't go overboard and build 100 indexes on the same table or include all the columns.)
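The aggregation itself was nothing fancy. Roughly something like this, where dbo.IndexDefragHistory and its columns stand in for whatever logging table the script actually writes to:

-- Count how many distinct days each index needed a rebuild or reorg
-- (table and column names are placeholders for the defrag log table)
SELECT databaseName, objectName, indexName,
       COUNT(DISTINCT CONVERT(DATE, defragDate)) AS days_defragged
FROM dbo.IndexDefragHistory
GROUP BY databaseName, objectName, indexName
ORDER BY days_defragged DESC;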
That took the t-log backups from 5-11 GB / day to 200 MB on average. At that point I also chose to only do rebuilds, so from 10% fragmentation upward I rebuild instead of reorganizing in the 10-30% range. I still keep the 1000-page minimum, though.
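Those thresholds map straight onto sys.dm_db_index_physical_stats; here's a rough sketch of the 10% / 1000-page check against the current database:

-- Indexes in the current database over 10% fragmentation and at least 1000 pages
SELECT OBJECT_NAME(ps.object_id) AS table_name,
       i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent >= 10
  AND ps.page_count >= 1000
  AND ps.index_id > 0;  -- skip heaps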
It also sped up a very annoying RBAR process that needs to run daily (loads of updates).
I then finally shrank the t-log file to the bare minimum. In my case I chose the size of the biggest table in the system + 50%, so the log file is now 3 GB, even though the biggest log backups I get are around 500 MB. I don't want to let it auto-grow for nothing.
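For reference, the one-time shrink is just a DBCC SHRINKFILE to a fixed target size in MB; the logical file name and the 3072 MB target below are only examples:

-- One-time shrink of the transaction log to 3 GB (target size is in MB)
-- 'YourDatabase_log' is a placeholder for the log file's logical name
DBCC SHRINKFILE (N'YourDatabase_log', 3072);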