April 9, 2007 at 2:06 pm
My strategy is to run the offline script first, then the online script, repeating that for each database, and then run the stats scripts.
I start with the biggest database and work my way down to the smaller ones.
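Roughly, the job looks like this (the proc names here are made-up stand-ins for the scripts in this thread):

    -- Offline pass, then online pass, per database, biggest database first;
    -- stats scripts run last. Proc names are hypothetical.
    DECLARE @db sysname, @sql nvarchar(400);

    DECLARE dbs CURSOR FOR
        SELECT d.name
        FROM sys.databases AS d
        JOIN (SELECT database_id, SUM(size) AS sz
              FROM sys.master_files
              GROUP BY database_id) AS f
          ON f.database_id = d.database_id
        WHERE d.database_id > 4               -- skip the system databases
        ORDER BY f.sz DESC;                   -- biggest database first

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'EXEC ' + QUOTENAME(@db) + N'.dbo.usp_IndexMaint_Offline;';
        EXEC (@sql);                          -- offline script first
        SET @sql = N'EXEC ' + QUOTENAME(@db) + N'.dbo.usp_IndexMaint_Online;';
        EXEC (@sql);                          -- then the online script
        FETCH NEXT FROM dbs INTO @db;
    END
    CLOSE dbs;
    DEALLOCATE dbs;

    EXEC dbo.usp_UpdateStats;                 -- stats scripts last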
April 9, 2007 at 2:10 pm
This is great. Thanks very much.
April 10, 2007 at 2:37 pm
Ok, so from the layman's point of view (i.e. my boss/supervisor): if I have 122 indexes in a particular database that is used on a daily basis, all with fragmentation >70%, what kind of overall performance gain could one literally expect?
Is it going to blow his socks off with vast performance increases, or is it just good practice? I realize that without going into details it's hard to say, but I'm looking for a ballpark.
--Chris
April 11, 2007 at 7:46 am
Fragmentation will result in index seeks turning into index scans or, worse, table scans.
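A quick way to see where you stand, using the SQL 2005 DMF these scripts are built on (the 10% threshold matches the scripts in this thread):

    -- List indexes over the fragmentation threshold, worst first.
    SELECT OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 10
    ORDER BY ips.avg_fragmentation_in_percent DESC;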
April 11, 2007 at 8:16 am
Ballpark: as indexes degrade, performance tails off, because retrieving the data just requires navigating across more pages. Then, as the statistics get more and more out of date relative to the fragmentation, you'll start to get poor query plans, further degrading performance. You may even start to see deadlocks as more and more pages get locked that probably shouldn't be.
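Keeping the statistics current is the cheap half of the fix. Something as simple as this goes a long way (dbo.Orders is just a stand-in table name):

    -- A rebuild refreshes the rebuilt index's statistics as a side effect,
    -- but column statistics still need an explicit refresh.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN;  -- one hypothetical table
    EXEC sp_updatestats;                         -- or sweep everything stale in the database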
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 12, 2007 at 6:46 am
Ok, so perhaps the stupid question of the moment, but I'll ask.
Why go through all the trouble of a customized process to evaluate and rebuild/reorganize indexes, versus the standard maintenance plan that can be scheduled to rebuild/reorganize indexes?
Is there a particular difference, or is it simply to have a finer degree of control?
Thanks,
Chris
April 12, 2007 at 7:20 am
Chris;
I will give you an example of what I need to do and why it is so important to have a process like the ones described in this thread. I manage several large databases. The one that sees the most activity has 1.4 million subscribers to a .COM website. There are over 400 tables, 2200 stored procedures and who knows how many indexes on these tables.
My goal is to find a process that dynamically manages the worst-case fragmentation during a very short time window. If I were to run normal maintenance jobs to manage my database, the job would take hours and would never finish in time (running beyond my maintenance window).
I also have to make sure the database is running at its best so that the several thousand concurrent users don't run into performance problems.
So, as you can see, the scope of managing my database falls outside the realm of simple maintenance procedures that would reindex & update stats every night.
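The shape of what I'm after is to grab only the worst offenders on each pass, so the job always fits the window. Roughly (the TOP 20 and the 30% cutoff are just illustration numbers):

    -- Worst-case fragmentation first, capped so the pass stays inside the window.
    SELECT TOP (20)
           OBJECT_NAME(ips.object_id)        AS table_name,
           i.name                            AS index_name,
           ips.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 30
    ORDER BY ips.avg_fragmentation_in_percent DESC;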
Processes like the ones described here lead me toward an automated process to deal with those issues.
Once I get an automated process working to my liking, I will then roll out the same process to the other large databases that I manage.
The beauty of an automated process is that it can manage a database of any scale, hands-free.
I pride myself on being old-school when it comes to maintenance. I'd rather put the coding effort in up front, set it into motion, and then "forget about it"... or as they would say in the "hood": fuggedaboutit.
Kurt Zimmerman
DBA
RHWI
Poughkeepsie, NY
April 12, 2007 at 8:28 am
Kurt,
Thanks for taking the time to explain. It DOES actually make sense when you put it like that. I guess the biggest problem I have when it comes to a lot of things is just my scope and the pond that I swim in. At my last job we had a small application with a very limited number of users, so the 'set it and forget it' mentality kinda made sense. At my new job here, day 8 and counting, we do reporting for financial brokerage/portfolio management. The pond has gotten larger, although it is still nothing like your situation.
I guess that forming good habits initially pays off in the long run.
Thanks for the info.
Chris
April 12, 2007 at 8:58 am
Another good reason for rebuilding indexes only when it is significant to do so is that the index rebuilds get logged as transactions. These transactions, which are really only about restructuring the database rather than any Update, Delete, or Insert operations, can really increase the size of your logs and your backup files. Sometimes the transaction log can end up at least as big as the entire .mdf file, which attracts the attention of our sysadmin.
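You can watch the impact directly; rebuilds are fully logged under the FULL recovery model, and only minimally logged under BULK_LOGGED or SIMPLE (dbo.Orders below is just a stand-in):

    DBCC SQLPERF(LOGSPACE);                  -- log size and % used, per database
    ALTER INDEX ALL ON dbo.Orders REBUILD;   -- rebuild a hypothetical table's indexes
    DBCC SQLPERF(LOGSPACE);                  -- compare log growth after the rebuild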
April 12, 2007 at 9:47 am
SQL Noob;
Again, thanks for sharing your procedures. I have taken the liberty of modifying the process so it runs from a separate database I call DBA. I keep all of my database maintenance utilities there; this way I don't clutter up the production databases with extra tables and stored procedures. In addition, since I have several databases to manage on the same box, it makes sense to go through that effort.
I've also been thinking about combining your offline and online procs into a single proc that takes an offline/online parameter, so that I'm maintaining less code... did I tell you I'm lazy in that regard... trying to get more out of less??? I've also added a database name parameter to the stored procs, again in an effort to make them portable across multiple databases.
One last thing I have to do, as I mentioned in a previous post, is to put a time limit (ending time) on the job so it doesn't run past the maintenance window.
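The skeleton I have in mind looks something like this (names are illustrative and the body is elided):

    -- One proc, run from the DBA database, parameterized by target database,
    -- online/offline mode, and a cutoff time for the maintenance window.
    CREATE PROCEDURE dbo.usp_IndexMaint
        @DatabaseName sysname,
        @Mode         varchar(7),    -- 'ONLINE' or 'OFFLINE'
        @StopTime     datetime       -- quit before running past the window
    AS
    BEGIN
        SET NOCOUNT ON;
        -- ... build the worklist of fragmented indexes in @DatabaseName here ...
        WHILE GETDATE() < @StopTime  -- honor the maintenance window
        BEGIN
            -- ... take the next index off the worklist and issue
            -- ALTER INDEX ... REBUILD WITH (ONLINE = ON) or the offline variant,
            -- depending on @Mode; leave the loop when the worklist is empty ...
            BREAK;                   -- placeholder so the sketch terminates
        END
    END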
This is all good stuff.. once again thanks for sharing.
Kurt Zimmerman
DBA
RHWI
Poughkeepsie, NY
April 12, 2007 at 11:46 am
My senior DBA found one big all-in-one script here in the script database, but it didn't work too well. I played around with the online script for a while, and the biggest stumbling block I found was that clustered indexes on a table with LOB data had to be rebuilt offline. Otherwise it would have been easy.
If I played with the code in the cursor and the code that identifies the tables to rebuild, then dumped the commands into another table and ran them from there, it should work. I might take it up just for the challenge.
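Spotting those tables up front isn't too bad; something like this finds the ones that have to go down the offline path (in SQL 2005, a clustered index can't be rebuilt WITH (ONLINE = ON) if the table has LOB columns):

    -- Tables with text/ntext/image/xml or (max)-type columns need offline rebuilds.
    SELECT DISTINCT OBJECT_NAME(c.object_id) AS table_name
    FROM sys.columns AS c
    JOIN sys.types   AS t
      ON t.user_type_id = c.user_type_id
    WHERE t.name IN ('text', 'ntext', 'image', 'xml')
       OR (t.name IN ('varchar', 'nvarchar', 'varbinary') AND c.max_length = -1);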
For your maintenance window you can try to split the job: run the large tables first at night and let the smaller tables run into the daytime. The system view gathers the row count into the record_count column. I've run my maintenance jobs in the daytime for testing, as well as to see if anyone complains, and no one ever has. Tomorrow we upgrade our main SQL Servers to SQL 2005, and next week we might run maintenance in the daytime just to see what happens. Also try running maintenance every night, because after a few days it will reduce the number of indexes that need rebuilding, so it should be very quick after a week or so.
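Building the size-ordered list is one query (note record_count is only populated in SAMPLED or DETAILED mode; in LIMITED mode, page_count is the cheap proxy):

    -- Largest tables first, for the night run.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           ips.record_count,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
    WHERE ips.index_id IN (0, 1)      -- heap or clustered index = the table itself
    ORDER BY ips.page_count DESC;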
On a replicated copy of one of the databases with the most LOB data, I'm seeing the offline time reduced to 10 minutes or so, down from hours when it ran on SQL 2000. There is also a bug in the script, so it never runs anything except REBUILD; I never bothered to fix it, so we only run REBUILD here. I also put the offline script into the jobs for databases where no offline indexing is required, just in case of future DB changes, so nothing is forgotten later.
Next week I think I'll play with your table variable idea. I've only been doing this DBA thing for a little over a year, and so far this is the peak of my accomplishments.
April 25, 2007 at 1:57 pm
SQL Noob;
As for your bug where everything gets rebuilt: check the selection in your online and offline scripts. You are selecting anything that is > 10% fragmented, and according to your logic anything > 10% gets a REBUILD. Therefore you are never selecting anything with a fragmentation percent < 10%, which is what your logic would REORGANIZE.
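In other words, the selection has to reach below the REBUILD cutoff so the REORGANIZE branch can ever fire. Roughly (the 5% floor is just an example):

    -- Select below the REBUILD threshold too, so both branches get work.
    SELECT OBJECT_NAME(ips.object_id) AS table_name,
           i.name                     AS index_name,
           CASE WHEN ips.avg_fragmentation_in_percent > 10
                THEN 'REBUILD'
                ELSE 'REORGANIZE' END AS action
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id  = ips.index_id
    WHERE ips.avg_fragmentation_in_percent > 5;   -- was > 10, which starved REORGANIZE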
I am now faced with needing something like this on a SQL 2000 box, so I'm back to the drawing board... Hopefully I can find enough information on the 2000 index structures to use these sprocs and just tweak them to make them 2000 compliant.
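As far as I can tell, the rough 2000 equivalents are DBCC SHOWCONTIG to measure fragmentation and DBCC INDEXDEFRAG (online) or DBCC DBREINDEX (offline) to fix it (table and index names below are made up):

    DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES;              -- see LogicalFragmentation
    DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_CustomerID');  -- online defrag, one index
    DBCC DBREINDEX ('dbo.Orders', '', 90);                       -- offline rebuild, fill factor 90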
Kurt Zimmerman
DBA
RHWI
Poughkeepsie, NY