February 5, 2009 at 12:47 pm
... I am seeing a lot of recompiles .. is there something that can be done about that?
A profiler trace should help in finding the reason those recompiles are happening.
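Alongside a Profiler trace (the SP:Recompile event class is the usual one to watch), the same counter perfmon exposes can be read from T-SQL. A minimal sketch, assuming SQL Server 2005 or later:

```sql
-- Reads the cumulative recompile counter; note cntr_value counts
-- recompiles since server startup, so sample it twice and take the
-- difference to get an actual rate.
SELECT object_name, counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE counter_name = 'SQL Re-Compilations/sec';
```

On SQL Server 2000 you'd stick with perfmon or Profiler for this, since the DMV doesn't exist there.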
February 5, 2009 at 1:22 pm
Okay, I've done that .. and I was actually able to dig up some historical perfmon data to look at Recompiles/sec; six checks that were done came in over .75. When I ran the monitoring after the changes, the average was .83.
February 6, 2009 at 6:28 am
One update .. I did a DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES against some of the databases to check for fragmentation, and more than a few things came back at about 30% (some upwards of 90%) fragmentation, which I figure could be causing some of the problems I am seeing.
My question is, can I run a DBCC INDEXDEFRAG without locking out the end users or causing them any further issues? Basically I need to run it without any chance of affecting them, since the databases are in constant heavy use .. even with things being slower.
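For reference, the check described above can be captured into a table so the worst offenders sort to the top. A sketch, with the column list matching what SHOWCONTIG returns on SQL Server 2000/2005:

```sql
-- Collect DBCC SHOWCONTIG output for all indexes in the current
-- database, then list anything over the ~30% mark mentioned above.
CREATE TABLE #frag (
    ObjectName sysname, ObjectId int, IndexName sysname NULL, IndexId int,
    [Level] int, Pages int, [Rows] int, MinimumRecordSize int,
    MaximumRecordSize int, AverageRecordSize float, ForwardedRecords int,
    Extents int, ExtentSwitches int, AverageFreeBytes float,
    AveragePageDensity float, ScanDensity float, BestCount int,
    ActualCount int, LogicalFragmentation float, ExtentFragmentation float
);

INSERT INTO #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES');

SELECT ObjectName, IndexName, LogicalFragmentation, ScanDensity
FROM #frag
WHERE LogicalFragmentation > 30
ORDER BY LogicalFragmentation DESC;

DROP TABLE #frag;
```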
February 6, 2009 at 6:34 am
PER BOL...
DBCC INDEXDEFRAG is an online operation. While this operation is running, the underlying table is available to users of the database. A relatively unfragmented index can be defragmented faster than a new index can be built because the time to defragment is related to the amount of fragmentation. A very fragmented index might take considerably longer to defragment than to rebuild. In addition, the defragmentation is always fully logged, regardless of the database recovery model setting (see ALTER DATABASE). The defragmentation of a very fragmented index can generate more log than even a fully logged index creation. The defragmentation, however, is performed as a series of short transactions and thus does not require a large log if log backups are taken frequently or if the recovery model setting is SIMPLE.
Also, pay attention to what it says about the operation being fully logged; you want to make sure you're taking frequent log backups so you don't run yourself out of disk space...
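The syntax is simple; a minimal sketch, where MyDB, dbo.MyTable, and IX_MyTable_Col are placeholder names:

```sql
-- DBCC INDEXDEFRAG (database, table, index) runs online as a series
-- of short transactions, so users keep access while it works.
DBCC INDEXDEFRAG ('MyDB', 'dbo.MyTable', 'IX_MyTable_Col');

-- Because the work is fully logged, keep log backups frequent while
-- it runs (the path below is illustrative):
BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn';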
-Luke.
February 6, 2009 at 7:21 am
Thanks for your reply. That gives me a better feeling about trying to clean up the fragmentation.
Now is there a way to resolve fragmentation across all databases?
Remember .. I'm a bit of a SQL newbie 🙂
February 6, 2009 at 7:35 am
There are a number of scripts on this site and others that will, at the very least, identify fragmentation levels. Some even include code that lets you specify a threshold and run an index rebuild/defragment based on the fragmentation level.
A couple of important things to note... Don't try to do all of them at the same time, particularly on a server that is already resource constrained. Keep an eye on disk space and log file utilization. For badly fragmented indexes it's gonna be faster to rebuild than to defragment. Since this is the first time you're cleaning up your indexes after your migration, I'd suggest a manual review of the fragmentation levels; start slowly, one or a few at a time, until you get a sense of how it will affect your users. If you have a good development system, that would be the place to start, so you have an idea of how long it will actually take and whether you'll need to schedule a maintenance window or something.
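The threshold idea those scripts use can be sketched like this, assuming SQL Server 2005 or later (the 10/30 cutoffs are the commonly cited guidance, not anything from this thread):

```sql
-- Survey fragmentation in the current database and suggest an action
-- per index; run the actual rebuilds/defrags one at a time by hand.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.avg_fragmentation_in_percent,
       CASE WHEN s.avg_fragmentation_in_percent > 30 THEN 'REBUILD'
            WHEN s.avg_fragmentation_in_percent > 10 THEN 'DEFRAG'
            ELSE 'OK' END AS suggested_action
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.index_id > 0
ORDER BY s.avg_fragmentation_in_percent DESC;
```

On SQL Server 2000 the equivalent survey would come from DBCC SHOWCONTIG instead, since that DMV isn't available.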
-Luke.
February 6, 2009 at 7:38 am
Thanks a ton ... I found some scripts that should help me out.
My question is ... can a simple file copy like the one I did cause the indexes and fragmentation to get as out of whack as I'm starting to see?
February 6, 2009 at 8:03 am
If you never updated your stats after you copied the files, and/or depending on the size of your database and how it was copied, then yes, maybe.
When you re-attached the databases on your new server, your first steps should have been to update stats, then look at fragmentation and re-index as needed.
Or, if there were no jobs/maintenance tasks set up to handle fragmentation, it could have been that bad before the copy, and now that you're seeing performance problems you're noticing it and taking steps to make sure it doesn't happen again. Or perhaps the maintenance tasks to reindex your tables were never recreated on the new server, and since it's been a week in production, that's just how badly your indexes fragment in any given week...
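Those post-attach steps can be sketched in a couple of lines; MyDB and dbo.MyTable are placeholders:

```sql
-- Step 1 after attaching on the new server: refresh statistics
-- database-wide (sp_updatestats only touches stats it thinks are stale).
USE MyDB;
EXEC sp_updatestats;

-- Then check fragmentation (DBCC SHOWCONTIG) and rebuild or defragment
-- individual indexes as needed, e.g.:
DBCC INDEXDEFRAG ('MyDB', 'dbo.MyTable');
```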
-Luke.
February 6, 2009 at 8:13 am
That is what I kinda figured. The problems could have existed before, but we did move to "lesser" hardware that may just be exposing them now.
We haven't/can't re-index since it will lock the users out, and we're a 24x7 operation, so at best I'm able to work on the defragmentation.
Now, we did run update stats, but only with a 10% sample ..... we will be running it at 100% next, because I don't think it has ever been done that high. That should help, I hope ...
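For the record, the 10% vs. 100% sampling difference looks like this in T-SQL (dbo.MyTable is a placeholder):

```sql
-- What was run: statistics built from a 10% sample of the rows.
UPDATE STATISTICS dbo.MyTable WITH SAMPLE 10 PERCENT;

-- The planned 100% run: reads every row, so it's slower but gives
-- the optimizer the most accurate picture.
UPDATE STATISTICS dbo.MyTable WITH FULLSCAN;
```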
I did notice on one of the problem databases that extent scan fragmentation is at 83%, logical scan fragmentation is at 33%, and scan density is 16%.