February 1, 2007 at 5:00 am
We currently run DBCC CHECKTABLE and an index defrag nightly on our main DB server, and the process takes anywhere between 3 and 7 hours. The time it takes increases gradually over time. Once in a while we do a DBCC DBREINDEX, and this usually brings the defrag time back down again. However, the first time we run the defrag on the night after the reindex, it takes all day, and then miraculously the following night it drops back down to 3 hours.
Can anyone explain this? Am I missing a step?
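For reference, a simplified sketch of the kind of nightly job being described (table and index names below are placeholders, not the real ones):
[code]
-- Nightly maintenance sketch: integrity check plus an online defrag.
-- 'dbo.Orders' / 'PK_Orders' are placeholder names; a real job would loop over every table and index.
DBCC CHECKTABLE ('dbo.Orders') WITH NO_INFOMSGS      -- integrity check for one table
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')      -- 0 = current database; defrags one index online
[/code]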
Thanks Jeet
February 1, 2007 at 10:19 am
February 1, 2007 at 10:38 am
The two of them do nearly the same thing; what you need to think about is your environment and which one suits it best.
To summarise:
DBREINDEX: faster, but causes blocking, and you can also specify your fill factor.
INDEXDEFRAG: slower, but doesn't cause blocking while it's running.
It also depends on how fragmented your indexes are.
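A minimal sketch of what each call looks like, using a placeholder table and index name:
[code]
-- DBREINDEX: offline rebuild of all indexes on a table, with an explicit fill factor.
DBCC DBREINDEX ('dbo.Orders', '', 90)             -- '' = every index on the table, 90% fill factor

-- INDEXDEFRAG: online, leaf-level defrag of a single index; no long-held table locks.
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')   -- 0 = current database
[/code]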
Hope this helps
February 2, 2007 at 3:49 am
The DBREINDEX takes about 3-4 hours, and we've now started to run it once a week, as this keeps the defrag and CHECKTABLE times down to a minimum.
Thanks Jeet
February 2, 2007 at 3:53 am
You would think they would do the same thing; however, without a reindex the defrag time continues to creep up. So although it would be nice to get away with just the less intrusive defrag, the reality is that a reindex is needed.
Thanks Jeet
February 2, 2007 at 5:35 am
Actually, DBREINDEX and INDEXDEFRAG are not the same thing and do not produce the same end result. Please check BOL.
An index defrag may take much longer, will generally cause far more transaction log growth, and may not actually do much good.
To speed up index rebuilds, get Enterprise Edition, more procs, and above all faster disks (more spindles) - and no RAID 5!
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 2, 2007 at 6:50 am
Hi Jeet,
DBREINDEX and INDEXDEFRAG are not one and the same thing. DBREINDEX rebuilds the entire index structure, updates statistics, and, if the table has a clustered index, takes an exclusive table lock. INDEXDEFRAG only cleans up fragmentation at the leaf level of the index and does not update statistics.
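Something like this - with a placeholder table and index name - is the usual pattern for covering the statistics gap after a defrag:
[code]
-- INDEXDEFRAG leaves statistics alone, so follow it with an explicit update.
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'PK_Orders')
UPDATE STATISTICS dbo.Orders WITH FULLSCAN   -- or EXEC sp_updatestats for the whole database
[/code]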
[Colin] damn, you beat me to it.
Rgds iwg
February 2, 2007 at 8:51 am
I would recommend using DBREINDEX as your primary index maintenance strategy. I would also recommend running DBCC SHOWCONTIG directly before, directly after, and halfway between DBREINDEX runs. This will let you analyze your fragmentation better and help you fine-tune when you should be running both DBREINDEX and INDEXDEFRAG.
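For example (table name is a placeholder):
[code]
-- Capture fragmentation figures before and after maintenance so you can compare runs.
DBCC SHOWCONTIG ('dbo.Orders') WITH TABLERESULTS, ALL_INDEXES
-- Watch Scan Density (want it near 100%) and Logical Scan Fragmentation (want it near 0%).
[/code]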
February 2, 2007 at 10:49 am
Why no RAID 5? Are you comparing speed only to disc mirroring? Or are you comparing it to RAID 10?
I always figured that RAID 5 was the best bang-for-the-buck solution.
February 3, 2007 at 9:13 am
Use RAID 5 where you only want to read data; as rebuilding an index involves both reads and writes, RAID 5 is poor for it. I see absolutely no justification or use for RAID 5 (except for read-only filegroups) in a SQL Server environment. There is no "bang".
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 5, 2007 at 6:11 am
The "bang" would be how much less space you lose compared to any RAID level that uses mirroring.
February 5, 2007 at 7:28 am
Versus performance? No contest, I'm afraid. With the growing size of disks and the adoption of SAN technology, the bean counters look more and more at storage as something to be used rather than something that should perform. In the overall scheme of things a few extra hard disks cost nothing, except in an overpriced and poorly configured SAN. It's a shame the concept of a storage area network is seen by most as a fibre-attached managed disk array box (which it's not); the vendors get rich and database performance suffers.
[font="Comic Sans MS"]The GrumpyOldDBA[/font]
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
February 5, 2007 at 7:37 am
So if I were going to go out and buy a new server tomorrow, had about $10k, and needed at least 1 TB of disk space, what RAID level would you suggest?
I guess I'm a little behind the times. We haven't had to buy a RAID controller in about 3-4 years.
February 6, 2007 at 5:51 am
RAID 5 has a number of issues. One is performance when writes are involved - twice as many I/Os as RAID 10. Second is its redundancy: it can withstand the loss of one disk. That's ONE disk. If you lose a disk, then until you have swapped it and rebuilt the parity and data onto the replacement, you are at risk of losing the whole array, just as if you were running RAID 0. How long that takes depends on how much data and parity has to be rebuilt; if it's a large DB set, it can take hours. Statistically there is a much greater risk of a multiple-disk failure taking out the array with RAID 5 than with RAID 10 (I won't do the maths here, as I'm sure you can do it yourself). RAID 10 can withstand multiple drive failures, as long as they are not in the same mirrored pair. Add to this the risk of someone swapping out the wrong drive when they go to replace the failed one (don't laugh, we actually had this happen, but because it was RAID 10 and the pulled drive wasn't in the same mirror as the failed one, we survived). We have also witnessed multiple drive failures in the same arrays around the same time period - I won't name the supplier here.
This also ties into the performance problem, because while the system is running with the loss of that ONE disk, it has to reconstruct data from the parity information on the other drives in the array. Then, when you have swapped the drive, the system not only has to keep servicing user requests and keep using parity info from the other drives, it also has to use that parity info to rebuild the new drive. Performance can drop by up to 80% during all this time - and you're still exposed, because until the rebuild onto the new disk is complete, another disk failure will invalidate the whole array. On top of this there is the issue of space. This is a strange one, as it depends on the number of disks in the array - with RAID 5, the more disks you have, the smaller the percentage of space you lose to parity.
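To put rough, illustrative numbers on the space and I/O points: with eight 146 GB drives, RAID 5 gives you about 7 x 146 GB (roughly 1 TB) usable and loses one drive's worth to parity, whereas RAID 10 gives you 4 x 146 GB (about 584 GB). On writes, each logical write on RAID 5 typically costs four physical I/Os (read old data, read old parity, write new data, write new parity) against two on RAID 10 (one to each side of the mirror).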
When you balance the cost of the extra disks needed for RAID 10 against what it would cost your business in time to recover, data loss and business continuity, that cost may pale into insignificance. This is on top of the benefits you're likely to get in performance, especially with I/Os.
Usual caveat though: it all depends. What's your risk? What can you afford?
rgds iwg
February 6, 2007 at 6:08 am
I see your point. How much would a 1 TB RAID 10 array cost (assuming all 15K drives)?
I am interested in going to RAID 10 on my next array, but we are a little limited on resources.
Do you have a ballpark figure for how much faster RAID 10 would be over RAID 5? This is assuming a 30 GB database with about a 50/50 split between reading and writing. I understand the dependability issue, but is it faster by an order of magnitude? (I know this is an 'it depends' question, but I just want a general figure.)