March 11, 2011 at 6:58 am
We use two commonly available stored procedures for reindexing and updating statistics. The reindexing/rebuilding usually takes 6 hours and update stats 14 hours. The database is about 900GB with 35 million records now; by next year it may have twice that. Rebuild vs. reorganize is decided by fragmentation: over 30% fragmentation = rebuild. These are done "online" since our system is considered 24/7.
Update stats is run only on the statistics "needing it" and is done with FULLSCAN due to problems we've had with lower sample rates (poor execution plans with incorrect row-count estimates).
I can't get approval to do any of this on weeknights. My first thought was to increase the fragmentation threshold for rebuilds and drop FULLSCAN down to a high sample rate, but I'm reluctant to just guess at that. We don't have a true dev system where we can replicate the Monday-morning business load, so any missteps could lead to poor performance and lots of help desk calls.
Management just seems to think I'm uninformed and that "Amazon can run a 24x7 system and maintain it... why can't we?"
We seem to have solved a major IO problem last weekend so we'll see how long it runs this weekend.
SQL 2005 64-bit SP4 Enterprise on an active/passive cluster running Server 2008 R2 64-bit, 128GB RAM with SQL Server allowed 82GB.
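For reference, the logic in those procs is roughly this (a simplified sketch, not our actual code; the index and table names are placeholders):

-- Find indexes over the 30% rebuild threshold (sketch only; names are illustrative).
SELECT OBJECT_NAME(ips.object_id)  AS table_name,
       i.name                      AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM   sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN   sys.indexes AS i
       ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE  ips.avg_fragmentation_in_percent > 30   -- current rebuild threshold
  AND  ips.page_count > 1000;                  -- skip tiny indexes

-- For each index over the threshold (Enterprise Edition lets us stay online):
ALTER INDEX [IX_SomeIndex] ON [dbo].[SomeTable] REBUILD WITH (ONLINE = ON);

-- Statistics are then refreshed with a full scan, as described:
UPDATE STATISTICS [dbo].[SomeTable] WITH FULLSCAN;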
March 11, 2011 at 7:07 am
I'm not sure about the consequences of running update stats (locking-wise), but maybe you could move to a model where you're constantly updating stats.
You could then handle those pesky plans that need absolutely fresh stats during the day, and then at "night" or any slower time, update the stats on the other, bigger tables that require more resources.
I don't know which script you use, but SQL Fool's script is highly customizable (exclusions, timeouts, end time, etc.). Maybe you could use that for inspiration as well.
Moreover, when you approach your boss about it, make sure you tell them this is a one-time test and that it won't be implemented long term without proving it's not harming the system. It's easier to get approval that way.
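As a starting point, something like this shows which stats are actually going stale (a rough sketch; rowmodctr is only approximate on SQL 2005, and the filtering here is purely illustrative):

-- List statistics/indexes with many modifications since their last update (SQL 2005).
SELECT OBJECT_NAME(st.id)          AS table_name,
       st.name                     AS stats_or_index_name,
       st.rowmodctr                AS rows_modified_since_update,
       STATS_DATE(st.id, st.indid) AS last_updated
FROM   sys.sysindexes AS st
WHERE  st.indid > 0
  AND  OBJECTPROPERTY(st.id, 'IsUserTable') = 1
  AND  st.rowmodctr > 0
ORDER BY st.rowmodctr DESC;

You could then feed the worst offenders from a list like that into targeted UPDATE STATISTICS calls each night instead of touching everything at once.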
March 11, 2011 at 7:49 am
Current DB size is 900GB and you are saying it could double in the next year. You should also think about the following steps:
1. Spread your DB across multiple disks.
2. Think about splitting the DB.
3. Segregate your tables according to fragmentation percentage. I am sure there are some tables that could be defragmented on a weekly basis.
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
March 11, 2011 at 7:53 am
We have numerous SQL data files for this database, on separate NetApp LUNs. However, in recent discussions here and elsewhere, the value of splitting the database out into numerous files and filegroups is questionable when an intelligent disk system like NetApp manages the LUNs.
We are looking at putting one of our largest clients on their own database and separate hardware when they convert from our legacy systems this fall. The immediate problem is reducing the weekend maintenance window (not really a true maintenance window, because nothing is shut down during that time).
So far approval is not available for small weeknight "windows" to run update stats.
March 11, 2011 at 7:57 am
I would look at breaking down the process and doing a chunk of it each night. Considering the size of your system, you should probably be updating the stats a bit more often anyway. You'll have to run UPDATE STATISTICS on individual tables or even indexes & stats, instead of running sp_updatestats. That will allow you to run it on a series of tables one night, then a different series the next night, etc.
Any time your bosses start talking about Amazon running 24/7, just ask them how much money they want to spend, because Amazon spends several large fortunes on infrastructure.
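Something along these lines, for example (the table names are placeholders, and whether you keep FULLSCAN or drop to a sampled rate is something only your own testing can answer):

-- Sketch: spread stats maintenance across weeknights instead of one weekend run.
-- Monday night: the most volatile tables
UPDATE STATISTICS dbo.Orders       WITH FULLSCAN;
UPDATE STATISTICS dbo.OrderDetails WITH FULLSCAN;

-- Tuesday night: the next group
UPDATE STATISTICS dbo.Customers    WITH FULLSCAN;
UPDATE STATISTICS dbo.Payments     WITH FULLSCAN;

-- A single statistic or index can also be targeted if one plan misbehaves:
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerId WITH FULLSCAN;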
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
March 11, 2011 at 7:57 am
Bhuvnesh (3/11/2011)
Current DB size is 900GB and you are saying it could double in the next year. You should also think about the following steps: 1. Spread your DB across multiple disks.
2. Think about splitting the DB.
3. Segregate your tables according to fragmentation percentage. I am sure there are some tables that could be defragmented on a weekly basis.
What's the impact on your system? Are you dropping below required perf levels?
Have you actually tested it?
March 11, 2011 at 8:03 am
... and seriously, consider asking nicely for a one-off test to solve a problem your manager is having. I've never had anyone say no to a proposition like that.
March 11, 2011 at 8:13 am
Can you elaborate on the "We seem to have solved a major IO problem last weekend so we'll see how long it runs this weekend."?
Are there typical slowdowns / app errors you receive?
What's the pressure this instance experiences (CPU bound / IO bound / RAM bound)?
Can you split the maintenance so that, for example, you only have to handle the bigger tables / indexes on the weekend and the smaller ones during the rest of the week?
If you have partitioning, are your indexes aligned so you can perform index maintenance at partition level?
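A quick way to check whether anything is even partitioned (an illustrative query only, not something specific to your setup):

-- Indexes with more than one partition; only these can benefit from partition-level maintenance.
SELECT OBJECT_NAME(i.object_id) AS table_name,
       i.name                   AS index_name,
       COUNT(*)                 AS partition_count
FROM   sys.indexes    AS i
JOIN   sys.partitions AS p
       ON p.object_id = i.object_id AND p.index_id = i.index_id
WHERE  OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
GROUP BY i.object_id, i.name
HAVING COUNT(*) > 1;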
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
March 11, 2011 at 8:28 am
Ninja's_RGR'us (3/11/2011)
What's the impact on your system? Are you dropping below required perf levels?
Have you actually tested it?
Actually, what I posted are just some suggestions to improve the DB's performance and manageability.
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
March 11, 2011 at 8:33 am
ALZDBA (3/11/2011)
Can you elaborate on the "We seem to have solved a major IO problem last weekend so we'll see how long it runs this weekend."?
Our primary problem has been IO: high average read waits on the primary client database, and very high average write waits on tempdb. Much of this was apparently solved by turning off the HBA (host bus adapter) network cards to our NetApp disk array along with MPIO, and using the Microsoft iSCSI Initiator configuration instead. I'm not a network guy, but that's how it was explained to me. Previously we had lots of network errors with packets being resent.
Are there typical slowdowns / app errors you receive?
I think previously most of our issues were with IO performance.
What's the pressure this instance experiences (CPU bound / IO bound / RAM bound)?
SQL Server is currently allowed to use 82GB of memory, with the plan cache typically using 7.5GB. As I said, IO was the problem. Monday we saw low IO and CPU maxed out for extended periods; freeing the plan cache fixed it. No major problems since.
Can you split the maintenance so that, for example, you only have to handle the bigger tables / indexes on the weekend and the smaller ones during the rest of the week?
So far no approval for weeknight maintenance, I think mainly because various batch processes run at night.
If you have partitioning, are your indexes aligned so you can perform index maintenance at partition level?
I don't believe we have that kind of partitioning, just tables and indexes split out into several filegroups and spread over about 18 data files.
March 11, 2011 at 8:34 am
We're definitely not Amazon. Management seems to think we can get the maintenance done quickly no matter how large the system. And the reason we can't is I'm not knowledgeable enough.
March 11, 2011 at 8:38 am
Indianrock (3/11/2011)
We're definitely not Amazon. Management seems to think we can get the maintenance done quickly no matter how large the system. And the reason we can't is I'm not knowledgeable enough.
Well yeah, buy 10 extra servers, set up load balancing, replication, clusters, extra SANs, etc.
Once that's done, take one server down at a time and maintain it. It can be done any time you want, because you still have servers online to handle the demand.
Got a couple million $ budget for this??? Didn't think so :w00t:.
March 11, 2011 at 8:44 am
Have you implemented table partitioning? With index rebuilds of only the partitions that are fragmented, you should be able to greatly reduce the amount of time needed for maintenance.
Also, if you are rebuilding indexes rather than just defragmenting (reorganizing) them, you should not have to update statistics on those indexes, since a rebuild refreshes the index statistics as part of the operation.
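A sketch of what partition-level maintenance looks like (the object names and partition number are placeholders; note that on SQL Server 2005 a single-partition rebuild cannot run with ONLINE = ON, only a whole-index rebuild can):

-- Rebuild just one partition of a partitioned index (offline on SQL 2005).
ALTER INDEX [IX_BigTable_SomeKey] ON [dbo].[BigTable]
REBUILD PARTITION = 3;

-- Reorganize is always online and can also be limited to one partition:
ALTER INDEX [IX_BigTable_SomeKey] ON [dbo].[BigTable]
REORGANIZE PARTITION = 3;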