August 19, 2013 at 2:55 pm
I want to know your opinions on what to do when a customer says the database is slow today.
Usually I would:
*Check for blocking using the sp_who2
*Check for statistics out of date (and perform update statistics)
*Check for wait types
*Identify fragmentation % (and then do an index reorg or rebuild accordingly)
*Run profiler and identify deadlocks (or enable deadlock traceflags)
*Run missing indexes script and check if any indexes are required
*Run unused indexes script and check for indexes with 0 reads and delete them
*Identify queries that are taking longer time to complete and tune them with DTA and add recommended indexes or statistics in a test env. first and move it to prod database
*Check for IO bottlenecks by running the perfmon with appropriate counters from a remote machine
*Verify with the network admins if there was a change in the network
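For the update-statistics step in the list above, a minimal T-SQL sketch (dbo.Orders is just a placeholder table name; sp_updatestats uses default sampling and only touches stats with modified rows):

```sql
-- Refresh statistics on a single table with a full scan (can be heavy on large tables)
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;

-- Or refresh statistics across the whole current database
EXEC sp_updatestats;
```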
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
August 20, 2013 at 3:18 am
I would add baseline comparisons (OS & SQL) to the above as well; they allow you to see quite quickly whether something has changed from the norm.
Rod
August 20, 2013 at 3:45 am
Do you have a baseline of how your procedures should perform? We have a daily trace setup on each of our production servers that we load into our DBA management warehouse each day.
We retain 30 days of raw trace data and summarise the rest.
This means that, should someone report any slowness, we can compare how the procedures performed before with how they perform now and identify any with a significant change in performance.
MCITP SQL 2005, MCSA SQL 2012
August 20, 2013 at 5:51 am
Sapen, could you help me with some scripts or a step-by-step procedure for the points below? I am new to these tasks, and this will help me if I face such issues.
*Check for blocking using the sp_who2
Ans - I know how to check this
*Check for statistics out of date (and perform update statistics)
Ans - I know how to update statistics, but I don't know how to check for out-of-date stats (please tell me)
*Check for wait types
Ans - I don't know (please tell me)
*Identify fragmentation % (and then do an index reorg or rebuild accordingly)
Ans - I know how to rebuild & reorg an index, but I don't know how to identify fragmentation (please tell me)
*Run profiler and identify deadlocks (or enable deadlock traceflags)
Ans - I know how to run Profiler.
*Run missing indexes script and check if any indexes are required
Ans - I need a missing index script (please tell me)
*Run unused indexes script and check for indexes with 0 reads and delete them
Ans - I need an unused index script (please tell me)
*Identify queries that are taking longer time to complete and tune them with DTA and add recommended indexes or statistics in a test env. first and move it to prod database
Ans - How do I check which queries are taking longer, and how can we recommend indexes or stats? (Please tell me)
*Check for IO bottlenecks by running the perfmon with appropriate counters from a remote machine
Ans - How do I do this?
*Verify with the network admins if there was a change in the network
Ans - I know how to do this.
Sapen, please don't mind that I am asking you for so much, but if you have scripts & procedures available, please do help 🙁
Awaiting your reply... please help me when you are free
************************************
Every Dog has a Tail !!!!! :-D
August 20, 2013 at 6:02 am
*Check for wait types
Ans - SELECT * FROM sysprocesses WHERE SPID > 50 and WaitTime > 0
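For an instance-wide picture of what the server has been waiting on (cumulative since the last restart), the wait stats DMV can be used as well. A minimal sketch; the exclusion list of benign background waits below is far from complete:

```sql
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'LAZYWRITER_SLEEP', N'SLEEP_TASK',
                        N'SQLTRACE_BUFFER_FLUSH', N'WAITFOR',
                        N'BROKER_TASK_STOP', N'CLR_AUTO_EVENT')
ORDER BY wait_time_ms DESC;   -- biggest cumulative waits first
```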
*Identify fragmentation % (and then do an index reorg or rebuild accordingly)
Ans - I know how to rebuild & reorg an index, but I don't know how to identify fragmentation (please tell me)
http://technet.microsoft.com/en-us/library/ms188917.aspx
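The DMV behind that article can be queried directly. A sketch; the 1,000-page floor and the 5%/30% thresholds are common rules of thumb, not hard rules:

```sql
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 5
  AND ips.page_count > 1000   -- small indexes are rarely worth defragmenting
ORDER BY ips.avg_fragmentation_in_percent DESC;
-- Rough guide: 5-30% -> ALTER INDEX ... REORGANIZE; over 30% -> ALTER INDEX ... REBUILD
```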
*Run missing indexes script and check if any indexes are required
Ans - Need Missing Index Scripts, (Please tell me)
http://technet.microsoft.com/en-gb/library/ms345524(v=sql.105).aspx
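The missing index DMVs described in that article can be combined along these lines. A sketch; treat the output as hints to evaluate, not as indexes to create blindly:

```sql
SELECT TOP (25)
       mid.statement                      AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details     AS mid
JOIN sys.dm_db_missing_index_groups      AS mig
  ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
  ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;  -- crude impact ranking
```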
*Run unused indexes script and check for indexes with 0 reads and delete them
Ans - I need an unused index script (please tell me)
http://sqlserverplanet.com/dmvs/find-index-usage
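A sketch along the lines of that script, using sys.dm_db_index_usage_stats. Note that these counters reset when the instance restarts, so a 0-read index may simply not have been used since the last restart:

```sql
SELECT OBJECT_NAME(i.object_id)     AS table_name,
       i.name                       AS index_name,
       ISNULL(ius.user_seeks, 0)    AS user_seeks,
       ISNULL(ius.user_scans, 0)    AS user_scans,
       ISNULL(ius.user_lookups, 0)  AS user_lookups,
       ISNULL(ius.user_updates, 0)  AS user_updates   -- writes still paying for the index
FROM sys.indexes AS i
LEFT JOIN sys.dm_db_index_usage_stats AS ius
  ON ius.object_id   = i.object_id
 AND ius.index_id    = i.index_id
 AND ius.database_id = DB_ID()
WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
  AND i.type_desc = 'NONCLUSTERED'
  AND ISNULL(ius.user_seeks + ius.user_scans + ius.user_lookups, 0) = 0
ORDER BY table_name, index_name;
```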
*Identify queries that are taking longer time to complete and tune them with DTA and add recommended indexes or statistics in a test env. first and move it to prod database
Ans - How to Check queries taking longer time & how can we recommend Indexes or stats, (Please tell me)
Capture traces to get a baseline performance figure for your queries and procedure calls. Store the results so you can compare them to a trace captured while you are having performance issues. Once you have compared them and found any with a significant difference in reads \ writes \ CPU \ duration, you can then look at the execution plan for the procedure or query. Explaining execution plans is a book in itself, so you will need to study up on this.
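If no baseline trace exists yet, the plan cache can give a rough list of the slowest statements since they were last compiled. A sketch (elapsed time is in microseconds, and the cache only reflects plans still in memory):

```sql
SELECT TOP (20)
       qs.execution_count,
       qs.total_elapsed_time / qs.execution_count  AS avg_elapsed_microsec,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(t.text,
                 qs.statement_start_offset / 2 + 1,
                 (CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(t.text)
                       ELSE qs.statement_end_offset
                  END - qs.statement_start_offset) / 2 + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS t
ORDER BY qs.total_elapsed_time / qs.execution_count DESC;
```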
*Check for IO bottlenecks by running the perfmon with appropriate counters from a remote machine
Ans - How to do this.
http://blogs.msdn.com/b/askjay/archive/2011/07/08/troubleshooting-slow-disk-i-o-in-sql-server.aspx
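Alongside perfmon (Avg. Disk sec/Read and Avg. Disk sec/Write are the usual counters to start with), SQL Server's own file-level IO stalls can be checked from a DMV. A sketch:

```sql
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.io_stall_read_ms,    -- total ms sessions waited on reads for this file
       vfs.num_of_writes,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id     = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;
```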
MCITP SQL 2005, MCSA SQL 2012
August 20, 2013 at 6:54 am
Thanks, Taylor, for the prompt reply.
I will go through all the docs soon, and if I come across anything, I will revert back.
Thanks a TON Man 🙂
************************************
Every Dog has a Tail !!!!! :-D
August 20, 2013 at 7:01 am
I believe that one of the most important steps has been missed altogether. You must first confirm that the database actually appears to be slow by logging onto the app/website and giving it a try yourself.
If the app or website is internally located and it runs fast, then the problem may not actually have a thing to do with the database. It could be a network problem on the customer side even if the app or website isn't internal. If they've saturated their own network or are having problems with their ISP, then your stuff will seem slow to them. This has been the case in about 90% of the customer complaints that we've had. The other 10% of the time was when someone ran a performance-challenged, resource-hungry batch file at the wrong time of the day, or when someone who knows better started a long-running transaction on some key tables.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 20, 2013 at 7:19 am
First question, always, slow as compared to what? You need to understand where to focus your efforts.
Assuming, as Jeff points out, we have indications of a database problem, then move on to troubleshooting. By and large your list is OK, but I'd rearrange the order of some of the steps and use different tools.
I like checking for blocking first because that is a frequent cause of "performance" problems. But sp_who2 is WAY out of date. Use the dynamic management objects such as sys.dm_exec_requests, sys.dm_exec_sql_text, and sys.dm_exec_query_plan. These give you a lot more useful information in a single step, rather than having to run sp_who2 to identify process IDs and then, what, query the input buffer? That's an old-school, pre-2005 approach.
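A minimal sketch of combining those objects to see blocked sessions and the statements involved:

```sql
SELECT r.session_id,
       r.blocking_session_id,   -- the session holding the lock
       r.wait_type,
       r.wait_time,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;
```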
I wouldn't immediately rebuild stats either, even if they are "out of date", unless you know that auto update has been turned off or manual updates have failed. Other things should be checked here: are queries running slow (or is there some other situation), and if they are slow, are they slow because of the stats (mismatches between the estimated and actual row counts, things like that)? And move this step a little further down the list.
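One way to check how stale statistics actually are, as a sketch (STATS_DATE only shows the last update date, not how many rows have changed since):

```sql
SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY last_updated;   -- oldest statistics first
```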
After blocked processes, I'd move on to wait stats: what is causing things to run slowly? And remember, just pulling up the wait stats isn't enough; you need to see changes over time.
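A crude way to see wait changes over time is to snapshot the cumulative counters, wait, and take the difference. A sketch; #wait_snap is a scratch temp table, and the 5-minute delay is arbitrary:

```sql
-- Snapshot the cumulative wait counters
SELECT wait_type, wait_time_ms
INTO #wait_snap
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:05:00';

-- Waits accumulated during the interval only
SELECT w.wait_type,
       w.wait_time_ms - s.wait_time_ms AS wait_ms_in_interval
FROM sys.dm_os_wait_stats AS w
JOIN #wait_snap AS s
  ON s.wait_type = w.wait_type
WHERE w.wait_time_ms - s.wait_time_ms > 0
ORDER BY wait_ms_in_interval DESC;

DROP TABLE #wait_snap;
```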
Exercise EXTREME caution when using the missing index information. It's frequently bad advice. Remember, those out of date statistics? Yeah, those can cause suggestions in the indexes that are utterly wrong.
I've got a much more detailed set of steps in my book on query tuning.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 20, 2013 at 7:52 am
And one more
* Check the query execution plans for the slow-running queries, see if there are any table scans, and add indexes appropriately (again, adding or dropping indexes should be done in a staging/test/dev env. and monitored for a few days before moving to production)
Great to see all your inputs. Thanks Again.
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
August 20, 2013 at 9:42 am
Do you have a baseline of how your procedures should perform?...
This means that should someone report any slowness we can compare how the procedures have performed before till now and identify any with a significant change in peformance.
I didn't quite get that. Can you please explain? Thanks
“If your actions inspire others to dream more, learn more, do more and become more, you are a leader.” -- John Quincy Adams
August 20, 2013 at 9:53 am
Well, for example how do you know if a procedure is taking up any more resources or taking any longer than usual without having a baseline figure?
As I mentioned before, you run traces to capture this information when all is well. When you do get a problem, you can compare the stats of the current calls to the procedure against those from when all was well, and quantify whether they are any better or worse.
A method I use is to capture a daily 10 minute trace from our production servers at the peak traffic time of the day and summarise that data, for example no. executions of each proc, avg \ max reads, avg \ max writes, avg \ max cpu, avg \ max duration for each procedure. This then gives me a baseline for comparison. In my particular setup this is stored in a data warehouse, with 30 days' worth of the traces available in detail and everything also summarised as described above going back 1 year, so it's easily and quickly accessible.
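A sketch of that daily summarisation step, assuming the trace has been loaded into a table called dbo.TraceData with the usual trace columns; both the table name and the column names here are hypothetical:

```sql
-- Summarise one day of loaded trace rows per procedure
SELECT CAST(StartTime AS DATE) AS trace_date,
       ObjectName,
       COUNT(*)      AS executions,
       AVG(Reads)    AS avg_reads,    MAX(Reads)    AS max_reads,
       AVG(Writes)   AS avg_writes,   MAX(Writes)   AS max_writes,
       AVG(CPU)      AS avg_cpu,      MAX(CPU)      AS max_cpu,
       AVG(Duration) AS avg_duration, MAX(Duration) AS max_duration
FROM dbo.TraceData
GROUP BY CAST(StartTime AS DATE), ObjectName;
```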
People can make changes to procedures via releases, stats can become out of date, and bad execution plans can arise. These are some of the many reasons a procedure's performance may degrade, but without knowing how it should perform you have little to compare against.
The same logic applies to perfmon metrics as well.
MCITP SQL 2005, MCSA SQL 2012
August 20, 2013 at 10:36 am
I should point out that the organisation I work for has quite habitual usage patterns, so taking the trace for 10 minutes per day covers 99% of all procedure calls made. This may not necessarily suit your own use of your databases.
MCITP SQL 2005, MCSA SQL 2012