April 27, 2015 at 4:43 am
Hi all
I have an upcoming task of trying to improve the overall performance of a database.
Our setup is that we have a Windows product based on a SQL Server database. The customers have their own installation of SQL Server. We provide the schema and maintain it through our upgrade procedure.
We have a few candidates for change that we think will make improvements, but I would like to do some up-front measurements on the production environment at the customer site.
What I am looking for here, is what indicators I should focus on.
What I would like to know, if possible, is:
- What queries/tables take up the most processing time on the SQL Server (to focus on what puts the most load on the server)
- What is the memory load over time
- Are indexes being loaded/unloaded all the time (or do we have sufficient memory to hold all our indexes in memory - or at least the most commonly used ones)
- What is the average disk I/O
- What is the network performance/usage
I do not have direct access to the specific SQL Server instance, so I would like to be able to construct an email explaining which numbers I am interested in - and preferably instructions on how to collect them.
The thing I expect to be our biggest challenge is that our indexes cover too much data relative to what is actually being asked for. Gathering some numbers on memory load and index load/unload could either back this thesis up - or mark it as not urgent.
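To make the email concrete, I am thinking of including something roughly like this for the index/memory question (just a sketch based on the buffer pool DMVs - it shows how much of each index in the current database actually sits in memory; pages are 8 KB each):

-- Rough sketch: buffer pool usage per table/index in the current database.
SELECT o.name AS table_name,
       i.name AS index_name,
       COUNT(*) * 8 / 1024 AS buffer_pool_mb
FROM sys.dm_os_buffer_descriptors AS bd
JOIN sys.allocation_units AS au
    ON au.allocation_unit_id = bd.allocation_unit_id
JOIN sys.partitions AS p
    ON (au.type IN (1, 3) AND p.hobt_id = au.container_id)
    OR (au.type = 2 AND p.partition_id = au.container_id)
JOIN sys.objects AS o
    ON o.object_id = p.object_id
JOIN sys.indexes AS i
    ON i.object_id = p.object_id AND i.index_id = p.index_id
WHERE bd.database_id = DB_ID()
GROUP BY o.name, i.name
ORDER BY buffer_pool_mb DESC;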
And of course, by measuring numbers up front, I will also be able to see if things actually improve afterwards.
I hope some of you understand my questions - and what I am looking for.
Best regards
/Anders
April 27, 2015 at 5:32 am
I do not have direct access to the specific SQL Server instance, so I would like to be able to construct an email explaining which numbers I am interested in - and preferably instructions on how to collect them.
All I can tell you is that is going to be a long email if you want to cover all of that in it. 😀
What you are basically looking for is capturing a baseline of the database server to find the pain points of the environment. You do this with Performance Counters and/or Extended Events/a server-side trace. There are basic counters and events that can be captured to show you what the overall workload is like; you just need to find the ones that match up to the information you need.
You can also capture wait stats, waiting tasks, file stats (IO latency), index usage, etc. to see what is going on before and after the changes are made. With the wait stats, I would clear those after making the changes and then check them again once the system has run for a period of time. You can then compare what SQL Server is waiting on most before and after the changes.
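For example, a bare-bones Extended Events session (session name, file name, and the one-second threshold below are just placeholders - adjust them to your workload) could look something like this:

-- Capture completed batches and RPC calls that run longer than 1 second
-- (duration is in microseconds), written to an event file for later analysis.
CREATE EVENT SESSION [BaselineWorkload] ON SERVER
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE duration > 1000000),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.database_name, sqlserver.sql_text)
    WHERE duration > 1000000)
ADD TARGET package0.event_file (SET filename = N'BaselineWorkload.xel',
                                max_file_size = 256,
                                max_rollover_files = 4);

ALTER EVENT SESSION [BaselineWorkload] ON SERVER STATE = START;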
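And for the wait stats and file stats part, something along these lines (assuming whoever runs it has VIEW SERVER STATE and the rights to clear the counters) is all it takes:

-- Snapshot the top waits before the change (filtering out a few benign ones).
SELECT TOP (20)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;

-- Reset the accumulated wait stats once the change is deployed,
-- then re-run the query above after a representative period.
DBCC SQLPERF (N'sys.dm_os_wait_stats', CLEAR);

-- Average IO latency per database file.
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs;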
Shawn Melton
Twitter: @wsmelton
Blog: wsmelton.github.com
Github: wsmelton
April 27, 2015 at 5:40 am
All I can tell you is that is going to be a long email if you want to cover all of that in it. 😀
What can I say... I like typing... 😉
Fun aside, I know it's a non-trivial task to figure out what hurts the most: lack of memory, lack of disk I/O, lack of network bandwidth, latency, or something else. And on top of that, a mature application which, let's face it, doesn't always do things in the most optimal way.
But basically, you're telling me that looking over the standard performance counters and reading up on Extended Events monitoring will take me far in my search for numbers.
I will take a look at it tomorrow in our test environment where I have a restored copy of the database. And then I will have a talk with the DBA at the customer site to see if he can find the time to give me the numbers from production.
Thanks for your reply.
/Anders
April 27, 2015 at 5:46 am
April 27, 2015 at 5:57 am
I will take a look at those scripts as well in my own little sandbox at first.
Then I can try to push them on to the customer DBA as well.
April 27, 2015 at 7:06 am
Berry's stuff is great. Look at Ozar's script for indexes.
April 27, 2015 at 8:24 am
Have a read through of at least chapter 1 of http://www.red-gate.com/community/books/accidental-dba, as well as this series
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 29, 2015 at 7:30 am
Hi all
Thank you very much for all the resources.
I have been reading through a great deal of it by now and have forwarded Glenn Berry's scripts to the customer. They had already run the Brent Ozar suite before I wrote this post and found some missing indexes.
But my concern is that we are more in the league of Gila's articles and the Accidental DBA book this time.
I don't think indexes will be sufficient; I think we are doing something wrong in our application - and need to rethink some of our concepts.
Hopefully when I get the results from our customer I will be enlightened and get something to start from.
Once again, thank you very much for the pointers to the resources - it's a great help.
Best regards
/Anders
April 29, 2015 at 7:56 am
I would be looking at sp_blitzcache or Pinal Dave's query, which shows what is chewing up your cache. Look at the statements it returns and check their query plans. It may help identify code issues. We had some stored procs which loved to use table variables (which MS will tell you can become an issue with large data sets). We changed two of them which were doing hundreds of thousands of logical reads over to true #temp tables and took the reads down to 2,000. I am sure this is an obvious one, but what is your page life expectancy? If it is poor and you are pounding your disk subsystem, more memory along with better code may be the answer.
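If you just want a quick look without installing anything, something roughly like this (a much simplified version of what those scripts report) shows the top cache consumers and the current page life expectancy:

-- Statements doing the most logical reads, straight from the plan cache.
SELECT TOP (20)
       qs.total_logical_reads,
       qs.execution_count,
       qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;

-- Page life expectancy (in seconds) from the performance counter DMV.
SELECT [object_name], counter_name, cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = N'Page life expectancy';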
April 29, 2015 at 8:01 am
April 29, 2015 at 1:34 pm
Anders Hansen (4/29/2015)
I don't think indexes will be sufficient, I think we are doing something wrong in our application - and need to rethink some of our concepts.
If you want a 3rd party to look at it and are either in South Africa or happy to work across borders, drop me a PM. This is the kind of thing I do for a living.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
April 30, 2015 at 12:22 am
You may find that the application is doing thousands of calls to the DB the way only Vendor applications can.
You are not that far from the truth. 🙂
We have a few obvious pain points caused by bad design in a 5+ year old application that has seen quite a large increase in data and users.
So we do have a few candidates already for improvements - my experience just tells me to get measurements to back them up before starting to improve anything. Most often what you think is the biggest hurdle might not be - and the trouble may not be the big tables, but the excessive number of queries.
E.g. we have a rules engine to do validations in the user interface, which is overly trigger-happy when it comes to verifying against the database. Some screens do quite a lot of roundtrips on each keystroke - and with a rising user base this tends to become hurtful... :w00t:
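On that note, I will probably also ask them to run the plan cache query ordered by execution count instead of reads, to spot the chattiest statements (same DMV as mentioned above, just a rough sketch):

-- Most frequently executed statements in the plan cache - a decent proxy
-- for "too many roundtrips" from the application.
SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_microsec,
       st.text AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.execution_count DESC;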
And Gila, thanks for your offer; this time around we will try to handle it ourselves though. As I said, we have some candidates.
Best regards
/Anders