So you are a SQL Server DBA, and you are responsible for a database that gives you a hard time. Users constantly complain about poor performance, and the developers blame the database (what else?). It's time to dig in and do some serious performance tuning once and for all. But how? Where do you start?
First, you need to identify the heaviest queries running in your database. You can find them in two ways. The first is to monitor query statistics using DMVs, and the second is to run an Extended Events session or a SQL trace for a period of time and collect "Statement Completed" events. Each method has its own pros and cons, but both can lead you to the heaviest queries in your database.
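For example, here is a common way to pull the heaviest statements out of the plan cache with DMVs. This is a minimal sketch that sorts by total CPU time; you could just as well sort by reads or duration:

```sql
-- Top 10 cached statements by total CPU time.
-- Note: sys.dm_exec_query_stats only covers plans that are still in
-- cache, so the numbers reset when a plan is evicted or the instance
-- restarts.
SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_elapsed_time / 1000 AS total_duration_ms,
    qs.total_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```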
Once you have a problematic query in hand, you need to execute it and examine its actual execution plan. That usually means executing the query in SQL Server Management Studio (SSMS). At this point, you have to be aware of an issue with the ARITHABORT setting: SSMS turns it ON by default, while most application connections run with it OFF, so SSMS and the application can end up using different cached plans. This is a very common pitfall, which might lead you in the wrong direction and result in a lot of frustration.
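To reproduce what the application actually experiences, match its session setting before running the query in SSMS. A minimal sketch:

```sql
-- Most application drivers connect with ARITHABORT OFF, while SSMS
-- sets it ON. Different SET options produce separate plan cache
-- entries, so match the application's setting before testing:
SET ARITHABORT OFF;

-- ... execute the problematic query here ...

-- Verify the current session's setting:
SELECT arithabort
FROM sys.dm_exec_sessions
WHERE session_id = @@SPID;
```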
The next step in your performance tuning journey is to analyze the actual execution plan of the query (the correct one) and find the bottlenecks in it. There are many things you need to know about execution plan analysis, but where do you begin? A common mistake is to begin with the operator with the highest relative cost. This is wrong for several reasons. The main reason is that the cost is always an estimate, even when you look at the actual plan. And in most cases, the root of the performance problem is poor estimations, so the cost numbers are probably lying to you. The right place to begin your execution plan analysis is to compare the estimated vs. actual number of rows in the various operators. If you find a big gap between the estimated and the actual number of rows, there is a good chance that this is the cause of the performance issue, and all that's left is to understand why and fix it.
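One way to capture the actual plan with its runtime row counts, as a sketch (dbo.Orders and the predicate are hypothetical stand-ins for your problematic query):

```sql
-- Return the actual execution plan as XML alongside the results:
SET STATISTICS XML ON;

SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = 42;   -- hypothetical query under investigation

SET STATISTICS XML OFF;
-- In the plan XML, compare each operator's EstimateRows attribute
-- with the ActualRows counters under RunTimeInformation.
```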
There can be all kinds of reasons for poor estimations, such as parameter sniffing, the use of table variables, or outdated statistics. Each one has its own remedies and solutions. For example, if the problem is outdated statistics, you can update statistics more often, use filtered statistics, enable trace flag 2371 (or upgrade to SQL Server 2016, where its behavior is the default), etc. You can also instruct the optimizer how to execute the query by using query hints or plan guides. But you should be very careful with these tools, as they might constrain the optimizer and cause poor plans in the future in ways you didn't expect.
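For example, here is a sketch of how you might check how stale the statistics on a table are and update them if needed (dbo.Orders is a hypothetical table name):

```sql
-- How many rows changed since each statistics object was last updated?
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter   -- modifications since the last update
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID(N'dbo.Orders');

-- If they look stale, update them. FULLSCAN is the most accurate
-- (and most expensive) option:
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```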
An important aspect of performance tuning is measurement. You have to be consistent about how you measure performance. If you change something in order to improve performance, how do you know whether it really helped? Maybe it made things worse. And if it helped, by how much? Does it have a meaningful impact, or is it a small change that users don't even notice? Sometimes you might change something that really improves the query at hand but at the same time degrades the performance of other queries. So you need a consistent and reliable method for measuring query performance, and you should use it to benchmark your changes.
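A simple, repeatable approach is to wrap the query under test with the same measurement commands before and after every change. A minimal sketch:

```sql
-- Capture I/O and CPU/elapsed time for the query under test:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... execute the query under test here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- To measure from a cold cache, you can clear it first, but only on a
-- test server, never in production:
-- DBCC DROPCLEANBUFFERS;
-- DBCC FREEPROCCACHE;
```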
This is such an important topic, and there is so much to cover. That is why I created the Advanced Query Tuning in SQL Server online course. In this course, I go into detail about everything I mentioned here, and more, with extensive demos that you can download and play with. This course will take you many steps forward in your performance tuning journey and make you a query tuning professional. Pay only once, and get lifetime access to the course.