Performance tuning your SQL Server workloads has never been more critical. As workloads move into cloud services, their efficiency has a direct impact on recurring bills. A workload that consumes more resources than necessary costs more to host in the cloud and hurts your organization's bottom line.
You can do performance tuning for your SQL Server workloads with a free toolkit! SQLFacts is a free suite of tools (42 and counting) for SQL Server database professionals, with tools for database development, database administration, and performance tuning. I wrote a series of six articles about SQLFacts for SQLServerCentral, giving a general description of the toolkit. There's a list of those articles here.
This article is going to focus on the tools for performance tuning a SQL Server workload. The tools make it very easy to identify performance tuning opportunities and they put all the information you need right at your fingertips!
Here's a chart showing how to apply the SQLFacts tools to a variety of performance tuning activities:
The tools I have been using the most are IndexHistory, Indexer_SQLCode, IndexNeeds, IndexNeedsPlus, QueryHistory, QueryTracker, Threshold, and Sessions. They allow me to quickly find performance tuning opportunities. They also guide me as I make changes to indexes and/or queries. They are critical to my productivity.
There are three tools, in particular, that offer some unique performance tuning functionality. They include features that I have not seen any other party (person or company) even consider, much less deliver, at any price, let alone for free. The QueryTracker, Threshold, and IndexNeedsPlus tools are very interesting and very useful.
QueryTracker
The QueryTracker tool is a very simple idea, but I have never seen anything like it, not even a blog post script. The tool returns very useful information about a stored procedure by digging into the plan cache.
You specify a stored procedure near the top of the T-SQL code for the tool. It returns three result sets.
The first result set lists statistics at the stored procedure level.
The second result set lists statistics at the query level within the stored procedure. The output includes the query text, the cost estimate for the query, and a percentage for how much CPU time (or duration) the query uses within the stored procedure. As we all know, the query cost (relative to the batch) in an execution plan is not always an accurate reflection of CPU/run time. This result set shows exactly where the stored procedure is actually spending time.
The third result set lists the parameters for the stored procedure and the values passed in when an execution plan was initially created.
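To make that concrete, here is a minimal sketch of the kind of plan-cache digging involved, using sys.dm_exec_procedure_stats and sys.dm_exec_query_stats. It is not the QueryTracker code itself, and dbo.MyProc is a placeholder name. (The parameter values in the third result set come from the ParameterList element of the cached plan XML, which this sketch omits.)

```sql
-- Placeholder procedure name; substitute the procedure you want to inspect.
DECLARE @proc nvarchar(256) = N'dbo.MyProc';

-- Result set 1 (approximation): statistics at the stored procedure level.
SELECT ps.execution_count,
       ps.total_worker_time / 1000  AS total_cpu_ms,
       ps.total_elapsed_time / 1000 AS total_duration_ms
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
  AND ps.object_id = OBJECT_ID(@proc);

-- Result set 2 (approximation): statistics per query within the procedure,
-- including each query's share of the procedure's total CPU time.
SELECT SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
           ((CASE qs.statement_end_offset
                 WHEN -1 THEN DATALENGTH(st.text)
                 ELSE qs.statement_end_offset
             END - qs.statement_start_offset) / 2) + 1) AS query_text,
       qs.total_worker_time / 1000 AS total_cpu_ms,
       CAST(100.0 * qs.total_worker_time
            / NULLIF(SUM(qs.total_worker_time) OVER (), 0)
            AS decimal(5, 2)) AS pct_of_proc_cpu
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.dbid = DB_ID()
  AND st.objectid = OBJECT_ID(@proc)
ORDER BY qs.total_worker_time DESC;
```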
The QueryTracker tool is a great way to very quickly focus your attention on the query that needs improvement. It also allows you to easily verify that your query/index changes are having the desired effect.
Threshold
The Threshold tool has a few different purposes:

- It can be used to gain a sense of your overall workload.
- It can be used to identify specific performance tuning opportunities.
- It can be used to determine a specific value for Cost Threshold for Parallelism (CTFP). Wait, what?
The Threshold tool returns two result sets.
The first result set breaks the workload (queries) into various percentiles based on CPU time (or duration). It shows the average/minimum/maximum time for each percentile. The vast majority of queries for a typical OLTP workload are very quick, so the lowest several percentiles may not be very informative. The highest several percentiles give you a sense of your overall workload. They tell you how much of your workload takes various amounts of CPU time (or duration).
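As a rough illustration of the idea (not the Threshold tool's actual logic), here is a minimal sketch that buckets cached queries into CPU-time percentiles with NTILE over sys.dm_exec_query_stats:

```sql
-- Bucket cached queries into 100 percentiles by average CPU time per execution.
WITH q AS
(
    SELECT qs.total_worker_time * 1.0 / qs.execution_count / 1000.0 AS avg_cpu_ms,
           NTILE(100) OVER
               (ORDER BY qs.total_worker_time * 1.0 / qs.execution_count) AS pctile
    FROM sys.dm_exec_query_stats AS qs
)
SELECT pctile,
       COUNT(*)        AS query_count,
       AVG(avg_cpu_ms) AS avg_cpu_ms,
       MIN(avg_cpu_ms) AS min_cpu_ms,
       MAX(avg_cpu_ms) AS max_cpu_ms
FROM q
GROUP BY pctile
ORDER BY pctile;
```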
The second result set lists all the queries at, or above, the average/minimum/maximum for a given (configurable) percentile. The output includes the query text, the object for the query, statistics for the query, and the cost estimate for the query. This information can be used to identify performance tuning opportunities. If the list contains queries at (and above) the 95th percentile then all of them could be potential opportunities. Further, very large values for CostEstimate, especially where Seconds_AVG seems disproportionately low, could indicate issues with cardinality estimation. The most interesting use of this information is for determining a specific value for CTFP.
The conventional wisdom is that a CTFP of 5 is too low for modern server hardware because work (executing queries) can be completed faster now than when the default value of 5 was originally established. I question the logic of this. I might even call it a myth. GASP! Heresy!
The conventional wisdom does not make sense to me.
The CTFP is the point at which the anticipated benefit of a parallel plan over a serial plan is judged to outweigh the overhead of coordinating parallelism. It's CTFP (for cost), not DTFP (for duration). The cost of a query is an estimate of how much work will need to be performed, not how long it will take to perform the work. The estimated cost in a query plan does not vary with the capability of the server, at least not in my (very limited) testing. If the cost, which is compared to the CTFP, does not vary then why would the CTFP?
Why would one assume the work can be done faster on modern server hardware, but coordinating parallelism always happens at the same old speed? If executing queries and coordinating parallelism both run faster on a better server then the CTFP does not need to change. I assume Microsoft understands CTFP and would not ship SQL Server with a default value for CTFP that makes the system run slower than it could. They understand that SQL Server will automatically curtail parallelism if the system is busy. The CTFP allows an administrator to keep more CPU resources in reserve for bursts of activity.
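For reference, CTFP is an instance-wide setting changed through sp_configure; the value below is purely an example:

```sql
-- View and change the instance-wide Cost Threshold for Parallelism.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 25;  -- example value only
RECONFIGURE;
```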
If you need to curtail parallelism beyond what SQL Server does automatically then how do you determine an appropriate value for CTFP?
I recently did a poll on LinkedIn to see how others set the CTFP on their systems. There were 209 responses. Here's the poll with the breakdown of responses:
How do you set your Cost Threshold for Parallelism?
25% - keep default value (5)
14% - set it to N and forget it
38% - set it to N and adjust it
23% - set it using empirical data
The typical advice for setting the CTFP is to use an arbitrary value much greater than 5 (with 50 mentioned most often), test your workload, and adjust accordingly. I'm not sure what it means to test your workload after changing the CTFP, and nobody offers any detail. What's the criteria for too little parallelism or too much parallelism? The usual suggestion is to monitor for CXPACKET waits, but how much is too much? How can you be sure that fewer CXPACKET waits is because of raising the CTFP versus normal variation or SQL Server automatically curtailing parallelism?
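For context, the CXPACKET monitoring that is usually suggested amounts to checking aggregate wait statistics, something like this (CXCONSUMER was split off from CXPACKET in later versions):

```sql
-- Parallelism-related waits, cumulative since the last restart (or stats clear).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'CXPACKET', N'CXCONSUMER');
```

These counters are cumulative, which is part of why attributing a change in them to a CTFP adjustment is so difficult.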
There is no universal correct setting for the CTFP. The trick is to choose a value that results in some parallelism happening, but not an exorbitant amount. An appropriate value is dependent upon the workload for the server. It's invalid to say that a value of 5 is too low for an instance simply because of modern server hardware. A value of 5 may be just fine, if 99% of the queries have a cost below 5. A value of 50 may be too low, if 10% of the queries have a cost above 50. The empirical data you need for setting the CTFP is to see which queries would be affected by setting the CTFP to a particular value. I have seen only one blog post script along those lines, but you need more information than it provides.
The Threshold tool provides the empirical data you need. It lists the most expensive queries and shows you the cost estimate for each one. If the cost estimate exceeds the CTFP then the query is eligible for parallelism. You can choose a CTFP value that makes a small percentage of your queries eligible for parallelism while making the vast majority of your queries ineligible for parallelism. It's a much better approach than using an arbitrary value for the CTFP and then maybe adjusting it based on a fleeting condition.
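Here is a minimal sketch of that empirical approach, assuming you pull each cached plan's StatementSubTreeCost out of the plan XML (the Threshold tool's own logic may differ):

```sql
-- For a few candidate CTFP values, show what percentage of cached queries
-- would remain eligible for parallelism (cost estimate above the threshold).
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan'),
costs AS
(
    SELECT qp.query_plan.value('max(//StmtSimple/@StatementSubTreeCost)', 'float') AS cost_estimate
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
    WHERE qp.query_plan IS NOT NULL
)
SELECT c.candidate_ctfp,
       CAST(100.0 * SUM(CASE WHEN costs.cost_estimate > c.candidate_ctfp THEN 1 ELSE 0 END)
            / COUNT(*) AS decimal(5, 2)) AS pct_eligible
FROM costs
CROSS JOIN (VALUES (5), (25), (50), (100)) AS c(candidate_ctfp)
GROUP BY c.candidate_ctfp
ORDER BY c.candidate_ctfp;
```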
IndexNeedsPlus
The IndexNeedsPlus tool is a companion to the IndexNeeds tool. They both analyze the "missing index" information provided by SQL Server itself. There are two approaches to gathering the "missing index" information: using the DMVs or searching the plan cache. The IndexNeeds tool uses the DMVs. The IndexNeedsPlus tool searches the plan cache. The two methods, and the two tools, usually produce a lot of overlapping information, but not identical information.
The "missing index" information from DMVs was described as "useless" by a prominent member of the SQL Server community in July 2022. The reason given was an index suggestion is not, and cannot be, linked to a query. I do not regard an index suggestion as "useless" in the absence of being linked to a query, although such a link is definitely valuable context. I simply disagree that an index suggestion cannot be linked to a query. There's a new(ish) DMV, introduced with SQL Server 2019, which serves exactly that purpose. The new(ish) DMV has a couple limitations, but it does what supposedly cannot be done.
The "missing index" information from the plan cache is harder to access. There are various blog post scripts that make an effort, but all of them fall short of being really useful.
The IndexNeedsPlus tool takes the approach to another level.
The IndexNeedsPlus tool returns three result sets. Two of them contain the same rows, just ordered differently for convenience. Each row represents a query and a "missing index" suggestion for the query. The output includes the query text, the object for the query (if applicable), the cost estimate for the query, a CREATE INDEX statement, the familiar "missing index" impact (index_value), and a calculated value indicating an anticipated benefit.
The first result set contains the rows described above, ordered by the query.
The second result set contains the rows described above, ordered by the index.
The third result set contains each unique index suggestion, a total benefit, and a count of how many queries are associated with the index suggestion.
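For a sense of the raw material involved, here is a minimal sketch of mining cached plans for MissingIndexes elements; the IndexNeedsPlus tool extracts far more than this (the cost estimate, a ready-made CREATE INDEX statement, and the benefit calculation):

```sql
-- Find cached plans carrying "missing index" suggestions in their plan XML.
WITH XMLNAMESPACES (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT mi.node.value('(MissingIndex/@Table)[1]', 'nvarchar(128)') AS table_name,
       mi.node.value('@Impact', 'float') AS impact,
       st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY qp.query_plan.nodes('//MissingIndexes/MissingIndexGroup') AS mi(node)
WHERE qp.query_plan IS NOT NULL;
```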
Several different queries might be associated with the same index suggestion, or very similar index suggestions. A single query might be associated with several different index suggestions. An index suggestion might be completely new, or it might be very similar to an existing index. An index suggestion should not be blindly implemented. It should be compared with existing indexes to see if an existing index can be adjusted to cover the needs of the query. Further, the size of the suggested index and the anticipated benefit must be carefully considered.
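A minimal sketch of that comparison step, listing the key and included columns of every existing index on a table (dbo.MyTable is a placeholder):

```sql
-- Existing indexes on a table, with key columns first and included columns last.
SELECT i.name AS index_name,
       ic.key_ordinal,
       c.name AS column_name,
       ic.is_included_column
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
    ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
    ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID(N'dbo.MyTable')
ORDER BY i.name, ic.is_included_column, ic.key_ordinal;
```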
The IndexNeedsPlus tool is unique in two important ways:
- I have never seen anything else that pulls together so much information.
- I have never seen anything else that presents the information in such a convenient form.
The IndexNeedsPlus tool and the IndexNeeds tool are invaluable for finding opportunities to quickly and easily tune your SQL Server workloads. Index changes are often the best "bang for your buck" in the process.