February 5, 2015 at 7:43 am
I have a large database (several tables with hundreds of millions of rows) and I am running a query that performs joins on these tables to return some data. On Monday, this query was running fine. Starting on Wednesday, a query that had been running in a few seconds started consistently taking 30 minutes. An index seek on a 700-million-row table for some reason became an index scan. As far as I can tell, data volumes have not changed significantly.
So, my question is not how to tune the query.
My question is this: have people experienced cases where the SQL query optimizer starts to make different access path choices, and what drives this behavior? In other words, are there cases where, for no obvious reason, indexes stop being used and the only option is to rebuild the index?
February 5, 2015 at 8:08 am
What is your maintenance? Do you regularly rebuild/reorganize indexes?
Your case is typical for tables with stale (not recently updated) statistics.
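If stale statistics are the culprit, their age and churn can be checked, and refreshed, along these lines. This is a sketch: the table name is a placeholder, and sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / 2012 SP1 or later:

```sql
-- Check when statistics were last updated on the large table
-- (dbo.BigTable stands in for the OP's 700M-row table)
SELECT s.name AS stats_name,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.BigTable');

-- Refresh them with a full scan if they turn out to be stale
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;
```

A high modification_counter against an old last_updated date is the usual signature of this problem.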
Igor Micev,My blog: www.igormicev.com
February 5, 2015 at 8:27 am
Igor,
That is very likely the problem then. I am helping some people with this problem and I am not the database owner. For various reasons, I don't think their maintenance is as regular as it should be.
February 5, 2015 at 8:27 am
Could be parameter sniffing, or perhaps a plan flush that compiled poorly the next time; it's not the index itself.
February 5, 2015 at 8:30 am
Steve Jones - SSC Editor (2/5/2015)
Could be parameter sniffing, or perhaps a plan flush that compiled poorly the next time; it's not the index itself.
We identified the offending query in the stored procedure and analyzed it in SSMS. The estimated execution plan showed an index scan whereas it was previously showing an index seek.
February 5, 2015 at 9:52 am
Tom John-342103 (2/5/2015)
Steve Jones - SSC Editor (2/5/2015)
Could be parameter sniffing, or perhaps a plan flush that compiled poorly the next time; it's not the index itself.
We identified the offending query in the stored procedure and analyzed it in SSMS. The estimated execution plan showed an index scan whereas it was previously showing an index seek.
A few things. First, you want to get the actual execution plan, not the estimated one. In SSMS, right-click in the query window and select "Include Actual Execution Plan". It will likely be the same, but not always.
Second, I have never seen an index "go bad" in the way you have described. Sometimes they get fragmented and performance degrades by a small percentage, but not from seconds to several minutes.
Next, how is this query being executed? As a stored procedure? Ad-hoc SQL? If it's running as a stored proc, try running it with the recompile option and check the actual execution plan. Or you can run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS (caution: this will cause other queries to slow down, as they will need to recompile).
Last: can you post the query plan? If you have both the good and bad query plans, that would be best, but the bad one should do.
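Those suggestions can be sketched roughly like this (the procedure name and parameter are placeholders, not from the thread):

```sql
-- Force a fresh compile for one execution of the suspect proc
EXEC dbo.SuspectProc @SomeParam = 42 WITH RECOMPILE;

-- Or mark the proc for recompilation on its next run
EXEC sp_recompile N'dbo.SuspectProc';

-- Heavier hammer: clear the plan cache and the buffer pool.
-- Other queries will recompile and re-read pages from disk.
DBCC FREEPROCCACHE;
DBCC DROPCLEANBUFFERS;
```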
-- Itzik Ben-Gan 2001
February 5, 2015 at 10:28 am
Alan.B (2/5/2015)
Or you can run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS (caution: this will cause other queries to slow down, as they will need to recompile).
Not this! It should never be done on production. You can do that in a test environment.
Igor Micev,My blog: www.igormicev.com
February 5, 2015 at 10:49 am
Igor Micev (2/5/2015)
Alan.B (2/5/2015)
Or you can run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS (caution: this will cause other queries to slow down, as they will need to recompile).
Not this! It should never be done on production. You can do that in a test environment.
1. The OP did not specify what environment he was working in.
2. "Never?"... You can clear the plan cache in production provided that you understand the consequences and that they are acceptable. The correct answer to the question, "Should I ever run DBCC FREEPROCCACHE in Prod?" is It Depends.
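As a middle ground, a single cached plan can be evicted instead of the whole cache. A sketch, where the text filter and plan handle are illustrative only:

```sql
-- Find the plan handle for the offending query
SELECT cp.plan_handle, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SuspectProc%';

-- Evict just that one plan rather than the entire cache
-- (replace the literal below with the handle returned above)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
```

This forces a recompile only for the one problem query, which is far gentler on a busy server.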
-- Itzik Ben-Gan 2001
February 5, 2015 at 11:17 am
Could this simply be a case of more people running this query, more people logging onto the system, another resource-consuming query running at the same time, or any number of other similar factors?
Do you know the usage patterns of your system?
As an example, from 6 am till about 10 am, our usage is all reads. The writes spike just before noon, and reads take over again after lunch. Writes spike again about 4.
Why? When people first come in, they are getting all the data they need to do their jobs. They then "scramble" to get everything entered before going to lunch, and before going home for the day.
I can expand this to show usage patterns for the individual procedures. This can be correlated to a daily, weekly, or monthly time frame. Users are just doing different things at different times and days.
Similarly, the number of reports being run spike between 11:30 and 1. Again, why? The users kick off any number of reports, go to lunch, and the results are waiting for them when they come back.
If you have narrowed this to a specific proc, how is it written? Can it do multiple things depending upon the parameters? In this case, the users may be doing one thing on Mondays and Tuesdays, which uses an efficient execution plan. Things change on Wednesday, the proc is run differently, and a less-than-optimal plan is used.
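If that is what's happening, one common guard, shown here as a sketch with hypothetical table and parameter names, is to recompile the parameter-sensitive statement on every run:

```sql
-- If the proc behaves differently depending on its parameters,
-- per-statement recompilation avoids reusing a plan that was
-- sniffed for Monday's values on Wednesday's very different workload.
SELECT o.OrderID, o.OrderDate
FROM dbo.Orders AS o
WHERE o.CustomerID = @CustomerID
OPTION (RECOMPILE);
```

The trade-off is compilation cost on every execution, so this suits statements run dozens of times an hour, not thousands of times a second.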
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 5, 2015 at 11:23 am
Alan.B (2/5/2015)
Igor Micev (2/5/2015)
Alan.B (2/5/2015)
Or you can run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS (caution: this will cause other queries to slow down, as they will need to recompile).
Not this! It should never be done on production. You can do that in a test environment.
1. The OP did not specify what environment he was working in.
2. "Never?"... You can clear the plan cache in production provided that you understand the consequences and that they are acceptable. The correct answer to the question, "Should I ever run DBCC FREEPROCCACHE in Prod?" is It Depends.
Imagine a production environment with thousands of transactions per second and you run that. Blast.
Igor Micev,My blog: www.igormicev.com
February 5, 2015 at 11:29 am
Igor Micev (2/5/2015)
Alan.B (2/5/2015)
Igor Micev (2/5/2015)
Alan.B (2/5/2015)
Or you can run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS (caution: this will cause other queries to slow down, as they will need to recompile).
Not this! It should never be done on production. You can do that in a test environment.
1. The OP did not specify what environment he was working in.
2. "Never?"... You can clear the plan cache in production provided that you understand the consequences and that they are acceptable. The correct answer to the question, "Should I ever run DBCC FREEPROCCACHE in Prod?" is It Depends.
Imagine a production environment with thousands of transactions per second and you run that. Blast.
Yes, in this environment I would never run it.
-- Itzik Ben-Gan 2001
February 5, 2015 at 11:35 am
The problem was solved by running some index rebuilds. So frustrating when people aren't careful about performing routine maintenance.
February 5, 2015 at 12:03 pm
Tom John-342103 (2/5/2015)
The problem was solved by running some index rebuilds. So frustrating when people aren't careful about performing routine maintenance.
Index rebuilds = plan recompiles. There are a few other factors to consider. How about posting up the execution plan?
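For reference, either of the following invalidates cached plans that reference the table; the index and table names are placeholders:

```sql
-- Rebuilding an index also rebuilds its statistics with a full scan,
-- and invalidates cached plans that use the table
ALTER INDEX IX_BigTable_Key ON dbo.BigTable REBUILD;

-- Updating statistics alone is usually much cheaper and is often
-- enough to get the seek plan back without the rebuild
UPDATE STATISTICS dbo.BigTable IX_BigTable_Key WITH FULLSCAN;
```

That is why it's hard to tell from the outcome alone whether the fix was the defragmentation, the fresh statistics, or simply the recompile.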
For better assistance in answering your questions, please read this.
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop
Jeff Moden
February 5, 2015 at 12:07 pm
I don't have the execution plan. What I know is that after the index was rebuilt (actually, I think they updated statistics - they told me they ran RUNSTATS), the query in SSMS which had been running for almost 1/2 hour returned to its normal run time of a couple of seconds.
February 5, 2015 at 2:36 pm
Tom John-342103 (2/5/2015)
I don't have the execution plan. What I know is that after the index was rebuilt (actually, I think they updated statistics - they told me they ran RUNSTATS), the query in SSMS which had been running for almost 1/2 hour returned to its normal run time of a couple of seconds.
The thing I find most interesting is that performance went from fine to awful essentially overnight, with no degradation over time. I know the size of the underlying tables did not change much, although one of the tables has a lot of insert and delete activity. I guess some threshold was hit that made SQL Server decide an index scan, instead of a seek, was necessary.
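That guess is plausible: before SQL Server 2016, a statistic is auto-updated only after roughly 500 + 20% of the table's rows have been modified, which on a 700-million-row table is about 140 million changes, so heavy insert/delete churn can skew statistics long before the automatic refresh ever fires. Trace flag 2371 is one documented way to make that threshold scale down on large tables:

```sql
-- Enable the dynamic statistics auto-update threshold globally
-- (available from SQL Server 2008 R2 SP1; the behavior is on by
-- default starting with SQL Server 2016)
DBCC TRACEON (2371, -1);
```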