February 11, 2016 at 8:12 am
Hello,
I'm in a strange situation with one large table. It initially had 900 million rows (4 columns: 3 ints and 1 char(1)). More than half of the rows were moved out, and the final state is 390 million rows. It has one non-clustered index and a clustered PK.
Before the data was moved, for this select:
select * from userdata where [key] = 'value' it did an index seek on 'key' (very fast result).
Afterwards, it does an index scan on the PK and execution time increased 2, even 3 times. If I add WITH (FORCESEEK) the behavior is like before the data move. I noticed that this happens when 'key' matches more than 1 million rows, but I don't think that is necessarily a rule. The statistics are up to date and the indexes have been rebuilt.
In the PK scan case the Estimated Number of Rows (1,492,080) is pretty close to the Actual Number of Rows (1,487,899).
Where should I look?
Thank you!
February 11, 2016 at 1:28 pm
Is the plan with the index seek *only* an index seek? Or is it a combination of an index seek, a nested loops, and a key lookup?
I assume the latter. And in that case, the explanation has to do with the tipping point in the costing model.
Compare this to books. Suppose I gave you a copy of the Lord of the Rings trilogy, expanded with an index listing all characters and the pages where they appear. If I asked you to review all mentions of Galadriel, you would probably use the index to find the handful of pages where she appears. But if you had to list all references to Frodo, it would probably be faster (and definitely more fun) to simply reread the books. At some point the number of pages exceeds a tipping point, where all the leafing between index and main text and back again starts to take more time than reading the entire thing. That tipping point is not a fixed number of page references in the index entry; it is a ratio of the number of references to the total number of pages in the book.
I assume that for your table, before the purge the statistics resulted in an estimated rowcount that was just below the tipping point for the query. After the cleanup, the total number of rows has probably gone down way more than the number of matches, so now the estimated number of rows found as a percentage of the total table size has gone up. And has passed the tipping point.
Can you run the query with and without the FORCESEEK hint and with SET STATISTICS IO turned on? That will show you the actual amount of pages that have to be read.
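As a sketch, the suggested measurement could look like this; the table and column names (userdata, [key]) are taken from the post above, and 'value' is a placeholder for the actual search value:

```sql
SET STATISTICS IO ON;

-- Unhinted: the optimizer currently picks the clustered index scan
SELECT * FROM userdata WHERE [key] = 'value';

-- Hinted: force the seek on the nonclustered index plus key lookups
SELECT * FROM userdata WITH (FORCESEEK) WHERE [key] = 'value';

SET STATISTICS IO OFF;
```

The "logical reads" figure in the Messages tab then shows how many pages each plan touches, which is the number the costing model is effectively trading off.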
February 12, 2016 at 1:29 am
You are right, the index is not covering, and the plan does a nested loop join with a key lookup.
With forceseek I got this:
Scan count 1, logical reads 6046852, physical reads 0, read-ahead reads 570, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
without:
Scan count 1, logical reads 2319106, physical reads 694, read-ahead reads 2315361, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
February 12, 2016 at 1:55 am
manole_i (2/12/2016)
You are right, the index is not covering, and the plan does a nested loop join with a key lookup. With FORCESEEK I got this:
Scan count 1, logical reads 6046852, physical reads 0, read-ahead reads 570, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
without:
Scan count 1, logical reads 2319106, physical reads 694, read-ahead reads 2315361, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
There are two interesting things I see in those results.
First - the number of logical reads for the query with FORCESEEK is about 2.5 times the number of logical reads for the unhinted query. So I understand why the optimizer makes its choice: if the estimated rowcounts are correct, then the QO expected those I/O amounts. Also, because the scan is sequential, read-ahead optimization can be used (and is used, as shown in the results).
Second - the forceseek plan does logical IO only and the unhinted plan does a lot of physical IO. This is not taken into account for the plan choice (because the QO does not know what is and what isn't in the buffer pool, and because this could be different when the plan is later reused), but it could be an explanation of why the cheaper plan ended up taking more time than the more expensive plan.
If you want to dig deeper into this, then you will have to look at what other activities in your workload cause this difference. What surprises me most is that there are no physical reads at all for the FORCESEEK plan - an obvious explanation would be that the nonclustered index is in the buffer pool and the clustered index is not, but then the lookups would incur physical reads. As it stands, it looks as if exactly those pages it needs from the clustered index are in the buffer pool, while the rest of the clustered index is not.
Did you perhaps run the forceseek query once without measuring, then run it again with the measurement and then run the unhinted version?
For a fair comparison, either forcefully flush the buffer pool before running each query (by using DBCC DROPCLEANBUFFERS - but do not do that on a production server!!), or ensure that each query can employ the buffer pool fully (by doing two executions of the same query in a row and measuring only the second).
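A minimal sketch of the two fair-comparison approaches, again using the names from the post (test server only - DBCC DROPCLEANBUFFERS evicts the entire buffer pool):

```sql
-- Cold-cache test: flush the buffer pool first
CHECKPOINT;               -- write dirty pages so DROPCLEANBUFFERS can evict them
DBCC DROPCLEANBUFFERS;    -- never run this on a production server!
SELECT * FROM userdata WHERE [key] = 'value';

-- Warm-cache test: run the query twice, measure only the second execution
SELECT * FROM userdata WHERE [key] = 'value';  -- priming run, discard timings
SELECT * FROM userdata WHERE [key] = 'value';  -- measured run
```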
Oh, of course - if there is something going on as a regular process in your workload that causes these weird differences in buffer pool usage, then you may end up deciding that you prefer the more expensive query. But for now, I assume that the way you ran your tests has influenced the results.
February 15, 2016 at 4:29 am
Indeed, I ran the query a few times. I will redo the tests after cleaning up the buffers. It was only me on the machine; no other activity could interfere. In the meantime this task went to the background, others came along with higher priority, and I found some workarounds:
1. declare @var int = 'value'
select * from userdata where [key] = @var - this does an index seek
2. the index on 'key' should be modified to contain the other columns with INCLUDE
3. the FORCESEEK hint
Thank you for your replies and hints!
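For reference, workaround 2 (a covering index) might be sketched like this; the index name and the INCLUDE column names are assumptions, since the post only says the table has three other columns (3 ints and 1 char(1)):

```sql
-- Hypothetical covering index for workaround 2; substitute your real
-- index name and the table's actual remaining column names.
CREATE NONCLUSTERED INDEX IX_userdata_key_covering
    ON userdata ([key])
    INCLUDE (int_col1, int_col2, char_col);
```

With every selected column available in the index leaf pages, the key lookup disappears entirely, so the tipping point no longer applies to this query regardless of how many rows match.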