March 20, 2012 at 8:04 am
I have a table with a few hundred million rows in it, and that table has a nonclustered index on it.
If only part of the index is required for a join condition, will SQL Server load the entire index into memory and then use only the required part of it, or can it load only the part of the index it needs into memory and then perform the join?
Thanks
March 20, 2012 at 8:11 am
Depends on whether the query is resolved with an index scan or with lots of index seeks. Check the execution plan.
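For example, one way to capture the actual plan from T-SQL (SSMS can also show it via "Include Actual Execution Plan"; the table and predicate here are just placeholders):

DECLARE @dummy INT; -- placeholder so the batch below is self-contained

-- Returns the actual execution plan as XML alongside the results.
-- In the plan, look for an Index Seek vs. an Index Scan on the index in question.
SET STATISTICS XML ON;

SELECT Col4
FROM SchemaName.TableName
WHERE Col1 = 1 AND Col2 = 2;   -- hypothetical filter/join predicate

SET STATISTICS XML OFF;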
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 20, 2012 at 8:13 am
True, it's an index seek.
March 20, 2012 at 8:40 am
I've set up a test case.
The index takes around 450 MB.
I built a specific query to make sure an index seek occurs (which was confirmed by the plan).
Upon checking, the index was taking 108 MB in memory, a quarter of the index, when only 8 rows were returned (it is a non-unique index).
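For reference, one way to measure how much of an index is sitting in the buffer pool is to count its pages in sys.dm_os_buffer_descriptors; a sketch using the placeholder names from the definition below:

-- Count buffer-pool pages (8 KB each) belonging to each index of the table.
SELECT  i.name                AS index_name,
        COUNT(*)              AS pages_in_memory,
        COUNT(*) * 8 / 1024.0 AS mb_in_memory
FROM    sys.dm_os_buffer_descriptors AS bd
JOIN    sys.allocation_units         AS au
        ON au.allocation_unit_id = bd.allocation_unit_id
JOIN    sys.partitions               AS p
        ON p.hobt_id = au.container_id
       AND au.type IN (1, 3)          -- IN_ROW_DATA, ROW_OVERFLOW_DATA
JOIN    sys.indexes                  AS i
        ON i.object_id = p.object_id
       AND i.index_id  = p.index_id
WHERE   bd.database_id = DB_ID()
  AND   p.object_id    = OBJECT_ID('SchemaName.TableName')
GROUP BY i.name;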
Here's the index definition if you'd like:
CREATE NONCLUSTERED INDEX [NameOfIndex]
ON [SchemaName].[TableName]
(
[Col1] ASC,
[Col2] ASC,
[Col3] ASC
)
INCLUDE ( [Col4])
GO
Col1, Col2, and Col3 are INT; Col4 is a DECIMAL(18,6).
I will need to check the index B-tree depth and compute how many pages must be read to get through to the leaf level (an intermediate page should be able to store 670 entries (671 - 1)). But I believe the B-tree depth should be quite small. So why is it taking 108 MB for that part of the index?
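A quick way to get the depth and the page count per level, again sketched with the placeholder names (note that DETAILED mode walks the whole index, so it is slow on a large table):

-- index_level 0 is the leaf; the highest level is the root page.
-- INDEXPROPERTY(OBJECT_ID('SchemaName.TableName'), 'NameOfIndex', 'IndexDepth')
-- gives just the depth if that's all you need.
SELECT  ips.index_level,
        ips.page_count,
        ips.record_count,
        ips.avg_record_size_in_bytes
FROM    sys.dm_db_index_physical_stats(
            DB_ID(),
            OBJECT_ID('SchemaName.TableName'),
            NULL, NULL, 'DETAILED') AS ips
JOIN    sys.indexes AS i
        ON i.object_id = ips.object_id
       AND i.index_id  = ips.index_id
WHERE   i.name = 'NameOfIndex'
ORDER BY ips.index_level;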
FYI: the index is newly created with a 100% fill factor and PAD_INDEX = OFF for near-maximum compaction.
March 20, 2012 at 8:43 am
Hard to say anything for a hypothetical situation.
Run DBCC DROPCLEANBUFFERS, then run your query with SET STATISTICS IO ON. The number of physical reads done on that table is the number of pages pulled into memory.
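For example, a minimal sketch (dev/test only, since clearing the buffers affects the whole instance; table and predicate are the placeholders from above):

-- Flush dirty pages to disk first so DROPCLEANBUFFERS can evict everything.
CHECKPOINT;
-- Remove all clean pages from the buffer pool (do NOT run this in production).
DBCC DROPCLEANBUFFERS;

-- Report logical and physical reads per table in the Messages tab.
SET STATISTICS IO ON;

SELECT Col4
FROM SchemaName.TableName
WHERE Col1 = 1 AND Col2 = 2 AND Col3 = 3;   -- hypothetical seek predicate

SET STATISTICS IO OFF;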
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 20, 2012 at 9:18 am
Thank you for your time, Gail.
I now get only 4 reads, and memory use is kept to a minimum.
I'll need to investigate why the initial run took more than 100 MB for the same plan.
I had run DBCC DROPCLEANBUFFERS but not SET STATISTICS IO ON that first time, so I'm missing some investigation data for that run 🙁