Index Usage Prior to 2014 Upgrade from 2008R2

  • Hello All,
    We recently upgraded (side by side) our oldest SQL Server box from 2008 R2 to 2014 Standard, onto a VM running Windows Server 2012 Datacenter (the OS was also upgraded, from 2008 Standard).  Overall, my preliminary results during pre-migration testing showed the new server completing SQL Agent jobs and queries faster than the old one.  This part isn't really relevant, but some think it is: the old box had 24 CPU cores (2.5 GHz) and 64 GB of memory, but it was running Windows Server 2008 Standard, which can only use 32 GB of it.  The new box runs Windows Server 2012 Datacenter with 64 GB of memory (58 GB allocated to SQL, leaving 6 GB for the OS), but only has 12 CPU cores (2.7 GHz).  So the new machine has 32 GB of additional usable RAM, but a little less than half the CPU cores; the previous DBA stated that the machine was never really using all the CPUs allocated anyway.

    My problem now is that a couple of random queries from in-house applications are not returning in a timely manner, if at all.  When viewing the Estimated Execution Plan, I see that the proper index is not being used, yet it was being used on 2008 R2.  After the backup and restore of all DBs to the new instance, I set the compatibility level to 120, updated page usage (DBCC UPDATEUSAGE), and then updated statistics (EXEC sp_updatestats) for all DBs on the instance, in that order.
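
    The sequence looked roughly like this per database (MyDB is a placeholder):

    -- Rough post-restore sequence, run per database (MyDB is a placeholder)
    ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 120;
    DBCC UPDATEUSAGE (MyDB);   -- correct any page/row count inaccuracies
    USE MyDB;
    EXEC sp_updatestats;       -- refresh statistics after the restore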

    I even tried rebuilding the index that the query used in 2008 R2 but is not using in 2014, and created a new one with the proper predicate and output list, but it still doesn't use either index unless I force it to with a query hint.  Based on a question I had posted in this forum a few months ago, I had learned about SARGable queries, so I figured I would check that out, and wouldn't you know it, the predicate is using a LIKE with a % at both the beginning and the end.  From what I learned, I removed the % at the beginning and left the one at the end ('MyName%' versus '%MyName%'), and sure enough, it used the proper index and returned lightning fast.

    So my main question here is: did the Cardinality Estimator change so dramatically from 2008 R2 to 2014 that the same query stops using an index it had used for many years, or is it possible I have missed something, or could handle this in a different manner without changing the query?  Obviously, I'm getting questioned by Development and the Development Managers about why the query suddenly stopped performing after the 2014 migration, since they will have to rewrite the query and send the application through the SDLC process again, and they think I've missed a setting or something.  Any help at all is greatly appreciated...

    To recap...
    Will not use the index without a query hint:
    AND pat.LastName like '%SomeName%'
    AND pat.FirstName like '%SomeName%'

    Will use the index without a query hint:
    AND pat.LastName like 'SomeName%'
    AND pat.FirstName like 'SomeName%'
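
    For completeness, the only way I can get the leading-wildcard version to use the index is a hint along these lines (table and index names changed):

    -- Illustrative only: forcing the index the optimizer won't pick on its own
    SELECT pat.FirstName, pat.LastName
    FROM dbo.Patient AS pat WITH (INDEX (IX_Patient_LastName_FirstName))
    WHERE pat.LastName LIKE '%SomeName%'
    AND pat.FirstName LIKE '%SomeName%';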

    Thanks,
    Chris

  • The query won't use the NCI because of the leading wildcard character, which you've already accurately identified.

    The reason becomes clear when you think about what the index actually is: duplicated data stored in a very specific structure.  That structure is a B-tree, sorted according to the key column(s); specifically, it is sorted by the whole string, starting with the first character.  The query isn't searching for rows based on the first character, so it cannot seek on the index, because the data isn't in an order it can use.

    Take a look at your actual execution plan on the old machine, and I think the best you'll find is a scan of the NCI.  While I don't know your table structure, indexes, and query, my guess is that the optimizer chose the NCI scan to save a lot of reads compared to scanning the entire CI.  There's no way to know without testing.
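
    You can see the difference for yourself with a quick comparison like this (names illustrative); check the actual plan and the read counts for each:

    -- Leading wildcard: an index scan at best, since the sort order can't help
    -- Trailing wildcard only: a range seek on the index
    SET STATISTICS IO ON;
    SELECT LastName FROM dbo.Patient WHERE LastName LIKE '%Ham%';  -- scan
    SELECT LastName FROM dbo.Patient WHERE LastName LIKE 'Ham%';   -- seek
    SET STATISTICS IO OFF;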

    The cardinality estimator did change with SQL 2014, the first change in a long time.  It looks at the statistics differently than previous versions did, but I haven't dug into the differences deeply enough yet to post with confidence.  Because the rules have changed, it will make different decisions, which is why the kind of testing you're doing right now is so important.
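
    One experiment worth knowing about (not a recommendation): the new estimator only kicks in at compatibility level 120, so you can drop a single database back to 110 and compare plans:

    -- Which databases are on level 120 (and thus the new estimator)?
    SELECT name, compatibility_level FROM sys.databases;

    -- Experiment only: level 110 and below use the legacy estimator
    ALTER DATABASE MyDB SET COMPATIBILITY_LEVEL = 110;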

    As for the problematic query, I don't know of a way to make a query with a leading wildcard do an index seek unless you write your own version of an NCI, but that's a bit of work.  If you're interested in trying it, I'd recommend SQL Server MVP Deep Dives, Volume 1.  Erland Sommarskog wrote a chapter on this exact situation, and he does a great job of explaining it.
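
    The rough shape of that roll-your-own approach, just as a sketch (the names are made up, and keeping the fragment table in sync is the real work):

    -- Sketch: store substring fragments of the searched column in their own
    -- table so a leading-wildcard search becomes a range seek on Fragment
    CREATE TABLE dbo.LastNameFragments
    (
        Fragment  varchar(50) NOT NULL,
        PatientID int         NOT NULL,
        CONSTRAINT PK_LastNameFragments PRIMARY KEY (Fragment, PatientID)
    );

    -- '%Ham%' against the base table turns into this seek instead
    SELECT DISTINCT f.PatientID
    FROM dbo.LastNameFragments AS f
    WHERE f.Fragment LIKE 'Ham%';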

  • Hi Ed,
    Thanks so much for your reply and confirmation.  I started reading through Microsoft tech articles and found a good one on the new Cardinality Estimator in 2014, confirming that it was the first new estimator in a very long time.  I put that information to use, started testing, and then made recommendations to the team; hopefully they will listen.  I recommended a three-pronged approach:

    1. To ease the immediate pain and get the Operations teams going, enable Trace Flag 9481 globally, which uses the legacy cardinality estimator and makes the query return in a second or so, like it did on the old 2008 R2 instance.  I told them this is not a long-term solution, as you're throwing out all the good the new estimator has to offer in order to satisfy a poorly written query.
    2. Have the development team revise the problematic query to use a SARGable predicate without the leading %, or, if removing the leading % isn't feasible, apply the trace flag at the query level with OPTION (QUERYTRACEON 9481), or possibly force the appropriate index.  Yes, the latter two are poor practice, but it's one of those case-by-case things, I guess (both trace flag forms are sketched below).  I explained it with the phone book analogy: searching a phone book for "HAM" anywhere inside a name, versus searching for names that begin with "HAM".
    3. Add more CPU cores to match what the old box had, and also add memory, since it's a relatively cheap performance gain; I mean, how much more does 128 GB cost than 64 GB, relative to the benefit?  I also started reading about Page Life Expectancy (PLE), and wow.

    Once #2 has been completed, disable the trace flag at the global level and move on; if we hit another problematic query, lather, rinse, and repeat.
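
    Roughly what I proposed, for reference (table and column names are illustrative):

    -- #1 (stopgap): force the legacy estimator instance-wide
    DBCC TRACEON (9481, -1);

    -- Interim form of #2: scope the legacy estimator to just the problem query
    SELECT pat.FirstName, pat.LastName
    FROM dbo.Patient AS pat
    WHERE pat.LastName LIKE '%SomeName%'
    AND pat.FirstName LIKE '%SomeName%'
    OPTION (QUERYTRACEON 9481);

    -- Once the rewrite from #2 ships, turn the global flag back off
    DBCC TRACEOFF (9481, -1);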

    https://docs.microsoft.com/en-us/sql/relational-databases/performance/cardinality-estimation-sql-server

    Thanks for your help,
    Chris

  • chris_barnhart - Tuesday, May 2, 2017 6:32 PM

    ...I recommended a three-pronged approach... Once #2 has been completed, disable the trace flag at the global level and move on...

    Be careful about adding more cores to the server.  I believe per-core licensing started with SQL 2012, so there's going to be an extra cost there.  You don't need to give your boss that kind of surprise after they just spent a bunch of money on a new server and licensing.

    Also, the edition you're running dictates the maximum amount of memory SQL Server will use.  According to the latest MSDN page at https://msdn.microsoft.com/en-us/library/cc645993(v=sql.120).aspx, Standard Edition will use up to 128 GB.  It used to be 64 GB, but either they've updated the limit or their own page is wrong.
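
    If it helps, a quick way to confirm what the instance reports, and to cap memory at the 58 GB you mentioned (expressed in MB):

    -- Confirm edition/version before planning around the memory limit
    SELECT SERVERPROPERTY('Edition') AS Edition,
           SERVERPROPERTY('ProductVersion') AS Version;

    -- Example only: cap SQL Server at 58 GB (59392 MB), leaving the rest to the OS
    EXEC sys.sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure 'max server memory (MB)', 59392;
    RECONFIGURE;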

    Personally, I think your option #2 nails it.  That's the direction I would go in.

    Thanks Ed, you are correct; the license structure is now per core.  We moved this instance from a physical 24-core box to a VM with 12 cores allocated.  I too read a couple of months back that 2014 64-bit Standard accommodates 128 GB of memory.

    I am pushing for option #2, but I have a feeling they will want to do #1 until the application can make it through the SDLC, since some operational folks are being affected and thus losing money.

    This stuff is painful at times, but I will say that when it happens, you start learning all kinds of new things, and seeds of knowledge get planted.

    Thank you very much for the help, I greatly appreciate it.

  • chris_barnhart - Tuesday, May 2, 2017 9:16 PM

    ...We moved this instance from a physical 24 core box to a VM with 12 cores allocated...

    Just a small note on this. Not directly related to your issue.

    You had a server with 24 cores / 48 threads.

    From what you said, you are now running on a VM with 12 cores.

    When you say 12 cores, is that 12 physical cores on the host server, or 12 virtual cores on the VM?

    If it is 12 physical cores, how many CPUs are visible to the VM: 12 or 24?  (You can check with the query further down.)

    There are 2 aspects to this question.
    1 - Performance
    A - If the physical server has 12 cores with HT active and your VM only sees 12 CPUs, then you are running at 1/4 of the previous CPU threads.
    B - If the physical server has 12 cores without HT active and your VM sees 12 CPUs, then you are running at a bit more than 1/4 of the previous CPU threads, but you are not fully using those cores.
    C - If the physical server has 12 cores with HT active and your VM sees 24 CPUs, then you are running at 1/2 of the previous CPU threads, which may be your expectation.

    2 - Licensing
    Since this is Standard Edition running on a VM, all CPUs made available to the VM need to be licensed.  So if your setup is C above, make sure you have licenses for 24 cores, not 12; the 12-core count would only apply if it were a physical machine, not a VM.
    On this point, if you're not using an Enterprise license, a physical machine is better from a performance standpoint, because you get the full CPU power of the server for the same licensing cost.

    Just a minor example, for a physical server with 12 cores / 24 threads:
    Setup 1 - no VM: licensing required for 12 cores (but uses all 24 threads)
    Setup 2 - a single VM with all 24 threads assigned to it: licensing required for 24 cores
    Setup 3 - multiple VMs, with the SQL Server VM assigned 12 vCPUs: licensing required for 12 cores
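
    An easy way to check what SQL Server actually sees inside the VM:

    -- Logical CPUs visible to SQL Server, the HT ratio, and schedulers in use
    SELECT cpu_count, hyperthread_ratio, scheduler_count
    FROM sys.dm_os_sys_info;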

    As for memory, throw 128 GB at it, as that is what Standard Edition now supports.
