Poor query performance when joining a large table with the new CE

  • We have a complex stored procedure in our database. Some of its statements are executed with different query plans depending on the database compatibility level / trace flags. The different query plans lead to very significant differences in the execution time of that stored procedure (from a few seconds to more than one hour).

    See the linked QueryPlans.

    I tested with these configurations on a SQL Server 2019:

    Microsoft SQL Server 2019 (RTM) - 15.0.2000.5 (X64)

    a) compatibility level 100 (best performance) - QueryPlan

    b) compatibility level 150 (poor performance) - QueryPlan

    c) compatibility level 150 + LEGACY_CARDINALITY_ESTIMATION (best performance) - QueryPlan

    insert into AttribInt (oid, aid, category, attrflag, attrStatusFlag, value)
    select tm.new_oid, a.aid, a.category, a.attrflag & ~@ATTRFLAG_FROM_TYPICAL_VALUE, a.attrStatusFlag, a.value
    from AttribInt as a
    join #tblMerge as tm on (a.oid = tm.oid)
    where tm.new_oid is not null
    and tm.matchType = @MR_COPY
    and a.aid <> @aidObjectTag

    The statement above is a join between a temporary table "#tblMerge" with about 300 rows and a data table "AttribINT" with about 230 million rows. We want to copy all rows in AttribINT (about 1,200 rows) that are referenced by OID from the rows in #tblMerge.

    With a) and c) the join is performed with a Nested Loops join and an Index Seek, as expected.

    With b) the join uses a Merge Join with a Clustered Index Scan over the 230 million rows.

    What is the reason for that behavior?

    What can be done about it, other than lowering the compatibility level or activating LEGACY_CARDINALITY_ESTIMATION?

    Thanks for your support!

    Thomas

  • Need to see the DDL for AttribINT, including all index definitions.


  • CREATE TABLE [dbo].[AttribINT](
        [OID] [bigint] NOT NULL,
        [AID] [int] NOT NULL,
        [Category] [tinyint] NOT NULL,
        [AttrFlag] [smallint] NOT NULL,
        [Value] [int] NULL,
        [AttrStatusFlag] [tinyint] NOT NULL,
        CONSTRAINT [PK__IAttribINT] PRIMARY KEY CLUSTERED
        (
            [OID] ASC,
            [AID] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
    ) ON [PRIMARY]
  • I'd suggest first trying to create a nonclustered index on ( OID, AID ). Cluster the temp table on ( OID /*and AID if available*/ ), as sketched after the index definition below.

    CREATE UNIQUE NONCLUSTERED INDEX IX1_IattribINT ON dbo.AttribINT ( OID, AID ) WITH ( FILLFACTOR = 98 ) ON [PRIMARY];
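
    A minimal sketch for the temp table side of that suggestion (the column name oid is taken from the query above; the index name here is just illustrative, not from the original post):

    CREATE CLUSTERED INDEX IX_tblMerge_oid ON #tblMerge ( oid );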

     


  • Some queries benefit from the cardinality estimator change (2014); some do not. Whether it's most or a few in your situation is hard to determine without analyzing a larger workload. If you are on 2019, I don't know that I'd want to change this for all queries without some way to tell whether it helps or hurts many of them.

    In the short term, you can set a query hint (https://docs.microsoft.com/en-us/sql/t-sql/queries/hints-transact-sql-query?view=sql-server-ver15#use_hint) for this query, as sketched below.
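
    For example, a minimal sketch that forces the legacy estimator for this one statement via USE HINT (the query itself is unchanged from the original post):

    insert into AttribInt (oid, aid, category, attrflag, attrStatusFlag, value)
    select tm.new_oid, a.aid, a.category, a.attrflag & ~@ATTRFLAG_FROM_TYPICAL_VALUE, a.attrStatusFlag, a.value
    from AttribInt as a
    join #tblMerge as tm on (a.oid = tm.oid)
    where tm.new_oid is not null
    and tm.matchType = @MR_COPY
    and a.aid <> @aidObjectTag
    option (use hint ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));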

    I would: a) tune this query, as Scott suggested; b) start setting up a sample workload of a number of queries and track the performance of all of them; c) likely switch the cardinality estimator with the database scoped configuration (https://docs.microsoft.com/en-us/sql/t-sql/statements/alter-database-scoped-configuration-transact-sql?view=sql-server-ver15), as sketched below, and then retest my workload.
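
    A minimal example of the database scoped configuration route (run in the target database; note that it affects all queries in that database, which is why retesting the workload matters):

    ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;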

  • If the main table is really that narrow (a few bytes per row), then compress the nonclustered index to reduce the page count even more:

    CREATE UNIQUE NONCLUSTERED INDEX IX1_IattribINT ON dbo.AttribINT ( OID, AID ) 
    WITH ( DATA_COMPRESSION = PAGE, FILLFACTOR = 98 ) ON [PRIMARY];


  • Have you tried "inner hash join" instead of "join"?

    select tm.new_oid, a.aid, a.category, a.attrflag & ~@ATTRFLAG_FROM_TYPICAL_VALUE, a.attrStatusFlag, a.value
    from AttribInt as a
    inner hash join #tblMerge as tm
    on tm.oid = a.oid
    and tm.new_oid is not null
    and tm.matchType = @MR_COPY
    where a.aid <> @aidObjectTag
  • This white paper is worth reading:

    https://docs.microsoft.com/en-us/previous-versions/dn673537(v=msdn.10)?redirectedfrom=MSDN

  • It seems that a.oid and tm.oid are of different data types.

    Any reason for that?


  • Thanks all for your replies.

    I just tested the "inner hash join" (Jonathan), but the table/index scan is still used.

    @steve-2: I am using the database scoped configuration for LEGACY_CARDINALITY_ESTIMATION.

    This gives me the best performance at the moment.

    @sergiy: both OID fields are of the same data type, BIGINT.

    ... still testing.

    Thomas

  • Almost as a sidebar, try a couple of things on your original query (posted below just so you don't have to look back); they've worked for me in the past.

    insert into AttribInt (oid, aid, category, attrflag, attrStatusFlag, value)
    select tm.new_oid, a.aid, a.category, a.attrflag & ~@ATTRFLAG_FROM_TYPICAL_VALUE, a.attrStatusFlag, a.value
    from AttribInt as a
    join #tblMerge as tm on (a.oid = tm.oid)
    where tm.new_oid is not null
    and tm.matchType = @MR_COPY
    and a.aid <> @aidObjectTag

    1. Add WITH (TABLOCK) to the "insert into AttribInt".
    2. Because SQL Server sometimes doesn't estimate the execution plan correctly until the values of the variables are known, try adding an OPTION (RECOMPILE) to the end of the code. (Both changes are sketched together below.)
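
    A minimal sketch of the statement with both suggestions applied (everything else unchanged from the original):

    insert into AttribInt with (tablock) (oid, aid, category, attrflag, attrStatusFlag, value)
    select tm.new_oid, a.aid, a.category, a.attrflag & ~@ATTRFLAG_FROM_TYPICAL_VALUE, a.attrStatusFlag, a.value
    from AttribInt as a
    join #tblMerge as tm on (a.oid = tm.oid)
    where tm.new_oid is not null
    and tm.matchType = @MR_COPY
    and a.aid <> @aidObjectTag
    option (recompile);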

    If that doesn't work (it won't hurt), we may have to split this puppy up just to get away from the idea of reading from the same table that we're inserting into. Yeah... I know... SQL Server is supposed to have stuff in it to prevent those types of problems, but I've also seen that stuff not work as expected.

    --Jeff Moden


