Need Correct Index for slow query

  • This is the query the app uses, and it takes 30 seconds.

    select * from tablename FOR XML AUTO, ELEMENTS

    The table is 1.8 GB with 1.5 million rows, and the table definition is:
    CREATE TABLE [dbo].[tablename](
        [enh_id] [int] NOT NULL,
        [emailBCC] [text] NULL,
        [emailBody] [text] NULL,
        [emailSubject] [varchar](255) NULL,
        [timeSent] [datetime] NULL,
        [sigorig] [varchar](255) NULL,
        [sigid] [varchar](255) NULL,
        [en_id] [int] NULL
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

    GO

    SET ANSI_PADDING OFF
    GO

    There is a primary-key clustered index on [enh_id], but when I look at the execution plan I see a full Index Scan, no Seek.

    Also, since the columns being used are text, I cannot add an index! Any suggestions to speed up the query are welcome.

    Thanks

  • You are selecting all rows in the table, so there is nothing to 'seek' and a scan will always happen.
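    For illustration (the predicate here is hypothetical; the app would have to supply such a filter), a query restricted on the clustered key would let the optimizer seek:

    ```sql
    -- Hypothetical example: with a predicate on the clustered primary key,
    -- the plan shows a Clustered Index Seek instead of a full scan.
    SELECT *
    FROM dbo.tablename
    WHERE enh_id = 42
    FOR XML AUTO, ELEMENTS;
    ```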

    The absence of evidence is not evidence of absence.
    Martin Rees

    You can lead a horse to water, but a pencil must be lead.
    Stan Laurel

  • Out of interest, why ARE you using the TEXT data type? It's deprecated, and has been since SQL Server 2008, and if you're using 2016 you really should be using (n)varchar(max).

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thanks Thom.
    That table is used by legacy code.

    I will ask them to update the column data type to nvarchar(max).

    The question is: will the import work just as it did with text, with no data getting messed up?

    Secondly, can you suggest some indexes to speed up the query on this large table? The definition and query are above.

    Thanks

  • sqlguy80 - Sunday, March 12, 2017 9:48 AM

    Thanks Thom.
    That table is used by legacy code.

    I will ask them to update the column data type to nvarchar(max).

    The question is: will the import work just as it did with text, with no data getting messed up?

    Secondly, can you suggest some indexes to speed up the query on this large table? The definition and query are above.

    Thanks

    Which import?

    Indexes do not speed up non-filtered queries, as far as I know.

    The absence of evidence is not evidence of absence.
    Martin Rees

    You can lead a horse to water, but a pencil must be lead.
    Stan Laurel

  • Do NOT convert your TEXT columns to NVARCHAR(MAX).  It's a waste of disk space, memory, and IO, because NVARCHAR uses two bytes per character instead of just one.  You would actually double the amount of time for your query to run, your DBA will hate you because you'd nearly double the size of the table, and your import performance will suffer as well.

    Since you started with the TEXT datatype, use VARCHAR(MAX), instead.

    As to performance, as Phil mentioned earlier, you are processing all rows and columns in the entire table, and the TEXT columns (and any future VARCHAR(MAX) columns) are stored "out of row", which causes things to slow down by 2 to 4 times.  Check the length of the data in those columns.  If it never exceeds 8,000 bytes, use VARCHAR(8000) for the column datatype instead.
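    If the legacy code does move off TEXT, a conversion sketch (hypothetical; it assumes the column names above and should be tested on a copy first) might look like:

    ```sql
    -- Change the LOB types in place; existing data is preserved.
    ALTER TABLE dbo.tablename ALTER COLUMN emailBCC  varchar(max) NULL;
    ALTER TABLE dbo.tablename ALTER COLUMN emailBody varchar(max) NULL;

    -- Existing values stay out-of-row until rewritten; this no-op update
    -- lets values that fit (up to ~8,000 bytes) move back in-row.
    UPDATE dbo.tablename
       SET emailBCC = emailBCC, emailBody = emailBody;
    ```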

    If you really want performance, stop using XML. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Some things to consider:

    You'll most likely access these rows by a datetime range.  I'm guessing the actual query might even have such a date restriction, or, if it doesn't, one could perhaps be added (do you really want to see, say, 6-month-old email info?).  If so, change the clustered index to:
    ( [timeSent], [enh_id] )
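    A sketch of that change (the constraint name PK_tablename is hypothetical; rebuilding re-sorts the whole 1.8 GB table, so do it in a maintenance window):

    ```sql
    -- Drop the existing clustered PK, recluster on the date column,
    -- then re-add the PK as a nonclustered constraint.
    ALTER TABLE dbo.tablename DROP CONSTRAINT PK_tablename;

    CREATE CLUSTERED INDEX IX_tablename_timeSent_enh_id
        ON dbo.tablename (timeSent, enh_id);

    ALTER TABLE dbo.tablename
        ADD CONSTRAINT PK_tablename PRIMARY KEY NONCLUSTERED (enh_id);
    ```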

    Encode the [sigorig] and [sigid] values, i.e., store a smallint (or int, if needed) instead of a varchar, and keep the varchar values in a separate lookup table.

    Is [emailBCC] the list of bcc recipients?  If so, I'd encode that list as well, since they probably tend to repeat.
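    As a sketch of the lookup-table idea (table and column names are hypothetical):

    ```sql
    CREATE TABLE dbo.sig_lookup (
        sig_key   smallint IDENTITY(1,1) NOT NULL PRIMARY KEY,
        sig_value varchar(255) NOT NULL UNIQUE
    );

    -- The main table would then carry the small key instead of the varchar:
    --   [sigorig_key] smallint NULL
    --       REFERENCES dbo.sig_lookup (sig_key)
    ```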

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
