Any Opinions On Computed Columns & Performance

  • Hello all

    Wondering if anyone has had any experience & is willing to offer opinions on computed columns & their impact on performance. As an example, I'm dealing with a client's legacy db ported directly to SQL2K. The most queried table -- selects & updates -- has about 110 thousand (110k) records and approx. 75 columns, 25 of which are computed columns. Some of these (maybe 15) need to be indexed to provide lookup capabilities. Assuming we can live with the database settings requirements, will the added overhead of maintaining indexes on these computed columns be too much of a performance hit, is this overhead roughly the same as a regular column, or is it livable?
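    For reference, a hedged sketch of what indexing one of those computed columns involves (dbo.Client and the column names here are invented, not from the actual schema). SQL 2000 requires certain SET options to be on both when the index is created and when the table is updated:

    ```sql
    -- These settings must be on for indexes on computed columns (SQL 2000).
    SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS, ARITHABORT,
        CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
    SET NUMERIC_ROUNDABORT OFF;

    -- Hypothetical computed column derived from two stored columns.
    ALTER TABLE dbo.Client
    ADD FullName AS LastName + ', ' + FirstName;

    -- Indexing it; the expression must be deterministic and precise.
    CREATE INDEX IX_Client_FullName ON dbo.Client (FullName);
    ```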

    Thanks for any input,

    Vik

  • I tried this previously with a table of more than 5,000,000 rows and saw no issue. I have since gotten rid of that DB, so I have no real details beyond before-and-after comparisons, but no noticeable strain or duration differences were noted on the server.

  • Consider using indexed views on the tables if the data is relatively static. The server materializes a physical table with the values as columns. The computation is performed once and then it is stored as if it's a regular column. When you update the row, the materialized data is updated too.

    This is a new feature with SS2k.
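    As a sketch (the table and column names below are invented, not from the thread), an indexed view in SQL 2000 requires SCHEMABINDING, two-part object names, and a unique clustered index to materialize it:

    ```sql
    -- Hypothetical base table: dbo.Orders(OrderID, Qty, Price).
    CREATE VIEW dbo.vOrderTotals
    WITH SCHEMABINDING
    AS
    SELECT OrderID,
           Qty * Price AS LineTotal   -- computed once, then stored
    FROM dbo.Orders;
    GO
    -- Materializes the view; queries can now use the stored values.
    CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
        ON dbo.vOrderTotals (OrderID);
    ```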

  • The benefit of indexed computed columns when they are searched on will outweigh any performance hit (if there is one) of maintaining them.

    Simon Sabin
    SQL Server MVP

    Co-author of SQL Server 2000 XML Distilled
    http://www.amazon.co.uk/exec/obidos/ASIN/1904347088

    http://sqlblogcasts.com/blogs/simons

  • Working with a 100,000-row table of numeric values and examining the execution plan, here's what I found:

    select a, b, c, d                                       -- subtree cost 1.23
    select a + b, c + d, a, b, c, d                         -- subtree cost 1.23
    select a * b, c * d, a, b, c, d                         -- subtree cost 1.23
    select convert(money, a + b), a + b, c + d, a, b, c, d  -- subtree cost 1.23

    The only difference I saw was the addition of a Compute Scalar step. Off the top of my head I would have expected a higher cost difference than this, but I repeated the same tests on several of my tables and the results were the same. I could not produce a cost difference until I worked with more than 5 million rows, and even then the difference was only in the last digit (1.24 vs. 1.23).

    Regardless, though, I agree with the indexed view advice. It follows the pre-aggregated data concepts we've been using for years.

  • If the columns could possibly be Null, how would you handle this?

  • First off, thanks to everyone above for their advice & input.

    quentinjs,

    If, in effect, a computed column is treated like a regular column for purposes of indexing, why would it matter if its value were null?

    Vik

  • Well, I have a case where I have 4 fields (which may be Null), and I want to create a computed total of them (Nulls should be treated as 0).

    Thanks!

    Quentin

  • Quentin

    Try using the ISNULL function, like so:

    colvalue = isnull(col1, 0) + isnull(col2, 0) + isnull(col3, 0) + isnull(col4, 0)

    This will return each col? value as 0 if it starts out null.

    Check out BOL -- Books Online -- for more detail on ISNULL.

    Vik
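    Spelled out as a computed-column definition (the table name dbo.MyTable is a placeholder, not from an actual schema):

    ```sql
    -- Hypothetical table; col1..col4 are nullable numeric columns.
    ALTER TABLE dbo.MyTable
    ADD colvalue AS isnull(col1, 0) + isnull(col2, 0)
                  + isnull(col3, 0) + isnull(col4, 0);
    ```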

  • This is probably asking too much, but can you do IF/ELSE-type conditionals? Are computed columns a SQL 2000 feature, or does version 7.0 also support them?

  • For that you use a CASE expression. See SQL BOL for more information.
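    A minimal sketch (table and column names invented) of a CASE expression inside a computed column:

    ```sql
    -- Hypothetical: classify rows by quantity at definition time.
    ALTER TABLE dbo.Orders
    ADD SizeClass AS CASE WHEN Qty > 100 THEN 'Large' ELSE 'Small' END;
    ```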
