Any better way of doing this query?

  • Create table Table1 (id1 int, id2 int, id3 int, id4 int)

    Create table table2 (id int not null identity(1,1), description varchar(100))

    Insert into Table1 values (1,2,3,4)
    Insert into Table1 values (3,4,5,6)
    Insert into Table1 values (2,7,1,9)
    Insert into Table1 values (6,null,8,4)

    Insert into table2 values ('Description1')
    Insert into table2 values ('Description2')
    Insert into table2 values ('Description3')
    Insert into table2 values ('Description4')
    Insert into table2 values ('Description5')
    Insert into table2 values ('Description6')
    Insert into table2 values ('Description7')
    Insert into table2 values ('Description8')
    Insert into table2 values ('Description9')

    Select
    Table1.id1, t1.description,
    Table1.id2, t2.description,
    Table1.id3, t3.description,
    Table1.id4, t4.description
    From Table1
    Left Join table2 t1 on Table1.id1 = t1.id
    Left Join table2 t2 on Table1.id2 = t2.id
    Left Join table2 t3 on Table1.id3 = t3.id
    Left Join table2 t4 on Table1.id4 = t4.id

Is there any better way of achieving this result set?

  • I think this is the simplest way.

    Vinu Vijayan

For better and faster solutions please check..."How to post data/code on a forum to get the best help" - Jeff Moden πŸ˜‰

  • Not sure if this is any better, but here's one way:

    SELECT
    a.id1, MAX(b.d1) AS description,
    a.id2, MAX(b.d2) AS description,
    a.id3, MAX(b.d3) AS description,
    a.id4, MAX(b.d4) AS description
    FROM Table1 a
    CROSS APPLY (SELECT
    CASE WHEN a.id1 = id THEN description END AS d1,
    CASE WHEN a.id2 = id THEN description END AS d2,
    CASE WHEN a.id3 = id THEN description END AS d3,
    CASE WHEN a.id4 = id THEN description END AS d4
    FROM table2
    WHERE id = a.id1 OR id = a.id2 OR id = a.id3 OR id = a.id4) b
    GROUP BY a.id1, a.id2, a.id3, a.id4;


    Forever trying to learn
    My blog - http://www.cadavre.co.uk/
    For better, quicker answers on T-SQL questions, click on the following...http://www.sqlservercentral.com/articles/Best+Practices/61537/
    For better, quicker answers on SQL Server performance related questions, click on the following...http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

  • This is similar to Cadavre's but aggregates within the APPLY:

    Select
    t1.id1, [description] = x.description1,
    t1.id2, [description] = x.description2,
    t1.id3, [description] = x.description3,
    t1.id4, [description] = x.description4
    From Table1 t1
    CROSS APPLY (
    SELECT
    Description1 = MAX(d.Description1),
    Description2 = MAX(d.Description2),
    Description3 = MAX(d.Description3),
    Description4 = MAX(d.Description4)
    FROM (
    SELECT
    Description1 = CASE WHEN t2.id = t1.id1 THEN t2.[Description] END,
    Description2 = CASE WHEN t2.id = t1.id2 THEN t2.[Description] END,
    Description3 = CASE WHEN t2.id = t1.id3 THEN t2.[Description] END,
    Description4 = CASE WHEN t2.id = t1.id4 THEN t2.[Description] END
    FROM table2 t2
    WHERE t2.id IN (t1.id1, t1.id2, t1.id3, t1.id4)
    ) d
    ) x

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • vinu512 (7/25/2012)


    I think this is the simplest way.

    Look at the execution plan for that, Vinu. There are 9 rows in one table and 4 in the other. The Cartesian Product would, of course, be 9*4 or 36.

    Although you may be able to resolve it with some proper indexing (which I haven't tested), each and every scan of the table in the execution plan has a row count of 36. That's effectively 5 accidental CROSS JOINs which will crush the server for sets that aren't much larger (say 10,000 rows) if the query plan doesn't change.

Still, even for 4*9 rows, I'd fix this because, if it's used a lot, it's going to waste a lot of resources even if some of them are cached.
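
    As a quick way to check those operator-level row counts yourself, here is a sketch (assuming the setup script from the first post); STATISTICS PROFILE returns the actual plan, including a Rows column, as an extra resultset:

    ```sql
    -- Sketch: surface actual per-operator row counts for the 4-LEFT-JOIN query.
    -- Assumes the Table1/table2 setup from the first post.
    SET STATISTICS PROFILE ON;

    SELECT Table1.id1, t1.description,
           Table1.id2, t2.description,
           Table1.id3, t3.description,
           Table1.id4, t4.description
    FROM Table1
    LEFT JOIN table2 t1 ON Table1.id1 = t1.id
    LEFT JOIN table2 t2 ON Table1.id2 = t2.id
    LEFT JOIN table2 t3 ON Table1.id3 = t3.id
    LEFT JOIN table2 t4 ON Table1.id4 = t4.id;

    SET STATISTICS PROFILE OFF;
    -- Look for inflated values in the Rows column of the plan output,
    -- which would indicate the accidental CROSS JOINs described above.
    ```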

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Just my opinion,

    Cadavre's method also has an accidental CROSS JOIN in it according to the row counts coming off of one of the table scans. It also has a relatively expensive sort in it.

    Chris' method has just two table scans and an aggregate. No sorts, no unexpected row counts.

    Yes, all of this could change with higher row counts and the proper application of indexes but, despite its comparative length, I'd use Chris' method. It follows the basic principle that you first get the data and do any necessary calculations in one section of the code and display it (cross tab, in this case) in another.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Nice work Chris and Cadavre.

    Jason...AKA CirqueDeSQLeil
    _______________________________________________
    I have given a name to my pain...MCM SQL Server, MVP
    SQL RNNR
Posting Performance Based Questions - Gail Shaw
    Learn Extended Events

  • Jeff Moden (7/25/2012)


    vinu512 (7/25/2012)


    I think this is the simplest way.

    Look at the execution plan for that, Vinu. There are 9 rows in one table and 4 in the other. The Cartesian Product would, of course, be 9*4 or 36.

    Although you may be able to resolve it with some proper indexing (which I haven't tested), each and every scan of the table in the execution plan has a row count of 36. That's effectively 5 accidental CROSS JOINs which will crush the server for sets that aren't much larger (say 10,000 rows) if the query plan doesn't change.

Still, even for 4*9 rows, I'd fix this because, if it's used a lot, it's going to waste a lot of resources even if some of them are cached.

Absolutely, Mr. Jeff Moden. When the data increases this might hamper performance. I didn't say it's the best option performance-wise...I only meant that it's the simplest....

Edit: I ran both queries against the sample data with STATISTICS ON and these were the results:

    Chris's Query:

    Table 'table2'. Scan count 4, logical reads 4, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Table1'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.

    Sharath's Query:

    Table 'table2'. Scan count 4, logical reads 16, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
    Table 'Table1'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    SQL Server Execution Times:
    CPU time = 0 ms, elapsed time = 0 ms.

But this is for the given sample data. Things sure will change when the data increases. That could be explained by Mr. Jeff Moden with one of his bigger data sets. πŸ™‚
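
    For anyone repeating the comparison, a sketch of how such measurements can be captured (wrap each candidate query in the SET statements and compare the output in the Messages tab):

    ```sql
    -- Sketch: capture IO and timing statistics for a query under test.
    -- Assumes the Table1/table2 setup from the first post.
    SET STATISTICS IO, TIME ON;

    -- run the query under test here, e.g. the original 4-LEFT-JOIN version:
    SELECT Table1.id1, t1.description, Table1.id2, t2.description,
           Table1.id3, t3.description, Table1.id4, t4.description
    FROM Table1
    LEFT JOIN table2 t1 ON Table1.id1 = t1.id
    LEFT JOIN table2 t2 ON Table1.id2 = t2.id
    LEFT JOIN table2 t3 ON Table1.id3 = t3.id
    LEFT JOIN table2 t4 ON Table1.id4 = t4.id;

    SET STATISTICS IO, TIME OFF;
    ```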

    Vinu Vijayan

For better and faster solutions please check..."How to post data/code on a forum to get the best help" - Jeff Moden πŸ˜‰

  • vinu512 (7/25/2012)


    Jeff Moden (7/25/2012)


    vinu512 (7/25/2012)


    I think this is the simplest way.

    Look at the execution plan for that, Vinu. There are 9 rows in one table and 4 in the other. The Cartesian Product would, of course, be 9*4 or 36.

    Although you may be able to resolve it with some proper indexing (which I haven't tested), each and every scan of the table in the execution plan has a row count of 36. That's effectively 5 accidental CROSS JOINs which will crush the server for sets that aren't much larger (say 10,000 rows) if the query plan doesn't change.

Still, even for 4*9 rows, I'd fix this because, if it's used a lot, it's going to waste a lot of resources even if some of them are cached.

Absolutely, Mr. Jeff Moden. When the data increases this might hamper performance. I didn't say it's the best option performance-wise...I only meant that it's the simplest....

    Edit: I ran both the queries for the Sample Data with Statistics ON and these were the results:

    ...

    A couple of nights ago I did a scaled-up test (100,000 rows) of the three offerings so far and was surprised to see that there was little difference between them in the execution time. If you're interested, Vinu, and you have the time, this would be a great little exercise for you. I lost my SSMS window containing the code - did something dopey :hehe:
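
    For anyone who wants to rebuild that scaled-up test, a minimal sketch of a 100,000-row harness (assuming the schema from the first post, with table2 ids running 1 through 9) might look like:

    ```sql
    -- Sketch: load Table1 with 100,000 rows of random ids in the 1-9 range,
    -- so every id should find a match in table2. Assumes the original schema.
    TRUNCATE TABLE Table1;

    INSERT INTO Table1 (id1, id2, id3, id4)
    SELECT TOP (100000)
           ABS(CHECKSUM(NEWID())) % 9 + 1,
           ABS(CHECKSUM(NEWID())) % 9 + 1,
           ABS(CHECKSUM(NEWID())) % 9 + 1,
           ABS(CHECKSUM(NEWID())) % 9 + 1
    FROM sys.all_columns ac1
    CROSS JOIN sys.all_columns ac2;
    ```

    The cross join of `sys.all_columns` with itself is just a convenient row source, in the spirit of the tally-table articles linked in the signatures here.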

    β€œWrite the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
