February 28, 2012 at 9:37 am
Hi,
I'm having some trouble getting the results I need.
Here is my issue.
I have two tables:
CREATE TABLE [dbo].[Test](
[Name] [char](10) NULL,
[id] [int] NULL,
[Title] [varchar](4) NULL
) ON [PRIMARY]
CREATE TABLE [dbo].[Test1](
[Name] [char](10) NULL,
[UserName] [char](5) NULL
) ON [PRIMARY]
INSERT [dbo].[Test] ([Name], [id], [Title]) VALUES (N'ROCK ', 1, N'sql')
INSERT [dbo].[Test] ([Name], [id], [Title]) VALUES (N'ROCK ', 2, N'08r2')
INSERT [dbo].[Test1] ([Name], [UserName]) VALUES (N'ROCK ', N'Xyz ')
So when I run a query like
select T.Name,T.Title,T1.USERNAME from Test as t inner join test1 as t1
on t.Name=t1.name
where id in (1,2)
I get something like
Name-----Title---USERNAME
ROCK-----sql-----Xyz
ROCK-----08r2----Xyz
How can I get the result as
Name-----Title----Version---USERNAME
ROCK------Sql------08r2-----XYZ
Thank you
February 28, 2012 at 10:06 am
Based on your setup you can do the following:
select T.Name,T1.Title, T2.Title AS [Version], T.USERNAME
from test1 as t
join Test as t1
on t1.Name = t.name
and t1.id =1
join Test as t2
on t1.Name = t.name
and t2.id=2
I guess you will want to add more details to your question...
February 28, 2012 at 12:24 pm
Looks like a classic CrossTab problem (see the related link in my signature for details).
SELECT
T.Name,
MAX(CASE WHEN t.id=1 THEN T.Title ELSE NULL END) AS Title,
MAX(CASE WHEN t.id=2 THEN T.Title ELSE NULL END) AS Version,
T1.USERNAME
from Test as t inner join test1 as t1
on t.Name=t1.name
where id in (1,2)
GROUP BY T.Name, T1.USERNAME
February 28, 2012 at 5:54 pm
Why not reverse the SELECT and avoid the INNER JOIN and GROUP BY by using sub-queries?
SELECT
T1.Name
,(SELECT t.TITLE FROM Test t WHERE t.id=1 and t.Name = t1.Name) AS Title
,(SELECT t.TITLE FROM Test t WHERE t.id=2 and t.Name = t1.Name) AS Version
,T1.USERNAME
from test1 t1
It looks a little cleaner and according to the query plan cost for the 3 solutions provided:
1. 32%
2. 21%
3. 12% (this one)
They don't add up to 100% because I did the test using a table variable, so the INSERTs account for the missing %.
** Edited to correct SQL but the query plan results hold ***
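For reference, here is a minimal sketch of what the table-variable version of the setup could look like (the @Test/@Test1 names are placeholders, not necessarily the exact script):
DECLARE @Test TABLE (Name char(10) NULL, id int NULL, Title varchar(4) NULL) -- stand-in for dbo.Test
DECLARE @Test1 TABLE (Name char(10) NULL, UserName char(5) NULL) -- stand-in for dbo.Test1
INSERT @Test (Name, id, Title) VALUES ('ROCK', 1, 'sql'), ('ROCK', 2, '08r2')
INSERT @Test1 (Name, UserName) VALUES ('ROCK', 'Xyz')
-- run the three candidate queries against @Test/@Test1 in the same batch and
-- compare their relative costs in the actual execution plan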
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
February 28, 2012 at 8:51 pm
dwain.c (2/28/2012)
Why not reverse the SELECT and avoid the INNER JOIN and GROUP BY by using sub-queries?
SELECT
T1.Name
,(SELECT t.TITLE FROM Test t WHERE t.id=1 and t.Name = t1.Name) AS Title
,(SELECT t.TITLE FROM Test t WHERE t.id=2 and t.Name = t1.Name) AS Version
,T1.USERNAME
from test1 t1
It looks a little cleaner and according to the query plan cost for the 3 solutions provided:
1. 32%
2. 21%
3. 12% (this one)
They don't add up to 100% because I did the test using a table variable, so the INSERTs account for the missing %.
** Edited to correct SQL but the query plan results hold ***
Although plan costs can help lead you to performance problems that need repair, they should never be used to determine which code is "better", either performance-wise or resource-wise. Your code example may very well fall into the category of "only sees the first iteration" because it's a correlated subquery, where the optimizer can only estimate the cost of a single row.
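One way to see the estimate-versus-actual gap described above is SET STATISTICS PROFILE ON, which returns the plan as a result set with actual Rows and Executes alongside the EstimateRows/EstimateExecutions columns for each operator; a quick sketch using the sub-query version from earlier in the thread:
SET STATISTICS PROFILE ON
SELECT
T1.Name
,(SELECT t.TITLE FROM Test t WHERE t.id=1 and t.Name = t1.Name) AS Title
,(SELECT t.TITLE FROM Test t WHERE t.id=2 and t.Name = t1.Name) AS Version
,T1.USERNAME
from test1 t1
SET STATISTICS PROFILE OFF
-- compare the actual Rows and Executes columns against EstimateRows/EstimateExecutions
-- for the operators under the correlated sub-queries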
--Jeff Moden
Change is inevitable... Change for the better is not.
February 28, 2012 at 9:26 pm
Jeff - I basically agree with you. I was too lazy to run with STATISTICS ON, so I went back and tried it.
In the SQL below I added primary keys to the tables and also blew out the record sets to 50000 records per key before running this test.
CREATE TABLE [dbo].[Test](
[Name] [char](10) NOT NULL,
[id] [int] NOT NULL,
[Title] [varchar](4) NULL,
PRIMARY KEY CLUSTERED
( [Name] ASC, [id] ASC)
) ON [PRIMARY]
CREATE TABLE [dbo].[Test1](
[Name] [char](10) NOT NULL,
[UserName] [char](5) NULL,
PRIMARY KEY CLUSTERED
( [Name] ASC)
) ON [PRIMARY]
INSERT [dbo].[Test] ([Name], [id], [Title]) VALUES (N'ROCK', 1, N'sql')
INSERT [dbo].[Test] ([Name], [id], [Title]) VALUES (N'ROCK', 2, N'08r2')
INSERT [dbo].[Test1] ([Name], [UserName]) VALUES (N'ROCK', N'Xyz ')
-- cascaded CTEs: a compact non-recursive row generator (up to 2^16 = 65,536 rows)
;WITH Nbrs_3( n ) AS ( SELECT 1 UNION SELECT 0 )
,Nbrs_2( n ) AS ( SELECT 1 FROM Nbrs_3 n1 CROSS JOIN Nbrs_3 n2 )
,Nbrs_1( n ) AS ( SELECT 1 FROM Nbrs_2 n1 CROSS JOIN Nbrs_2 n2 )
,Nbrs_0( n ) AS ( SELECT 1 FROM Nbrs_1 n1 CROSS JOIN Nbrs_1 n2 )
,Nbrs ( n ) AS ( SELECT 1 FROM Nbrs_0 n1 CROSS JOIN Nbrs_0 n2 )
INSERT [dbo].[Test]
SELECT SUBSTRING([Name],1,4) + CAST(n AS NVARCHAR(5)), [id], [Title]
FROM [dbo].[Test]
CROSS APPLY (
SELECT ROW_NUMBER() OVER (ORDER BY n) As n
FROM Nbrs) D
WHERE n BETWEEN 1 and 50000
;WITH Nbrs_3( n ) AS ( SELECT 1 UNION SELECT 0 )
,Nbrs_2( n ) AS ( SELECT 1 FROM Nbrs_3 n1 CROSS JOIN Nbrs_3 n2 )
,Nbrs_1( n ) AS ( SELECT 1 FROM Nbrs_2 n1 CROSS JOIN Nbrs_2 n2 )
,Nbrs_0( n ) AS ( SELECT 1 FROM Nbrs_1 n1 CROSS JOIN Nbrs_1 n2 )
,Nbrs ( n ) AS ( SELECT 1 FROM Nbrs_0 n1 CROSS JOIN Nbrs_0 n2 )
INSERT [dbo].[Test1]
SELECT SUBSTRING([Name],1,4) + CAST(n AS NVARCHAR(5)), [UserName]
FROM [dbo].[Test1]
CROSS APPLY (
SELECT ROW_NUMBER() OVER (ORDER BY n) As n
FROM Nbrs) D
WHERE n BETWEEN 1 and 50000
SET STATISTICS IO ON
SET STATISTICS TIME ON
SELECT
T.Name,
MAX(CASE WHEN t.id=1 THEN T.Title ELSE NULL END) AS Title,
MAX(CASE WHEN t.id=2 THEN T.Title ELSE NULL END) AS Version,
T1.USERNAME
from Test as t inner join test1 as t1
on t.Name=t1.name
where id in (1,2)
GROUP BY T.Name, T1.USERNAME
SELECT
T1.Name
,(SELECT t.TITLE FROM Test t WHERE t.id=1 and t.Name = t1.Name) AS Title
,(SELECT t.TITLE FROM Test t WHERE t.id=2 and t.Name = t1.Name) AS Version
,T1.USERNAME
from test1 t1
This produced the following messages from one run of the two final SELECTs:
Warning: Null value is eliminated by an aggregate or other SET operation.
(50001 row(s) affected)
Table 'Test'. Scan count 1, logical reads 726, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Test1'. Scan count 1, logical reads 237, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 172 ms, elapsed time = 562 ms.
(50001 row(s) affected)
Table 'Test'. Scan count 2, logical reads 1452, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Test1'. Scan count 1, logical reads 237, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 94 ms, elapsed time = 547 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
Multiple runs produced varying results, sometimes in favor of one query and sometimes the other. BTW, with that many records and the clustered primary key index, the execution plan costs changed in your favor (38%/62%).
Also, the first solution is a bust because it doesn't return the correct row set in this scenario.
February 29, 2012 at 10:50 am
@dwain:
Which query are you referring to when you write:
Also, the first solution is a bust because it doesn't return the correct row set in this scenario.
February 29, 2012 at 5:11 pm
Sorry to be unclear. I was referring to the query posted by Eugene.
March 1, 2012 at 2:36 am
dwain.c (2/29/2012)
Sorry to be unclear. I referred to the query posted by Eugene.
Try it before commenting. My query returns exactly what the OP asked for.
The rest of the solutions are based on unclear OP requirements. Instead of guessing, I invited the OP to explain exactly what he wants, in detail.
His post requests the following results from his setup:
Name-----Title----Version---USERNAME
ROCK------Sql------08r2-----XYZ
If you take the first row as column headers, and assume that what he really wants are separate columns (not concatenated strings), then the query I gave produces exactly that:
Name-----Title---Version---USERNAME
ROCK-----sql-----08r2------Xyz
March 1, 2012 at 2:39 am
My comment was that it didn't work after I exploded the record set out to 50000 records with my CROSS APPLY INSERTs.
You may try it yourself if you don't believe me.
I agree it did work in the limited case of one record.
March 1, 2012 at 3:19 am
dwain.c (3/1/2012)
My comment was that it didn't work after I exploded the record set out to 50000 records with my CROSS APPLY INSERTs. You may try it yourself if you don't believe me.
I agree it did work in the limited case of one record.
You've exploded the recordset out to 50000 records based on what? Your guess that this is what the OP really wants? Are you sure that your assumption reflects the OP's database?
I'm not going to try it, as I don't have much time for guessing at all possible permutations of setups. Instead of guessing, I asked the OP for more details.
The query I gave was exactly for this purpose, so the OP could understand that the details he provided were not sufficient. But it looks like he has lost interest in his own post...
March 8, 2012 at 6:56 pm
Eugene Elutin (3/1/2012)
dwain.c (3/1/2012)
My comment was that it didn't work after I exploded the record set out to 50000 records with my CROSS APPLY INSERTs. You may try it yourself if you don't believe me.
I agree it did work in the limited case of one record.
You've exploded the recordset out to 50000 records based on what? Your guess that this is what the OP really wants? Are you sure that your assumption reflects the OP's database?
I'm not going to try it, as I don't have much time for guessing at all possible permutations of setups. Instead of guessing, I asked the OP for more details.
The query I gave was exactly for this purpose, so the OP could understand that the details he provided were not sufficient. But it looks like he has lost interest in his own post...
I believe the post was addressed to me. 😉 If I were doing it, I'd have "exploded the record set out to" a million rows to test it. 😀
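For anyone who wants to try that, here is a minimal sketch of one way to push the same sample tables out to about a million distinct names (it assumes freshly created, empty Test and Test1 tables; the cross join of sys.all_columns is just one common row source, not the only option):
;WITH Tally (n) AS
( SELECT TOP (999999) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
  FROM sys.all_columns a CROSS JOIN sys.all_columns b )
INSERT dbo.Test (Name, id, Title)
SELECT 'ROCK' + CAST(n AS varchar(6)), v.id, v.Title -- Name is char(10), so the suffix stays at 6 digits max
FROM Tally
CROSS JOIN (VALUES (1, 'sql'), (2, '08r2')) AS v (id, Title)
;WITH Tally (n) AS
( SELECT TOP (999999) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
  FROM sys.all_columns a CROSS JOIN sys.all_columns b )
INSERT dbo.Test1 (Name, UserName)
SELECT 'ROCK' + CAST(n AS varchar(6)), 'Xyz'
FROM Tally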
--Jeff Moden
Change is inevitable... Change for the better is not.
March 9, 2012 at 6:42 am
Hi guys, the last time I wrote something here I was castigated, so just tell me if my method is bad.
It pivots Test and then joins to Test1.
I know some will say PIVOT consumes more resources, etc.
I'm not sure how to tell by looking at my stats.
SELECT Name, UserName, [1] AS Title, [2] AS Version
FROM (SELECT Test.Name, UserName, id, Title
      FROM Test
      INNER JOIN Test1 ON Test.Name = Test1.Name) AS DataTable
PIVOT (MAX(Title) FOR id IN ([1],[2])) AS PivotTable
The timings fluctuate, but the logical reads are the same as what others are getting.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 6 ms.
(50001 row(s) affected)
Table 'Test'. Scan count 1, logical reads 726, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Test1'. Scan count 1, logical reads 237, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 157 ms, elapsed time = 537 ms.