July 22, 2012 at 9:56 pm
Nagaram (7/22/2012)
Thanks to all for the valuable different methods, which really helped me think from different perspectives. I have gone through all the solutions.
I will check all the above solutions against the original data (one lakh+ records) for performance.
Thanks
~IRK
Be real careful. The one that joins to the table 4 times has an accidental cross join in it that can be pretty tough on TempDB.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 23, 2012 at 12:48 am
Mea culpa, mea culpa, mea maxima culpa!
Dear Jeff, forgive me, for I have sinned.
I hope you don't kick me out of the anti RBAR brotherhood.
Actually the purpose was to demonstrate there are other ways of doing this and that people really need to test and compare performance to fit their cases.
[edited] why do I always type preformance instead of performance
Johan
Learn to play, play to learn !
Don't drive faster than your guardian angel can fly ...
but keeping both feet on the ground won't get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution :-)
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
July 23, 2012 at 1:04 am
ALZDBA (7/23/2012)
Dear, Jeff, forgive me because I have sinned.
I hope you don't kick me out of the anti RBAR brotherhood.
He almost revoked my anti-RBAR card once for sinning as you have, my son...
But Jeff is kind-hearted, so he will probably let it pass... This time. :-)
Having options shines a floodlight on such sins.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
July 23, 2012 at 2:35 am
You guys have been having way too much fun while I've been away.
The plan for this query (once the missing ORDER BY is in place)
lokeshvij (7/22/2012)
Here it is (there can be another better solution as well)
WITH cte1 AS
(
    SELECT col1, col2 FROM @t
),
cte2 AS
(
    SELECT col1, col2 FROM @t WHERE col2 IS NOT NULL
)
SELECT col1,
       COALESCE(col2, (SELECT TOP 1 col2
                       FROM cte2
                       WHERE cte2.col1 > cte1.col1
                       ORDER BY cte2.col1 ASC)) AS col22
FROM cte1;
and for mine with the OUTER APPLY are virtually identical and appear to have the same cost. The optimiser is smart enough to turn both the correlated subquery and the OUTER APPLY into NL joins, leaving the Top N Sort in each case as the highest-costed operation.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
July 25, 2012 at 4:12 am
Jeff Moden (7/22/2012)
ChrisM@home (7/22/2012)
Without an ORDER BY, TOP is meaningless:
SELECT
t1.col1,
col2 = ISNULL(t1.col2, x.col2)
FROM @T t1
OUTER APPLY (
SELECT TOP 1
t2.col2
FROM @T t2
WHERE t2.col1 > t1.col1
AND t2.col2 IS NOT NULL
AND t1.col2 IS NULL
ORDER BY t2.col1 ASC
) x
This is pretty cool. With the right kind of indexing (nothing special, just the PK in this case), if you change it to an UPDATE and add a WHERE clause so it only works on rows WHERE t1.Col2 IS NULL, it actually beats the Quirky Update that uses a safety counter. Not by much, but clearly a win for two reasons: obviously, it's faster, and it's supported code.
For doing the same thing but in the same direction as the clustered index instead of the reverse, an "unfettered" QU still comes in 6 times faster, processing a million rows in about 2 seconds on my old machine. Still, the method you wrote comes in at 12 seconds on a million rows with a 28% modification rate (28% of Col2 is NULL). Unlike the QU, you can control which rows get updated, so you don't fire any triggers on rows that don't need updating.
And, it's about as fast as some of the new "previous row" functionality available in 2012.
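[For comparison with the "previous row" remark, here is a minimal SQL Server 2012 sketch, not from this thread, of the same fill using window functions against the MytestTable layout used later in the thread. The descending running COUNT of non-NULL values assigns one group number to each non-NULL value plus the run of NULLs that precedes it, so a windowed MIN per group carries the next non-NULL col2 backward:]

```sql
-- Sketch (assumes MytestTable(col1 INT PK, col2 INT NULL) as built later
-- in this thread). Each group holds exactly one non-NULL col2, so MIN
-- ignores the NULLs and returns that value for every row in the group.
SELECT s.col1,
       col2 = MIN(s.col2) OVER (PARTITION BY s.grp)
FROM (
    SELECT col1, col2,
           -- running count of non-NULLs, scanning from highest col1 down
           grp = COUNT(col2) OVER (ORDER BY col1 DESC
                                   ROWS UNBOUNDED PRECEDING)
    FROM MytestTable
) s
ORDER BY s.col1;
```

[On the 10-row sample, rows 3, 4 and 5 fall into the same group as row 6 and pick up the value 6; row 10 has no following non-NULL value, so its group has no non-NULL member and it stays NULL.]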
All of that is "geekinese" for "Well Done!" :-)
Thanks Jeff. I have a great teacher :-)
July 26, 2012 at 2:33 am
Then Chris's method works (very fast in the presence of a Clustered Index on Col1) with just a couple o' tweaks...
UPDATE t1
SET col2 = x.col2
-- SELECT t1.col1, col2 = ISNULL(t1.col2, x.col2)
FROM @T t1
OUTER APPLY
(
SELECT TOP 1
t2.col2
FROM @T t2
WHERE t2.col1 > t1.col1
AND t2.col2 IS NOT NULL
AND t1.col2 IS NULL
ORDER BY t2.col1 ASC
) x
WHERE t1.Col2 IS NULL --Use this if updating
;
You could drop the WHERE clause and use CROSS APPLY to eliminate the non-updated rows:
UPDATE t1
SET col2 = x.col2
-- SELECT t1.col1, col2 = ISNULL(t1.col2, x.col2)
FROM @T t1
CROSS APPLY
(
SELECT TOP 1
t2.col2
FROM @T t2
WHERE t2.col1 > t1.col1
AND t2.col2 IS NOT NULL
AND t1.col2 IS NULL
ORDER BY t2.col1 ASC
) x
--WHERE t1.Col2 IS NULL --Use this if updating
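[As a sanity check on that CROSS APPLY behaviour, here is a sketch using the 10-row @T sample shown earlier in the thread: CROSS APPLY discards any outer row for which the applied subquery returns nothing, so rows with a populated col2, and trailing NULLs with no later non-NULL value, drop out on their own without the outer WHERE:]

```sql
-- Sketch: count the rows CROSS APPLY would actually touch
-- (assumes the 10-row @T sample data from earlier in the thread).
SELECT COUNT(*) AS rows_that_would_update
FROM @T t1
CROSS APPLY
(
    SELECT TOP 1 t2.col2
    FROM @T t2
    WHERE t2.col1 > t1.col1
      AND t2.col2 IS NOT NULL
      AND t1.col2 IS NULL     -- empty result for already-populated rows
    ORDER BY t2.col1 ASC
) x;
-- With that sample this counts 4 rows (col1 = 1, 3, 4, 5);
-- col1 = 10 is NULL but has no following non-NULL row, so it is excluded.
```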
August 29, 2012 at 7:26 am
Way too late (due to summer holidays), but here are the results on my dev server ...
I loaded 1,000,000 rows into MytestTable:
use SSC_Test
/*
CREATE TABLE MytestTable ( col1 INT, col2 INT );
INSERT INTO MytestTable ( col1, col2 )
SELECT 1, NULL
UNION ALL SELECT 2, 2
UNION ALL SELECT 3, NULL
UNION ALL SELECT 4, NULL
UNION ALL SELECT 5, NULL
UNION ALL SELECT 6, 6
UNION ALL SELECT 7, 7
UNION ALL SELECT 8, 8
UNION ALL SELECT 9, 9
UNION ALL SELECT 10, NULL;
SELECT * FROM MytestTable
*/
/*
--===== Conditionally drop the test tables to make reruns easier in SSMS
IF OBJECT_ID('MytestTable','U') IS NOT NULL DROP TABLE MytestTable;
IF OBJECT_ID('MytestCase','U') IS NOT NULL DROP TABLE MytestCase;
GO
--===== Declare and set a variable for the desired number of test rows.
-- I did it this way so that folks using 2K5 don't have to make
-- any changes to get it to work.
DECLARE @Rows INT;
SET @Rows = 1000000;
--===== Create and populate the test table on-the-fly.
SELECT TOP (@Rows)
Col1 = ISNULL(CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS INT),0),
Col2 = CAST(ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS INT)
INTO MytestTable
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
;
--===== Add a PK/Clustered Index to what looks like should be the PK.
ALTER TABLE MytestTable
ADD PRIMARY KEY CLUSTERED (Col1)
;
-- added alzdba
create index x_MytestTable_col2 on MytestTable (col2)
;
--===== Try to nullify Col2 in about 1/3rd of the rows.
-- About 28% will actually be nullified due to dupe updates.
-- The separate "MytestCase" table gets rid of a major performance problem here.
SELECT TOP (@Rows/3)
Col1ToNullify = ABS(CHECKSUM(NEWID()))%@Rows+1
INTO MytestCase
FROM MytestTable
;
UPDATE tgt
SET Col2 = NULL
FROM MytestTable tgt
INNER JOIN MytestCase n
ON tgt.Col1 = n.Col1ToNullify
;
SELECT *
into MyTestTable_Save
FROM MytestTable
;
SELECT top 1000 *
FROM MytestTable
order by col1
;
*/
/*
SELECT t1.col1
, col2 = ISNULL(t1.col2, x.col2)
FROM MytestTable t1
OUTER APPLY (
SELECT TOP 1
t2.col2
FROM MytestTable t2
WHERE t2.col1 > t1.col1
AND t2.col2 IS NOT NULL
AND t1.col2 IS NULL
ORDER BY t2.col1 ASC
) x
Select T1.col1
, MIN(T2.col1) NextCol1
from MytestTable T1
inner join MytestTable T2
on T2.col1 > T1.col1
and T1.col2 is null
and T2.col2 is not null
group by T1.col1
order by T1.col1
*/
print convert(char(26), getdate(), 121)
begin tran
print convert(char(26), getdate(), 121)
;
--with cte1
-- as (
-- select col1
-- , col2
-- from MytestTable
-- ) ,
with cte2
as (
select col1
, col2
from MytestTable
where col2 is not null
)
update MytestTable
--set col2= coalesce(col2, (
set col2 = (
select top 1
col2
from cte2
where cte2.col1 > MytestTable.col1
ORDER BY cte2.col1 ASC
)
--from cte1
where col2 is null ;
print convert(char(26), getdate(), 121)
select *
from MytestTable
where col2 is null
order by col1
rollback tran
go
print convert(char(26), getdate(), 121)
begin tran
print convert(char(26), getdate(), 121)
UPDATE t1
SET col2 = x.col2
-- SELECT t1.col1, col2 = ISNULL(t1.col2, x.col2)
FROM MytestTable t1
OUTER APPLY
(
SELECT TOP 1
t2.col2
FROM MytestTable t2
WHERE t2.col1 > t1.col1
AND t2.col2 IS NOT NULL
AND t1.col2 IS NULL
ORDER BY t2.col1 ASC
) x
WHERE t1.Col2 IS NULL --Use this if updating
;
print convert(char(26), getdate(), 121)
select *
from MytestTable
where col2 is null
order by col1
rollback tran
go
print convert(char(26), getdate(), 121)
begin tran
Update U
set col2 = Ur.col2
from MytestTable U
inner join (
Select T1.col1
, MIN(T2.col1) NextCol1
from MytestTable T1
inner join MytestTable T2
on T2.col1 > T1.col1
and T1.col2 is null
and T2.col2 is not null
group by T1.col1
) R
on R.col1 = U.col1
inner join MytestTable Ur
on Ur.col1 = R.NextCol1
print convert(char(26), getdate(), 121)
select *
from MytestTable
where col2 is null
order by col1
rollback tran
go
print convert(char(26), getdate(), 121)
Execution results:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 3167 ms, elapsed time = 3197 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
2012-07-31 16:23:05.993
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
2012-07-31 16:23:05.993
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'MytestTable'. Scan count 2, logical reads 2629471, physical reads 0, read-ahead reads 7, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 2, logical reads 443523471, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(283721 row(s) affected)
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 38789446 ms, elapsed time = 39152936 ms.
2012-08-01 03:15:41.270
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1 row(s) affected)
Table 'MytestTable'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 4165 ms, elapsed time = 5243 ms.
SQL Server parse and compile time:
CPU time = 3198 ms, elapsed time = 3201 ms.
2012-08-01 03:15:49.740
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
2012-08-01 03:15:49.743
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'MytestTable'. Scan count 2, logical reads 2629471, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 2, logical reads 443523471, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(283721 row(s) affected)
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 38283769 ms, elapsed time = 38666997 ms.
2012-08-01 14:00:19.030
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1 row(s) affected)
Table 'MytestTable'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 4196 ms, elapsed time = 5961 ms.
SQL Server parse and compile time:
CPU time = 3339 ms, elapsed time = 3365 ms.
2012-08-01 14:00:28.400
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Table 'MytestTable'. Scan count 283730, logical reads 302152165, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(283720 row(s) affected)
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 70863734 ms, elapsed time = 50218943 ms.
2012-08-02 03:57:30.320
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
(1 row(s) affected)
Table 'MytestTable'. Scan count 1, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(1 row(s) affected)
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 4118 ms, elapsed time = 4180 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
2012-08-02 03:57:34.547
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 0 ms.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
Elapsed times for the three updates:
10:52:36 (correlated-subquery UPDATE)
10:44:30 (OUTER APPLY UPDATE)
13:57:02 (GROUP BY / self-join UPDATE)
Johan