December 1, 2013 at 9:47 pm
Comments posted to this topic are about the item High-Performance Transact-SQL with Window Functions
www.learn-with-video-tutorials.com - video tutorials
December 2, 2013 at 2:52 am
You have chosen a strange title for the article. It starts with "High-Performance..." but there is no performance comparison between windowed functions and other approaches. The article itself is interesting and might be quite instructive, but its title is a bit confusing.
December 2, 2013 at 4:48 am
How does the optimiser handle these queries/techniques vs. standard methods?
December 2, 2013 at 5:37 am
OldFashionGang - Thank you for your comment. Next time I will try to choose a better title.
www.learn-with-video-tutorials.com - video tutorials
December 2, 2013 at 5:41 am
davidadlington78 - maybe the next article will be about this.
www.learn-with-video-tutorials.com - video tutorials
December 2, 2013 at 10:36 am
This is the simplest and most straightforward example for gaps and islands that I have ever seen. Thank you VERY much for posting.
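For anyone reading along without the article handy, the core of the technique is roughly this (a minimal sketch of my own, not the article's code; it assumes a table dbo.T1 with a single int column col1 and no duplicate values):
-- Consecutive values share the same (col1 - row number) difference,
-- so grouping by that difference collapses each island into one row.
SELECT MIN(col1) AS beginIsland, MAX(col1) AS endIsland
FROM (
    SELECT col1,
           col1 - ROW_NUMBER() OVER (ORDER BY col1) AS grp
    FROM dbo.T1
) AS r
GROUP BY grp
ORDER BY beginIsland;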
December 2, 2013 at 11:55 am
/* Occam says, do we really need these new T-SQL constructs? And, when delivering a solution, anticipate that the client may ask some follow-on questions. Thus, it's often worth the I/O and table space to save the intermediate results.
*/
alter table dbo.T1 add seq int identity, prevStep int, nextStep int, stepType varchar(10)
go
-- The first row has no predecessor, so flag it with prevStep = 0.
update t1
set prevStep = 0
where seq = 1
-- prevStep = distance from the previous row's value.
update a
set prevStep = abs(b.col1 - a.col1)
from t1 a
join t1 b
on a.seq = b.seq + 1
where a.prevStep is null
-- nextStep = distance to the next row's value (NULL on the last row).
update a
set nextStep = b.col1 - a.col1
from t1 a
join t1 b
on a.seq = b.seq - 1
-- Classify each row as the beginning, middle, or end of an island.
update t1
set stepType =
case
when (prevStep = 0 or prevStep > 1) and (nextStep > 1 or nextStep is null) then 'begin/end'
when prevStep = 0 or prevStep > 1 then 'begin'
when prevStep = 1 and nextStep = 1 then 'middle'
else 'end'
end
IF OBJECT_ID('dbo.T1_islands', 'U') IS NOT NULL drop table t1_islands
-- One row per island; single-value islands get their end point immediately.
select col1 as beginIsland
, case stepType when 'begin/end' then col1 else 0 end as endIsland
into t1_islands
from t1
where stepType like 'begin%'
-- For multi-value islands, close each one with the next 'end' row.
update i
set endIsland = b.col1
from t1 a
join t1_islands i
on a.col1 = i.beginIsland
join t1 b
on b.seq = (select min(seq) from t1 where stepType = 'end' and seq > a.seq)
where i.endIsland = 0
select * from t1_islands
--find the biggest islands
select beginIsland, endIsland
from t1_islands
where endIsland - beginIsland = (select max(endIsland - beginIsland) from t1_islands)
--find the biggest jump between islands (the gap itself spans one value less)
select max(nextStep) from t1
_________________
"Look, those sheep have been shorn."
data analyst replies, "On the sides that we can see."
December 2, 2013 at 12:14 pm
Dense_Rank() would be a better choice for your windowing function as it would permit duplicate values to exist in your data islands.
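For example, here is a minimal sketch (mine, not from the article) against the same assumed dbo.T1(col1) table; because DENSE_RANK() assigns equal ranks to ties, duplicate col1 values fall into the same island instead of breaking the sequence:
-- Swap ROW_NUMBER() for DENSE_RANK() so duplicates keep the difference constant.
SELECT MIN(col1) AS beginIsland, MAX(col1) AS endIsland
FROM (
    SELECT col1,
           col1 - DENSE_RANK() OVER (ORDER BY col1) AS grp
    FROM dbo.T1
) AS d
GROUP BY grp
ORDER BY beginIsland;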
December 2, 2013 at 6:11 pm
OldFashionGang (12/2/2013)
You have chosen a strange title for the article. It starts with "High-Performance..." but there is no performance comparison between windowed functions and other approaches. The article itself is interesting and might be quite instructive, but its title is a bit confusing.
I have to agree with the above. In your article, you indicate that this:
SELECT orderid, custid, val,
CAST(100. * val / SUM(val) OVER(PARTITION BY custid) AS NUMERIC(5, 2)) AS pctcust,
val - AVG(val) OVER(PARTITION BY custid) AS diffcust,
CAST(100. * val / SUM(val) OVER() AS NUMERIC(5, 2)) AS pctall,
val - AVG(val) OVER() AS diffall
FROM dbo.tb_OrderValues;
Is equivalent to this:
WITH CustAggregates AS
(
SELECT custid, SUM(val) AS sumval, AVG(val) AS avgval
FROM dbo.tb_OrderValues
GROUP BY custid
),
GrandAggregates AS
(
SELECT SUM(val) AS sumval, AVG(val) AS avgval
FROM dbo.tb_OrderValues
)
SELECT O.orderid, O.custid, O.val,
CAST(100. * O.val / CA.sumval AS NUMERIC(5, 2)) AS pctcust,
O.val - CA.avgval AS diffcust,
CAST(100. * O.val / GA.sumval AS NUMERIC(5, 2)) AS pctall,
O.val - GA.avgval AS diffall
FROM dbo.tb_OrderValues AS O
JOIN CustAggregates AS CA
ON O.custid = CA.custid
CROSS JOIN GrandAggregates AS GA;
Which, from the perspective of the output results, is true. Now try the following 1,000,000-row test harness:
IF OBJECT_ID('tb_OrderValues') IS NOT NULL
DROP TABLE tb_OrderValues;
GO
CREATE TABLE tb_OrderValues(
orderid int,
custid int,
empid int,
shipperid int,
orderdate datetime,
requireddate datetime,
shippeddate datetime,
qty int,
val numeric(12, 2)
);
WITH Tally (n) AS
(
SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM sys.all_columns a CROSS JOIN sys.all_columns b
)
INSERT INTO tb_OrderValues
SELECT a.n, b.n, 100*a.n, 100*b.n
,CAST('2006-01-01' AS DATETIME)+a.n-1
,CAST('2006-01-01' AS DATETIME)+a.n+60
,CAST('2006-01-01' AS DATETIME)+a.n+30
,1+ABS(CHECKSUM(NEWID()))%100
,1+ABS(CHECKSUM(NEWID()))%1000
FROM Tally a
CROSS APPLY Tally b;
DECLARE
@orderid int,
@custid int,
@pctcust NUMERIC(5,2),
@diffcust numeric(12, 2),
@pctall NUMERIC(5,2),
@diffall numeric(12, 2),
@shippeddate datetime,
@qty int,
@val numeric(12, 2);
PRINT 'Pre-aggregate';
SET STATISTICS TIME ON;
WITH CustAggregates AS
(
SELECT custid, SUM(val) AS sumval, AVG(val) AS avgval
FROM dbo.tb_OrderValues
GROUP BY custid
),
GrandAggregates AS
(
SELECT SUM(val) AS sumval, AVG(val) AS avgval
FROM dbo.tb_OrderValues
)
SELECT @orderid=O.orderid, @custid=O.custid, @val=O.val,
@pctcust=CAST(100. * O.val / CA.sumval AS NUMERIC(5, 2)), -- AS pctcust,
@diffcust=O.val - CA.avgval, -- AS diffcust,
@pctall=CAST(100. * O.val / GA.sumval AS NUMERIC(5, 2)), -- AS pctall,
@diffall=O.val - GA.avgval -- AS diffall
FROM dbo.tb_OrderValues AS O
JOIN CustAggregates AS CA
ON O.custid = CA.custid
CROSS JOIN GrandAggregates AS GA;
SET STATISTICS TIME OFF;
PRINT 'Aggregate window functions';
SET STATISTICS TIME ON;
SELECT @orderid=orderid, @custid=custid, @val=val,
@pctcust=CAST(100. * val / SUM(val) OVER(PARTITION BY custid) AS NUMERIC(5, 2)), -- AS pctcust,
@diffcust=val - AVG(val) OVER(PARTITION BY custid), -- AS diffcust,
@pctall=CAST(100. * val / SUM(val) OVER() AS NUMERIC(5, 2)), -- AS pctall,
@diffall=val - AVG(val) OVER() --AS diffall
FROM dbo.tb_OrderValues;
SET STATISTICS TIME OFF;
GO
DROP TABLE tb_OrderValues;
The timing results produce the following:
(1000000 row(s) affected)
Pre-aggregate
SQL Server Execution Times:
CPU time = 4494 ms, elapsed time = 2507 ms.
Aggregate window functions
SQL Server Execution Times:
CPU time = 11716 ms, elapsed time = 8533 ms.
Which indicates that the pre-aggregating approach is more than 70% faster than the window aggregates.
I've generally found this to be the case.
You should write every query as if it will be executed 1,000,000 times per day on 1,000,000 rows of data.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
December 2, 2013 at 7:28 pm
Having just skimmed over this posting, I'm not sure whether it's relevant, but have you looked at the "quirky update" method? Yes, it's a bit dirty, but if you must have performance, then it's worth trying.
See: Solving the Running Total and Ordinal Rank Problems: http://www.sqlservercentral.com/articles/T-SQL/68467
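For reference, the pattern looks roughly like this (a minimal sketch only, not the article's exact code; the runningTotal column is assumed to have been added first, and the technique relies on undocumented update-order behavior, so the clustered-index, TABLOCKX, and MAXDOP 1 safety rules from that article all apply):
-- Quirky update: accumulate a running total in clustered-index order.
-- Assumes dbo.T1 has a clustered index on seq and a runningTotal int column.
DECLARE @running int = 0;

UPDATE dbo.T1
SET @running = runningTotal = @running + col1
FROM dbo.T1 WITH (TABLOCKX)
OPTION (MAXDOP 1);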
December 3, 2013 at 7:36 am
This article is a complete ripoff from Itzik Ben-Gan's book Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions! Nowhere is there any credit to the original author ... Shame, shame, shame!
Large parts of the text and some of the code examples are an exact copy/paste from the book. See for yourself: http://it-ebooks.info/book/1140/
What's SQLServerCentral's opinion on plagiarism?
PS. Buy the book written by Itzik Ben-Gan, I did it and it's definitely worth it!
December 3, 2013 at 5:20 pm
GordonLiddy (12/3/2013)
This article is a complete ripoff from Itzik Ben-Gan's book Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions! Nowhere is there any credit to the original author ... Shame, shame, shame!
Large parts of the text and some of the code examples are an exact copy/paste from the book. See for yourself: http://it-ebooks.info/book/1140/
What's SQLServerCentral's opinion on plagiarism?
PS. Buy the book written by Itzik Ben-Gan, I did it and it's definitely worth it!
I have alerted the SSC editor. I have not read the book, unfortunately.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
December 4, 2013 at 7:54 am
People reading this thread: Beware! As GordonLiddy noted a few posts earlier, the text of this article is a direct copy/paste from Itzik Ben-Gan's fantastic book "Microsoft SQL Server 2012 High-Performance T-SQL Using Window Functions", Chapter 1 (the only changes I saw were table names being changed in the examples). You can see this for yourself - go to http://www.amazon.com/Microsoft-Server-High-Performance-Window-Functions/dp/0735658366 and utilize the "Look Inside" feature to browse Chapter 1.
While you're at that link, just go ahead and order the book. If you're interested in this "article", you'll love the book. It's worth it.
Wayne
Microsoft Certified Master: SQL Server 2008
Author - SQL Server T-SQL Recipes
December 4, 2013 at 8:50 am
I have removed the article and linked to the book.
My apologies for the issues.
December 4, 2013 at 8:54 am
The good news is I still learned something very valuable and ultimately found a more credible resource.