December 5, 2013 at 1:45 pm
We use this in an update trigger to perform additional processing on affected child keys:
insert into @IntermediateData
select ROW_NUMBER () OVER (order by D.FOLDER_KEY),
D.FOLDER_KEY, D.CHILD_KEY, D.CHILD_TYPE,
I.CHILD_KEY, I.CHILD_TYPE
from DELETED D
join INSERTED I on D.CHILD_KEY = I.CHILD_KEY
and D.CHILD_TYPE = I.CHILD_TYPE
where D.CHILD_TYPE in (2,3)
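For context, here's a minimal sketch of the trigger shell a statement like that might live in; the trigger and table names and the @IntermediateData column names are assumptions for illustration only:
CREATE TRIGGER trg_FolderChild_Update ON dbo.FOLDER_CHILD -- hypothetical table name
AFTER UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- working table for the affected child keys; column names are assumed
DECLARE @IntermediateData TABLE
(
RowNum BIGINT,
FOLDER_KEY INT,
OLD_CHILD_KEY INT,
OLD_CHILD_TYPE INT,
NEW_CHILD_KEY INT,
NEW_CHILD_TYPE INT
);
-- the INSERT ... SELECT shown above goes here
-- ...additional processing on the affected child keys follows...
END;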
December 5, 2013 at 1:49 pm
Most SQL people I've come across are more than happy to share their knowledge, and more often than not their code, but be courteous and ask first, and ALWAYS give credit where it's due.
DECLARE @tmp_Strat TABLE (StateCode CHAR(2), SomeValue INT);
INSERT INTO @tmp_Strat (StateCode, SomeValue) VALUES
-- Cite Your Source: http://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_population
('CA', 38041430), ('TX', 26059203), ('NY', 19570261), ('FL', 19317568), ('IL', 12875255);
SELECT StateCode
, SomeValue
, PopulationRank = RANK() OVER (ORDER BY SomeValue DESC)
FROM @tmp_Strat;
_____________________________________________________________________
- Nate
December 5, 2013 at 2:50 pm
I think sqlservercentral did the right thing. The author of the article should have given due credit to Itzik Ben-Gan.
-------------------------------
/* Find True Duplicates from a table and delete them */
;WITH MyDuplicateCTE
AS
(
SELECT *, ROW_NUMBER() OVER (PARTITION BY [Code],[Description]
ORDER BY [Code],[Description]) Ps
FROM [dbo].[Table1]
)
DELETE FROM MyDuplicateCTE WHERE Ps > 1;
GO
-------------------------------
December 5, 2013 at 2:54 pm
These two lines in the SELECT statement allowed me to add calls per shift and calls per unit to each line of detail. Without them, I would have had to calculate the group totals in separate queries, then join back to the dataset with the detail:
, count(*) over (partition by vw_CODS_DW.Business_Unit, vw_CODS_DW.Unit_Number, ActualStart) as 'CallsPerShift'
, count(*) over (partition by vw_CODS_DW.Business_Unit, vw_CODS_DW.Unit_Number) as 'CallsPerUnit'
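For context, a hedged sketch of how those two lines might sit in the full detail query (the detail column Incident_Number and the dbo schema are assumptions; vw_CODS_DW and the partitioning columns come from the snippet above):
SELECT vw_CODS_DW.Business_Unit
, vw_CODS_DW.Unit_Number
, ActualStart
, Incident_Number -- hypothetical detail column
, count(*) over (partition by vw_CODS_DW.Business_Unit, vw_CODS_DW.Unit_Number, ActualStart) as 'CallsPerShift'
, count(*) over (partition by vw_CODS_DW.Business_Unit, vw_CODS_DW.Unit_Number) as 'CallsPerUnit'
FROM dbo.vw_CODS_DW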
December 5, 2013 at 3:27 pm
jim-1022345 (12/4/2013)
Window Functions are a great addition to TSQL. I use them all the time now.
WITH RandomData AS
(
SELECT 1 AS Number
UNION ALL
SELECT Number + 1
FROM RandomData
WHERE Number < 200
)
SELECT
Number,
ROW_NUMBER() OVER(PARTITION BY Number%10 ORDER BY Number) AS AlternateNTile
FROM RandomData
ORDER BY 1
OPTION (MAXRECURSION 200)
You might want to take a look at the following article for why you shouldn't use recursive CTEs that count. 😉
http://www.sqlservercentral.com/articles/T-SQL/74118/
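If it helps, here's a minimal non-recursive sketch of the same 1-200 sequence built with ROW_NUMBER() over a cross join instead of a recursive CTE (sys.all_columns is just a convenient row source; any sufficiently large table works):
WITH cteTally AS
(
SELECT TOP (200)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS Number
FROM sys.all_columns ac1
CROSS JOIN sys.all_columns ac2
)
SELECT Number,
ROW_NUMBER() OVER (PARTITION BY Number % 10 ORDER BY Number) AS AlternateNTile
FROM cteTally
ORDER BY Number;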
--Jeff Moden
Change is inevitable... Change for the better is not.
December 5, 2013 at 3:30 pm
smutallib (12/4/2013)
-- The following script will generate times between a given start time and end time
-- The ROW_NUMBER() window function at the end is used to generate a serial number
DECLARE @T_AVAILABLE_TIME TABLE(T_RUNTIME TIME,POST_DATE DATETIME)
DECLARE @START_TIME TIME,
@END_TIME TIME,
@Inc INT;
SET @Inc = 10; --Incrementing by 10 minutes
SET @START_TIME='09:00:00'
SET @END_TIME='11:00:00';
WITH Vals AS (
SELECT @START_TIME RunTime
UNION ALL
SELECT DATEADD(mi,@Inc,RunTime)
FROM Vals
WHERE DATEADD(mi,@Inc,RunTime) < @END_TIME
)
INSERT INTO @T_AVAILABLE_TIME(T_RUNTIME,POST_DATE)
SELECT LEFT(RunTime,8) AS RUNTIME,GETDATE()
FROM Vals
OPTION (MAXRECURSION 0)
SELECT ROW_NUMBER() OVER(ORDER BY POST_DATE) AS SERIAL,
LEFT(T_RUNTIME,8) AS FROM_TIME,
LEFT(DATEADD(MI,10,T_RUNTIME),8) AS TO_TIME
FROM @T_AVAILABLE_TIME A
The code above is another example of a recursive CTE that counts. Please see the following article for why you should avoid such things.
http://www.sqlservercentral.com/articles/T-SQL/74118/
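A minimal sketch of the same time-slot list built without recursion, using a ROW_NUMBER()-based tally plus DATEADD (same @START_TIME, @END_TIME, and @Inc values as the quoted code; sys.all_columns is only used as a convenient row source):
DECLARE @START_TIME TIME = '09:00:00',
@END_TIME TIME = '11:00:00',
@Inc INT = 10; -- increment in minutes
WITH cteTally AS
(
SELECT TOP (DATEDIFF(mi, @START_TIME, @END_TIME) / @Inc)
ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS N
FROM sys.all_columns
)
SELECT ROW_NUMBER() OVER (ORDER BY N) AS SERIAL,
LEFT(DATEADD(mi, N * @Inc, @START_TIME), 8) AS FROM_TIME,
LEFT(DATEADD(mi, (N + 1) * @Inc, @START_TIME), 8) AS TO_TIME
FROM cteTally;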
--Jeff Moden
Change is inevitable... Change for the better is not.
December 5, 2013 at 3:57 pm
This example uses T-SQL windowing to calculate the median price for a bundle of services in one particular service line. I use windowing functions all the time to calculate the median value of a collection.
Thanks,
Andre Ranieri
USE TempDB
GO
IF OBJECT_ID('dbo.SalesData', 'U') IS NOT NULL
DROP TABLE dbo.SalesData
GO
CREATE TABLE dbo.SalesData
(
Counter int NOT NULL IDENTITY (1,1),
AccountNum int,
ServiceLine varchar(20),
SaleAmount decimal (19,4),
CONSTRAINT PK_sample_table PRIMARY KEY (Counter)
)
-- Populate table with sample sales data for two customers and multiple service lines.
INSERT INTO dbo.SalesData (AccountNum, ServiceLine, SaleAmount)
Values(1000, 'Lawn Care', 39.95), (1000, 'Lawn Care', 39.95), (1000, 'Lawn Care', 39.95), (1000, 'Lawn Care', 39.95),(1000, 'Lawn Care', 39.95),(1000, 'Lawn Care', 42.95),(1000, 'Lawn Care', 42.95), (1000, 'Lawn Care', 42.95)
, (2000, 'Tree Care', 45.95), (2000, 'Tree Care', 49.95), (2000, 'Tree Care', 49.95), (2000, 'Tree Care', 49.95),(2000, 'Tree Care', 49.95),(2000, 'Tree Care', 62.95),(2000, 'Tree Care', 55.95), (2000, 'Tree Care', 52.95)
, (2000, 'Pest Control', 89.95), (2000, 'Pest Control', 89.95), (2000, 'Pest Control', 109.95);
-- Find the median sales amount for each customer and service line
WITH Median AS
(
SELECT AccountNum, ServiceLine, SaleAmount,
ROW_NUMBER() OVER(PARTITION BY AccountNum, ServiceLine ORDER BY SaleAmount) AS RowNum,
COUNT(*) OVER(PARTITION BY AccountNum, ServiceLine) AS Cnt
FROM dbo.SalesData
)
SELECT AccountNum, ServiceLine, MAX(SaleAmount) AS MedianPrice
FROM Median
WHERE RowNum IN((Cnt + 1) / 2, (Cnt + 2) / 2)
GROUP BY AccountNum, ServiceLine
December 5, 2013 at 4:21 pm
1) Bummer, I was just gonna use the stuff... Need a book!!
2) SELECT BusinessID,
AreaID,
DATEPART(yy, GetDate()) AS [Year],
SalesYTD,
AVG(SalesYTD) OVER (PARTITION BY AreaID ORDER BY DATEPART(yy, GetDate())) AS MovingAvgYTD,
SUM(SalesYTD) OVER (PARTITION BY AreaID ORDER BY DATEPART(yy, GetDate())) AS TotalYTD
FROM MyTable
WHERE AreaID < 0
December 5, 2013 at 5:23 pm
Excellent reference here now on how OVER can be used. This was a great idea!
M.
Not all gray hairs are Dinosaurs!
December 5, 2013 at 5:35 pm
I hope more books get plagiarized so we can win more books!
Kidding.
Here's my take on a query with an OVER clause:
IF OBJECT_ID('tempdb..#HEY') IS NOT NULL DROP TABLE #HEY
CREATE TABLE #HEY (Region CHAR(10),Age INT, CustomerName VARCHAR(50))
INSERT INTO #HEY
VALUES ('A',29,'Morgan'),('A',15,'Marley'),('A',21,'Ashley'),('B',29,'Roger'),('B',20,'Claire')
-- Create a rank by Region of all customers by Age in descending order
SELECT ROW_NUMBER() OVER (PARTITION BY Region
ORDER BY Age DESC) AS Rank
,*
FROM #HEY
December 5, 2013 at 5:38 pm
Deliver('Book') = Good * 4
And now for Matrix Multiplication with bonus Window Functions. 😀
if object_id('tempdb..#a') is not null begin drop table tempdb..#a end ;
create table #a (
row_num int,
col_num int,
value int,
primary key ( row_num, col_num )
);
insert into #a ([row_num], [col_num], [value])
select 0, 3, 55 union all select 0, 4, 78 union all
select 1, 0, 19 union all select 1, 2, 21 union all
select 1, 3, 3 union all select 1, 4, 81 union all
select 2, 1, 48 union all select 2, 2, 50 union all
select 2, 3, 1 union all select 3, 2, 33;
if object_id('tempdb..#b') is not null begin drop table tempdb..#b end ;
create table #b (
row_num int,
col_num int,
value int,
primary key ( row_num, col_num )
);
insert into #b ([row_num], [col_num], [value])
select 0, 1, 73 union all select 0, 4, 42 union all
select 1, 2, 82 union all select 2, 0, 83 union all
select 2, 1, 13 union all select 2, 3, 57 union all
select 3, 0, 48 union all select 3, 1, 85 union all
select 3, 2, 18 union all select 3, 3, 24;
With BadongleBangle as (
select MatrixA.row_num, MatrixB.col_num, sum(MatrixA.value*MatrixB.value) as Val
from (select 'Matrix1' as Matrix, * from #a) as MatrixA
join (select 'Matrix2' as Matrix, * from #b) as MatrixB
on MatrixA.col_num = MatrixB.row_num
group by MatrixA.row_num, MatrixB.col_num
)
, FlibbertyJibby AS (
SELECT [Matrix] = p.row_num
, [0] = IsNull(p.[0],0)
, [1] = IsNull(p.[1],0)
, [2] = IsNull(p.[2],0)
, [3] = IsNull(p.[3],0)
FROM BadongleBangle
pivot (
max(val)
for [col_num] in ([0], [1], [2], [3])
) p
)
select Matrix
, [0], [1], [2], [3]
, [0_%] = [0] * 100 / sum([0]) over ()
, [1_%] = [1] * 100 / sum([1]) over ()
, [2_%] = [2] * 100 / sum([2]) over ()
, [3_%] = [3] * 100 / sum([3]) over ()
, [0_zscore] = ([0] - avg([0]) over ()) / stdev([0]) over()
, [1_zscore] = ([1] - avg([1]) over ()) / stdev([1]) over()
, [2_zscore] = ([2] - avg([2]) over ()) / stdev([2]) over()
, [3_zscore] = ([3] - avg([3]) over ()) / stdev([3]) over()
from FlibbertyJibby
;
December 5, 2013 at 6:08 pm
I've never posted before, so I hope this works, but this is a small part of a stored proc that drives a report I have. The report shows 10 records per page because it looks like a dashboard and lets users click on the dates and drill down to the detail reports for those dates.
-- this is a part of a stored procedure and the @GetPageNumber is passed in. This report displays this data 10 per page
DECLARE @GetPageNumber INT = 1;
DECLARE @PageNumber INT,
@Result INT,
@Outcome VARCHAR(MAX),
@MaxDateKey BIGINT;
SELECT @PageNumber = ((COUNT(DISTINCT CAST(CAST(DateKey AS CHAR(8)) AS DATE)))/10) + 1 FROM dbo.anchor_values (NOLOCK);
WITH AV AS(
SELECT DISTINCT CAST(CAST(DateKey AS CHAR(8)) AS DATE) AS AnchorValuesDates
,COUNT(*) AS RecordCount
,MAX(DateProcessed) AS DateProcessed
,SUM(CASE WHEN AnchorValue IS NULL THEN 1 ELSE 0 END) AS NumberBlankAnchorValues
,ROW_NUMBER() OVER (ORDER BY CAST(CAST(DateKey AS CHAR(8)) AS DATE) DESC) AS Position
,NTILE(@PageNumber) OVER (ORDER BY CAST(CAST(DateKey AS CHAR(8)) AS DATE) DESC) AS PageNumber
FROM dbo.anchor_values (NOLOCK)
GROUP BY CAST(CAST(DateKey AS CHAR(8)) AS DATE))
SELECT AnchorValuesDates
,RecordCount
,DateProcessed
,Position
,PageNumber
,@PageNumber AS PageCount
,NumberBlankAnchorValues
,TrendixValuesDates
FROM AV
LEFT OUTER JOIN (SELECT DISTINCT CAST(CAST(DateKey AS CHAR(8)) AS DATE) AS TrendixValuesDates
FROM dbo.trendix_values) TV
ON AV.AnchorValuesDates = TV.TrendixValuesDates
WHERE PageNumber = @GetPageNumber
ORDER BY PageNumber, Position
December 5, 2013 at 7:51 pm
Great to see SSC call out and apologize for the plagiarism, but how about the culprit (Kamil Moscicki) standing up and doing the right thing: apologizing not only to the community but to Ben-Gan?
Hope this helps...
Ford Fairlane
Rock and Roll Detective
December 5, 2013 at 8:00 pm
/*top 10 queries by avg cpu time*/
SELECT top 10
rowid=row_number() over(order by total_worker_time/execution_count desc)
,last_execution_time
, 'last_elapsed_time(ms)'= last_elapsed_time/1000
, 'avg_elapsed_time(ms)'= total_elapsed_time/execution_count/1000
, 'last_cpu_time(ms)'=last_worker_time/1000
, 'avg_cpu_time(ms)'=total_worker_time/execution_count/1000
, total_logical_reads, total_logical_writes
, execution_count
, (SELECT SUBSTRING(text, statement_start_offset/2 + 1,
(CASE WHEN statement_end_offset = -1
THEN datalength(text)
ELSE statement_end_offset END - statement_start_offset)/2 + 1)
FROM sys.dm_exec_sql_text(sql_handle)) AS query_text
FROM sys.dm_exec_query_stats
December 5, 2013 at 8:52 pm
Plagiarism of any kind is stealing - and it is rare that you see it so publicly acknowledged and with an apology!
I already own a copy of the book - it is an excellent resource, as are the other two Ben-Gan SQL books that I own! Windowed functions coupled with CTEs make for some powerful and efficient T-SQL code.
Fred M3