December 5, 2013 at 7:51 am
Plagiarism is bad mojo.
This query is one we've derived with the help of others over the years to identify gaps in our running auto-number primary key.
;WITH
cte AS (
SELECT
PatientEncounterNumber as id,
RowNum = ROW_NUMBER() OVER (ORDER BY PatientEncounterNumber)
FROM dbo.tblPatientEncounter),
cte2 AS (
SELECT *, DENSE_RANK() OVER (ORDER BY ID - RowNum) As Series
FROM cte),
cte3 AS (
SELECT *, COUNT(*) OVER (PARTITION BY Series) AS SCount
FROM cte2),
cte4 AS (
SELECT
MinID = MIN(ID),
MaxID = MAX(ID),
Series
FROM cte3
GROUP BY Series)
SELECT GapStart = a.MaxID, GapEnd = b.MinID
FROM cte4 a
INNER JOIN cte4 b
ON a.Series+1 = b.Series
ORDER BY GapStart
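The islands trick driving this query (ID minus ROW_NUMBER is constant within each contiguous run) is easy to sanity-check outside the database. A minimal plain-Python sketch of the same grouping, with made-up sample IDs:

```python
# Plain-Python sketch of the gaps-and-islands idea the query above uses:
# id - row_number is constant within each contiguous run, so grouping on
# that difference yields the "Series", and adjacent series bound each gap.
from itertools import groupby

def find_gaps(ids):
    """Return (end_of_run, start_of_next_run) pairs bounding each gap."""
    ids = sorted(ids)
    runs = []
    for _, grp in groupby(enumerate(ids), key=lambda t: t[1] - t[0]):
        members = [v for _, v in grp]
        runs.append((members[0], members[-1]))  # (MinID, MaxID) per series
    return [(a_max, b_min) for (_, a_max), (b_min, _) in zip(runs, runs[1:])]

print(find_gaps([1, 2, 3, 7, 8, 12]))  # [(3, 7), (8, 12)]
```

As in the query, each result pair is the last ID before a gap and the first ID after it.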
David Walker
www.intellicure.com
December 5, 2013 at 7:52 am
Disturbing that someone would steal someone else's work. What do they gain? In the end, not as much as they lose.
Running Totals over a group:
SELECT wafer_id_sk, exit_ts,
TRUNC(COUNT(*) OVER(PARTITION BY wafer_id_sk, operation ORDER BY wafer_id_sk, exit_ts ROWS UNBOUNDED PRECEDING))
FROM wafer_location wl
WHERE wafer_id_sk BETWEEN 10 and 20
ORDER BY wafer_id_sk, exit_ts
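The running count above has a simple procedural reading, sketched here in plain Python under the assumption that rows arrive as (wafer_id_sk, operation, exit_ts) tuples (names borrowed from the query):

```python
# The windowed running COUNT(*) is just each row's 1-based position
# within its (wafer, operation) partition once rows are ordered by exit_ts.
from collections import defaultdict

def running_counts(rows):
    """rows: iterable of (wafer_id_sk, operation, exit_ts) tuples."""
    seen = defaultdict(int)
    out = []
    for wafer, op, ts in sorted(rows):  # sort = PARTITION keys + ORDER BY
        seen[(wafer, op)] += 1
        out.append((wafer, op, ts, seen[(wafer, op)]))
    return out
```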
December 5, 2013 at 7:57 am
-- Ranking of orders by OrderTotal per Account/Year
SELECT
A1.Account
,Year(SO1.OrderDate) OrderYear
,Rank() OVER(Partition by A1.Account,Year(SO1.OrderDate) Order By SO1.OrderTotal Desc) AS Rank_Num
,SO1.OrderTotal OrderTotal
FROM SalesOrder SO1
Inner Join Account A1 on A1.AccountID = SO1.AccountID
WHERE
A1.Account Like 'Milford Ex%' And
SO1.OrderTotal > 0
Order By 1, 2, 3
December 5, 2013 at 8:00 am
Much respect for the decision to own up to the plagiarism in the article. If only other organisations did the same!
I'm afraid I have no example code to post up, as I've never used any of the functions.
All the more reason I'm desperate to win a book! 😉
December 5, 2013 at 8:00 am
is this "writing a query for a book"?? 😛
select row_number() over(partition by datepart(yy,create_date) order by name), *
from sys.tables
order by datepart(yy,create_date), name
bye!
December 5, 2013 at 8:01 am
SELECT
object_Id
, name
, ROW_NUMBER() OVER (PARTITION by schema_id ORDER BY object_id) as RowID
from
sys.objects
December 5, 2013 at 8:02 am
Good on SQLServerCentral.com for stepping up publicly on this; the book give-away is just icing. Here is a very complex example I show in my SQL Server 2012 Windowing Functions SQL Saturday session. It came from a need I had at a client to track user session usage, but unlike the classic example that almost EVERYONE uses (including Itzik), this is a web/mobile app platform API usage scenario where there is NO STOP TIME! So it uses some interesting applications of a variety of 2012 windowing and other functions to first come up with all start and end session times per user per application, and then does aggregates on top of that.
I have comments and various commented-out code points you can use to break it down and decipher what it is doing at each step. I hope some here can benefit from this!
USE tempdb
SET NOCOUNT on
GO
CREATE TABLE dbo.T4
( UserID bigint NOT NULL, --user identifier
SocketID bigint NOT NULL, --"application" identifier
APIDateTime datetime NOT NULL --an API hit occurred by user for app
)
CREATE CLUSTERED INDEX idx1 ON T4 (userid, socketid, APIDateTime) --note "POC" index - NO SORTS!!
--TRUNCATE TABLE T4
INSERT dbo.T4
VALUES
(1, 295, '20130101 00:00'),
(1, 295, '20130101 00:01'),
(1, 295, '20130101 00:02'),
(1, 295, '20130101 00:03'),
(1, 295, '20130101 00:15'),
(1, 295, '20130101 00:30:01'),
(1, 295, '20130101 01:00'),
(1, 295, '20130101 01:10'),
(1, 295, '20130101 01:10'),
(1, 295, '20130101 01:11'),
(2, 295, '20130101 00:00'),
(2, 295, '20130101 00:00'),
(2, 295, '20130101 00:01'),
(2, 295, '20130101 00:02'),
(2, 295, '20130101 00:03'),
(2, 295, '20130101 00:15'),
(2, 295, '20130101 00:30:01'),
(2, 295, '20130101 01:00'),
(2, 295, '20130101 01:10'),
(2, 295, '20130101 01:10'),
(2, 295, '20130101 01:11'),
(2, 295, '20130101 00:00'),
(1, 110, '20130101 00:00'),
(1, 110, '20130101 00:01'),
(1, 110, '20130101 00:02'),
(1, 110, '20130101 00:03'),
(1, 110, '20130101 00:15'),
(1, 110, '20130101 00:30:01'),
(1, 110, '20130101 01:00'),
(1, 110, '20130101 01:10'),
(1, 110, '20130101 01:10'),
(1, 110, '20130101 01:11'),
(2, 110, '20130101 00:00'),
(2, 110, '20130101 00:00'),
(2, 110, '20130101 00:01'),
(2, 110, '20130101 00:02'),
(2, 110, '20130101 00:03'),
(2, 110, '20130101 00:15'),
(2, 110, '20130101 00:30:01'),
(2, 110, '20130101 01:00'),
(2, 110, '20130101 01:10'),
(2, 110, '20130101 01:10'),
(2, 110, '20130101 01:11'),
(2, 110, '20130101 00:00')
--crossing midnight?
INSERT dbo.T4
VALUES
(1, 295, '20130101 23:45'),
(1, 295, '20130101 23:56'),
(1, 295, '20130102 00:03'),
(1, 110, '20130101 23:56'),
(1, 110, '20130102 00:00'),
(1, 110, '20130102 00:07')
SELECT * FROM T4
ORDER BY userid, socketid, apidatetime
DECLARE @GapAllowed INT = 15*60 --15 minutes contiguous defines a single session within the app?
;WITH cteSource(UserID, SocketID, RangeStart, RangeEnd)
AS (
SELECT UserID, SocketID, RangeStart,
LEAD(RangeEnd) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime) AS RangeEnd
FROM (
SELECT UserID, SocketID, APIDateTime,
CASE
WHEN DATEDIFF(ss, LAG(APIDateTime) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime), APIDateTime) <= @GapAllowed THEN NULL
ELSE APIDateTime
END AS RangeStart,
CASE
WHEN DATEDIFF(ss, APIDateTime, LEAD(APIDateTime) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime)) <= @GapAllowed THEN NULL
ELSE APIDateTime
END AS RangeEnd
FROM T4) AS d
WHERE RangeStart IS NOT NULL
OR RangeEnd IS NOT NULL
)
--select * from ctesource ORDER BY userid, socketid, rangestart, rangeend
SELECT UserID, SocketID, RangeStart,
ISNULL(RangeEnd, RangeStart) AS RangeEnd
FROM cteSource
WHERE RangeStart IS NOT NULL
ORDER BY UserID, SocketID, RangeStart, RangeEnd
UserID SocketID RangeStart RangeEnd
-------------------- -------------------- ----------------------- -----------------------
1 110 2013-01-01 00:00:00.000 2013-01-01 00:15:00.000
1 110 2013-01-01 00:30:01.000 2013-01-01 00:30:01.000
1 110 2013-01-01 01:00:00.000 2013-01-01 01:11:00.000
1 110 2013-01-01 23:56:00.000 2013-01-02 00:07:00.000
1 295 2013-01-01 00:00:00.000 2013-01-01 00:15:00.000
1 295 2013-01-01 00:30:01.000 2013-01-01 00:30:01.000
1 295 2013-01-01 01:00:00.000 2013-01-01 01:11:00.000
1 295 2013-01-01 23:45:00.000 2013-01-02 00:03:00.000
2 110 2013-01-01 00:00:00.000 2013-01-01 00:15:00.000
2 110 2013-01-01 00:30:01.000 2013-01-01 00:30:01.000
2 110 2013-01-01 01:00:00.000 2013-01-01 01:11:00.000
2 295 2013-01-01 00:00:00.000 2013-01-01 00:15:00.000
2 295 2013-01-01 00:30:01.000 2013-01-01 00:30:01.000
2 295 2013-01-01 01:00:00.000 2013-01-01 01:11:00.000
--we want a minimum usage time so we will add 10 seconds to each zero-second interval
DECLARE @GapAllowed INT = 15*60 --what happens when you change the contiguous time interval to, say, 10 mins?
;WITH a (UserID, SocketID, RangeStart, RangeEnd)
AS (
SELECT UserID, SocketID, RangeStart,
LEAD(RangeEnd) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime) AS RangeEnd
FROM (
SELECT UserID, SocketID, APIDateTime,
CASE
WHEN DATEDIFF(ss, LAG(APIDateTime) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime), APIDateTime) <= @GapAllowed THEN NULL
ELSE APIDateTime
END AS RangeStart,
CASE
WHEN DATEDIFF(ss, APIDateTime, LEAD(APIDateTime) OVER (PARTITION BY UserID, SocketID ORDER BY APIDateTime)) <= @GapAllowed THEN NULL
ELSE APIDateTime
END AS RangeEnd
FROM T4) AS d
WHERE RangeStart IS NOT NULL
OR RangeEnd IS NOT NULL)
--SELECT * FROM a WHERE RangeStart IS NOT NULL
, b as (SELECT UserID, SocketID, DATEDIFF(ss, RangeStart, CASE WHEN RangeEnd <> RangeStart THEN ISNULL(RangeEnd, RangeStart) ELSE DATEADD(ss, 10, ISNULL(RangeEnd, RangeStart)) END)*1.0 AS SessionSeconds
FROM a
WHERE RangeStart IS NOT NULL)
--SELECT * FROM b
/* Since we can't take an average of an average, store the count and session total per UserID/SocketID,
then do further analytics based off of those two known keys */
SELECT UserID, SocketID, SUM(SessionSeconds)/60.0 AS SUMSessionMinutes, COUNT(*) AS SessionCount
FROM b
GROUP BY UserID, SocketID
ORDER BY UserID, SocketID
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 5, 2013 at 8:04 am
I use window functions all the time. I am looking forward to upgrading from SQL 2008 R2 to get more window function features.
SELECT DISTINCT ComponentKey
, Line
, Segment
, ProdDate
, Shift
, ItemNumber
, UPC
, DATEPART(ww, Proddate) AS WeekNumber
, SUM(Downtime) OVER ( PARTITION BY Line, ProdDate, ComponentKey, ItemNumber ) AS TotalDay
, SUM(DownTime) OVER ( PARTITION BY Line, DATEPART(yy, ProdDate), DATEPART(ww, ProdDate), ComponentKey, ItemNumber ) AS TotalWeek
, SUM(Downtime) OVER ( PARTITION BY Line, ProdDate, Shift, ComponentKey, ItemNumber ) AS TotalDayByShift
, SUM(DownTime) OVER ( PARTITION BY Line, DATEPART(yy, ProdDate), DATEPART(ww, ProdDate), Shift, ComponentKey,
ItemNumber ) AS TotalWeekByShift
, @UserID AS UserID
FROM dbo.tblDownTime AS dt
WHERE Proddate >= @FiscalYearStartDate
December 5, 2013 at 8:06 am
All,
You can see this behavior all over the web when searching for something: the exact same code comes up on numerous sites without any mention or credit.
The best part of this site is how much I learn from the responses to the various articles and QofD answers.
I guess I will have to bone up on other uses of OVER with the PARTITION BY clause. I have used it only in relation to the RANK function.
My code comes from a procedure I created to give me a listing of ONLY the changes found in our "Log" tables for a specific log table, column name and search value.
What made this fun is that I was able to use a bunch of features that were new to me, or that I had used very rarely before (DENSE_RANK, FOR XML, UNPIVOT, EXCEPT, etc.).
The log tables mirror the original tables and are filled by three triggers on the original table. These have been invaluable in troubleshooting any data issues that come up.
Thanks Again for the inspiration to keep learning...
Anton
INSERT INTO #Comparison (Rank, Action, ActionDate, ActionUser, FieldCode, FieldValue, ID)
SELECT Rank, Action, ActionDate, ActionUser, FieldCode, FieldValue, LogID
FROM (SELECT dense_rank() over (order by LogID) As Rank,
Action,
ActionDate,
ActionUser,
LogID,
ISNULL(CONVERT(varchar(max), [ID] ), 'NULL') AS [ID],
ISNULL(CONVERT(varchar(max), [Field1] ), 'NULL') AS [Field1],
ISNULL(CONVERT(varchar(max), [Field2] ), 'NULL') AS [Field2],
ISNULL(CONVERT(varchar(max), [Field3] ), 'NULL') AS [Field3],
ISNULL(CONVERT(varchar(max), [Field4] ), 'NULL') AS [Field4]
FROM tblTable_Log with (nolock)
WHERE CONVERT(VARCHAR(100), OrderNumber) = '2058114'
AND Action <> 'Update - OLD') MyTable
UNPIVOT (FieldValue FOR FieldCode IN ([ID],[Field1],[Field2],[Field3],[Field4])) AS MyUnPivot
ORDER BY LogID DESC
select x.ID As LogID, x.Action, x.ActionDate, x.ActionUser, x.FieldCode As ColumnName, x.FieldValue As NEW_Value, y.FieldValue as PREVIOUS_Value
from #Comparison x
left join #Comparison y on x.Rank = y.rank + 1
WHERE y.ID is not null
AND x.FieldCode = y.FieldCode
AND x.FieldValue <> y.FieldValue
order by x.id desc
December 5, 2013 at 8:15 am
Everyone makes errors; the most important thing is to recognize them and repair them.
By posting this article you not only admitted your error and apologized, but you did it in an interesting way. Good initiative, staff!
I'm really interested in getting this book
with LicenseList (ID, HostName, LicenseType, ExpDate, ExpirationDate )
as
(select
LicenseID,
Hostname,
LType,
ExpiryDate,
max(ExpiryDate) over ( partition by Hostname, LType)
from
tbl_Servers
JOIN Tbl_Licenses ON Host = Hostname
)
select * from LicenseList
where ExpDate = ExpirationDate
order by ExpirationDate, HostName, LicenseType
December 5, 2013 at 8:16 am
Kudos to SSC for being on top of this and going above and beyond.
Here is some SQL I use to calculate a rolling P95 average.
Rob
SELECT DISTINCT
DATEADD(M,T.N - 1, TD.First_Worked) as Measure_Month,
Percentile_Cont (0.95) WITHIN GROUP (Order By DATEDIFF(H,Req_Start,First_Worked))
Over(Partition By TD.First_Worked) as Month_P95
FROM
Time_Data as TD
CROSS JOIN
Tally as T
WHERE
T.N <= 3 AND
DATEADD(M,T.N - 1, TD.First_Worked) <= GETDATE()
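PERCENTILE_CONT's continuous (linearly interpolated) percentile is worth seeing outside T-SQL. A plain-Python sketch of the same calculation the 0.95 percentile above performs within each partition:

```python
def percentile_cont(values, p):
    """Linear-interpolation percentile, as PERCENTILE_CONT computes it."""
    vals = sorted(values)
    rank = p * (len(vals) - 1)       # zero-based fractional position
    lo = int(rank)
    frac = rank - lo
    if frac == 0:
        return float(vals[lo])
    return vals[lo] + frac * (vals[lo + 1] - vals[lo])

print(percentile_cont([1, 2, 3, 4], 0.5))  # 2.5
```

With p = 0.95 the result interpolates between the two highest-ranked values in the group, which is why it smooths out a single extreme outlier better than MAX.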
December 5, 2013 at 8:18 am
Here is my query. Sure do hope that I win Itzik's new book. - Mark
SELECT DoctorID, HospitalID
,CONVERT(varchar(40),PatientsYTD,1) AS PatientsYTD
,DATEPART(yy,ModifiedDate) AS CareYear
,CONVERT(varchar(20),SUM(PatientsYTD) OVER (PARTITION BY HospitalID
ORDER BY DATEPART(yy,ModifiedDate)
ROWS BETWEEN CURRENT ROW AND 1 FOLLOWING ),1) AS CumulativeTotal
FROM Med.Doctor
WHERE HospitalID IS NULL OR HospitalID < 10;
😀
December 5, 2013 at 8:23 am
Simple example:
SELECT ROW_NUMBER() OVER(ORDER BY TRACCT) AS Row,
TRACCT, SUM(DEBCRED) AS Total
FROM Accounts
GROUP BY TRACCT
December 5, 2013 at 8:25 am
Love Mr. B!! Would not be where I am without his books!! Thanks to SQLServerCentral for doing the right thing....
SELECT Risk_Mit, sciClientID, Amt
,SUM(Amt) OVER(PARTITION BY Risk_Mit) AS Total
,AVG(Amt) OVER(PARTITION BY Risk_Mit) AS "Avg"
FROM Risk.RiskDetail
WHERE SciId IN ('AFT_102401','AFT_856944');
JK
December 5, 2013 at 8:27 am
You can tell I haven't plagiarised this by the poor SQL:
Table:
CREATE TABLE [dbo].[Wealth](
[Spouse] [char](1) NULL,
[AssetType] [varchar](100) NULL,
[EstimatedValue] [money] NULL
) ON [PRIMARY]
Data:
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Vehicle',10000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Savings',3000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Misc.',1000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Misc.',10000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Misc.',10.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('H','Property',50000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Vehicle',18000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Misc.',1000000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Property',250000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Property',100000.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Savings',0.00)
INSERT INTO [Scratch].[dbo].[Wealth]([Spouse],[AssetType],[EstimatedValue])
VALUES('W','Property',175000.00)
Bad example of OVER:
SELECT [Spouse], [AssetType], [EstimatedValue],
(SUM([EstimatedValue]) OVER(PARTITION BY [Spouse], [AssetType]))
AS TotalTypeEstimatedValue
FROM [dbo].[Wealth]
ORDER BY [Spouse], [AssetType], [EstimatedValue]
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!