January 16, 2014 at 8:35 pm
Peter E. Kierstead (1/16/2014)
[h4]Try this...[/h4]
Create Function dbo.ElapsedTime
(
@StartTime DateTime,
@EndTime DateTime
)
Returns VarChar(20)
As
Begin
Declare
@st DateTime,
@et DateTime,
@NeedOffset bit;
-- Reality check!
If (@StartTime>@EndTime) Return Null;
Select
-- Get isolated time components for start and end DateTime inputs.
@st='1900-01-01 '+Right(Convert(VarChar,@StartTime,121),12),
@et='1900-01-01 '+Right(Convert(VarChar,@EndTime,121),12),
-- If there is a time inversion set a flag indicating that offsetting is required.
@NeedOffset=Case when @st>@et then 1 else 0 End;
-- The DateTime data-type can represent dates between 1753-01-01 00:00:00.000 and
-- 9999-12-31 23:59:59.997, inclusive. However, DateDiff(ms,...) will throw an Int
-- overflow exception if the differential is close to 25 days, therefore, this method
-- processes the days component separately from the time component.
Return
-- Output the DateTime days differential (0 - 3012153).
Cast
(
DateDiff(dd,@StartTime,@EndTime)-
-- Apply day offset, if necessary.
Case @NeedOffset when 1 then 1 else 0 End
as VarChar
)+
':'+
-- Output the DateTime time differential (00:00:00.000 - 23:59:59.999).
-- This is done by replacing both input value's date component with
-- '1900-01-01' before running the appropriate conversion functions.
Right
(
Convert
(
VarChar,
DateAdd
(
ms,
DateDiff
(
ms,
@st,
@et
)+
-- Handle milliseconds offset, if necessary.
Case @NeedOffset when 1 then 86400000 else 0 End,
0
),
121
),
12
);
End
As I said, I'm always up for alternate methods and very much appreciate your time in posting that code. I did try it on the million row test table and it took over 26 seconds to complete, where the code from the article took only a little over 3 seconds (both executed on my slower 12-year-old box). Part of the problem is that it's a scalar function, which usually means that, all else being equal, it will run about 7 times slower than direct code or an "iSF" (the proverbial "Inline Scalar Function", which is really "just" an Inline Table Valued Function that returns a scalar value).
While that doesn't seem to be a big difference to many, things like this really start to add up in the face of scalability.
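For anyone who hasn't seen the "iSF" pattern before, here's a minimal sketch of the difference (dbo.ElapsedTimeITVF is a hypothetical name used only for illustration; the dbo.Duration function I post further down this thread is a concrete example):
--==== Scalar function: invoked row-by-row, which is where most of the overhead lives.
SELECT ElapsedTime = dbo.ElapsedTime(tt.StartDT, tt.EndDT)
FROM #JBMTest tt;
--==== "iSF": an inline table valued function that returns a single scalar column.
--     The optimizer expands it into the calling query's plan just like a view.
SELECT ca.Duration
FROM #JBMTest tt
CROSS APPLY dbo.ElapsedTimeITVF(tt.StartDT, tt.EndDT) ca;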
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 8:53 pm
dwain.c (1/16/2014)
I must say, I love the technique Jeff! Very simple, straightforward and clean with good performance. Nothing more straightforward comes immediately to mind (issues noted above aside). Normally I take your word on performance, but in this case I played a bit and found something interesting. Running your 1M row test harness, followed by the below query:
DECLARE @Bitbucket VARCHAR(20); -- declared in Jeff's test harness; repeated here so the snippet runs standalone
SET STATISTICS TIME ON;
SELECT @Bitbucket = STUFF(CONVERT(VARCHAR(20),s,114),1,2,DATEDIFF(hh,0,s))
FROM #JBMTest
CROSS APPLY (SELECT EndDT-StartDT) a (s);
SET STATISTICS TIME OFF;
Generated different speed results on SQL 2008 R2 vs. SQL 2012.
SQL 2008 R2 (expected - second result uses CA for intermediate calculation):
(1000000 row(s) affected)
SQL Server Execution Times:
CPU time = 702 ms, elapsed time = 699 ms.
SQL Server Execution Times:
CPU time = 717 ms, elapsed time = 707 ms.
SQL 2012 (unexpected):
(1000000 row(s) affected)
SQL Server Execution Times:
CPU time = 749 ms, elapsed time = 738 ms.
SQL Server Execution Times:
CPU time = 671 ms, elapsed time = 674 ms.
Improvements in the optimizer I reckon.
Thanks for the feedback, Dwain. Like I said in the article, most of the time was spent on display and the actual time expended for the calculation was right around 717ms using the "@BitBucket" method for taking the display out of the picture. Your code certainly backs that claim up.
As for the 2012 code, I'm actually disappointed to see those results and wouldn't call it an "improvement" to the optimizer at all. It seems to indicate that you need more complex code to get the same performance out of what used to be simpler code.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 8:57 pm
Jeff Moden (1/16/2014)
dwain.c (1/16/2014)
As for the 2012 code, I'm actually disappointed to see those results and wouldn't call it an "improvement" to the optimizer at all. It seems to indicate that you need more complex code to get the same performance out of what used to be simpler code.
I can see that perspective. It does open interesting possibilities though.
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
January 16, 2014 at 9:04 pm
rho_pooka (1/16/2014)
Thank you for article, enjoyed the read!
Thank you very much for the feedback. I appreciate it.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 9:07 pm
Thomas Abraham (1/16/2014)
I meant to post this earlier, but got caught up in some of the tangent. Thanks to Jeff for providing this spackle. Great job, as always.
Thanks for the feedback and for the cover in my absence. Great job, Thomas and thank you very much for the help.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 9:40 pm
Since there was some interest in correctly calculating the duration even if the start and end dates were reversed, here's a function that straightens that out. Personally, I'd much rather ensure that the data in the table were actually correct or that the application was passing the dates correctly, but this "iSF" function will do just fine (it's only a little slower than the original code) if such things aren't possible.
CREATE FUNCTION dbo.Duration
/**********************************************************************************************************************
Purpose:
Given a start and end date as a DATETIME, regardless of the temporal order, calculate the duration between the lowest
datetime and the highest, and format the output as HHH:MI:SS.nnn where "HHH" is any number of hours up to the maximum
difference that DATEDIFF and CONVERT can handle (a MAX date of 9999-12-31 23:59:59.997, or 71,003,135 hours, or
3ms less than 10,000 years).
Obviously, if either value is NULL, a NULL will be returned.
Usage Example:
SELECT ca.FormattedDuration
FROM #JBMTest tt
CROSS APPLY dbo.Duration(tt.StartDT, tt.EndDT) ca
Ref: http://www.sqlservercentral.com/articles/T-SQL/103343/
Revision History:
Rev 00 - 16 Jan 2014 - Jeff Moden - Initial creation and unit test.
**********************************************************************************************************************/
(
@StartDT DATETIME
,@EndDT DATETIME
)
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH cteSortDates AS
(--==== Sort the 2 dates with a "short circuit" in the CASEs for the expected order.
SELECT StartDT = CASE WHEN @StartDT <= @EndDT THEN @StartDT ELSE @EndDT END
,EndDT = CASE WHEN @StartDT <= @EndDT THEN @EndDT ELSE @StartDT END
)
SELECT FormattedDuration = STUFF(CONVERT(VARCHAR(20),EndDT-StartDT,114),1,2,DATEDIFF(hh,0,EndDT-StartDT))
FROM cteSortDates
;
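Here's a quick sanity check that reversed inputs produce the same answer (the dates are arbitrary):
SELECT Forward = f.FormattedDuration
,Reversed = r.FormattedDuration
FROM (SELECT StartDT = CAST('2014-01-15 06:30:00.000' AS DATETIME)
,EndDT = CAST('2014-01-16 08:00:00.000' AS DATETIME)) t
CROSS APPLY dbo.Duration(t.StartDT, t.EndDT) f
CROSS APPLY dbo.Duration(t.EndDT, t.StartDT) r;
--==== Both columns return the same formatted duration.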
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 10:07 pm
Crud. I just noticed a limitation and I need to post another correction to the article. I tested for all dates from 1900-01-01 00:00:00.000 through 9999-12-31 23:59:59.997. If you have just 3ms more time span, the code fails with an overflow. This is apparently because CONVERT has the same limit as the MAX SQL DateTime of 9999-12-31 23:59:59.997 (and why wouldn't it?). Even just 3ms more and you get an overflow error. That means this method has a limit of 3ms less than 10,000 years. Still, that's better than 148 years, but it's still an error in the article that needs to be corrected, which I'll do in the morning.
Here's the code that demonstrates the problem above...
--==== This works.
SELECT CONVERT(VARCHAR(20),CAST('9999-12-31 23:59:59.997' AS DATETIME)-CAST('1900-01-01 00:00:00.000' AS DATETIME),114)
;
--==== Adding just 3ms more duration causes CONVERT to fail.
SELECT CONVERT(VARCHAR(20),CAST('9999-12-31 23:59:59.997' AS DATETIME)-CAST('1899-12-31 23:59:59.997' AS DATETIME),114)
;
--Jeff Moden
Change is inevitable... Change for the better is not.
January 16, 2014 at 10:43 pm
Thanks Jeff for the performance analysis. The two main reasons that it is slower are:
1) It's coded as a scalar function
2) It also handles days, i.e., it outputs a string like DDD:HH:MM:SS.mmm
I've altered things a bit and here are the results from 1,000,000 test rows:
Your code outputting [HHH:MM:SS.mmm]:
STUFF(CONVERT(VARCHAR(20),EndDT-StartDT,114),1,2,DATEDIFF(hh,0,EndDT-StartDT))
CPU time = 1202 ms, elapsed time = 1182 ms.
My refactored code outputting [DDD:HH:MM:SS.mmm]:
-- Emit the days number...
Cast(DateDiff(dd,StartDT,EndDT)
-- ...subtracting a day when the times are inverted.
-Case when Convert(VarChar,StartDT,114)>Convert(VarChar,EndDT,114) then 1 else 0 End as VarChar)
-- Emit the separator and time components.
+':'+Convert(VarChar,EndDT-StartDT,114)
CPU time = 2527 ms, elapsed time = 2381 ms.
I'm willing to eat the extra time to get the elapsed time in DDD:HH:MM:SS.mmm format, especially since we only use this to output elapsed time between DML in procedures during development.
I didn't realize that you could perform simple math (+, -) on DateTime types! Learn something new every day. Thanks, you rock!
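In case anyone else hadn't seen it either, here's a tiny illustration of that direct DateTime math (the values are arbitrary):
Declare @d DateTime;
Set @d='2014-01-16 08:00:00.000';
Select
@d+1, -- Adding an integer adds whole days: 2014-01-17 08:00:00.000.
@d-Cast('1900-01-01 01:30:00.000' as DateTime); -- Subtracts 90 minutes, because 1900-01-01 00:00:00.000 is "zero".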
January 17, 2014 at 3:10 pm
Fantastically simple!
As for how to get the duration in YMD, here's what I came up with. I tried to code it the way I would process it in my head (scary, I know!). My approach is that we humans intuitively account for the differences in the days of the month and leap years by subtracting the dates in parts:
1) first the whole years difference
2) then the number of whole months left over from the whole years difference
3) then the number of days left over from the whole months difference. Thus, the difference in days is dependent on the number of days in the month from the start date.
Anyway, for what it's worth here's the SQL I came up with.
DECLARE @StartTime datetime, @EndTime datetime;
SET @StartTime = '1964-07-24';
SET @EndTime = '2014-01-17';
SELECT
CONVERT(varchar(4),
CASE WHEN DATEPART(MONTH,@EndTime) < DATEPART(MONTH,@StartTime)
-- If month of @EndTime is < the month of @StartTime, then there is a fraction of a year between the two dates, so subtract 1 from the year difference
THEN DATEPART(YEAR,@EndTime) - DATEPART(YEAR,@StartTime) - 1
-- If the month of @EndTime is >= the month of @StartTime, then difference the years and take one away if there is only a fraction of a month between the dates (i.e. the day of @EndTime < the day of @StartTime)
ELSE DATEPART(YEAR,@EndTime) - DATEPART(YEAR,@StartTime) - CASE WHEN DATEPART(DAY,@EndTime) < DATEPART(DAY,@StartTime) THEN 1 ELSE 0 END
END
) + 'Y ' +
CONVERT(varchar(4),
CASE WHEN DATEPART(MONTH,@EndTime) <= DATEPART(MONTH,@StartTime)
-- If the month of @EndTime <= the month of @StartTime, then get the difference in months by adding 12 (i.e. one year) to @EndTime
THEN DATEPART(MONTH,@EndTime) + 12 - DATEPART(MONTH,@StartTime)
-- Otherwise, simply subtract the month of @StartTime from the month of @EndTime
ELSE DATEPART(MONTH,@EndTime) - DATEPART(MONTH,@StartTime)
END
-- If the day of @EndTime is < the day of @StartTime, then only a fraction of the last month has elapsed, so subtract 1 from the month difference
- CASE WHEN DATEPART(DAY,@EndTime) < DATEPART(DAY,@StartTime) THEN 1 ELSE 0 END
-- If the months are the same and the day of @EndTime is >= the day of @StartTime, then subtract 12 to prevent 12M from occurring in the output
-- The logic above for the year difference accounts for the full year
- CASE WHEN DATEPART(MONTH,@EndTime) = DATEPART(MONTH,@StartTime) AND DATEPART(DAY,@EndTime) >= DATEPART(DAY,@StartTime) THEN 12 ELSE 0 END
) + 'M ' +
CONVERT(varchar(4),
CASE WHEN DATEPART(DAY,@EndTime) < DATEPART(DAY,@StartTime)
-- If the day of @EndTime is < the day of @StartTime, then subtract the difference between @StartTime - @EndTime from the number of days in the month of @StartTime
-- This gives the intuitive difference in days based on the @StartTime after getting the difference in years and months above.
THEN DATEDIFF(DAY, DATEADD(DAY, 1 - DAY(@StartTime), @StartTime), DATEADD(MONTH, 1, DATEADD(DAY, 1 - DAY(@StartTime), @StartTime))) -- Number of days in the month of @StartTime
- (DATEPART(DAY,@StartTime) - DATEPART(DAY,@EndTime))
-- Otherwise, simply subtract the day of @StartTime from the day of @EndTime
ELSE DATEPART(DAY,@EndTime) - DATEPART(DAY,@StartTime)
END
) + 'D '
AS 'Time Period'
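For the example dates above (1964-07-24 through 2014-01-17), this should return 49Y 5M 24D.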
-- RTW
Be curious!
January 17, 2014 at 4:43 pm
Another bit of SQL weirdness with the DateTime datatype.
[h4]Setup...[/h4]
According to BOL a DateTime variable will hold dates ranging from 1753-01-01 through 9999-12-31.
By not also specifying the time component they judiciously avoided a few nasty questions. Here are some examples that illuminate this issue.
This example uses implicit casting to convert a date/time string into a DateTime data type; using actual DateTime values will not change the outcome:
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.998'
Select
@MinDT,
@MaxDT;
The final SELECT statement will output:
1753-01-01 00:00:00.000 | 9999-12-31 23:59:59.997
Notice the millisecond difference between the assignment of @MaxDT and its value in the row set output?
Once again just so you won't think implicit casting has anything to do with this:
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.998'
Select
@MinDT,
@MaxDT;
-- @MaxDT now holds the stored value 23:59:59.997; add 1ms to it explicitly.
Set @MaxDT=DateAdd(ms,1,@MaxDT);
Select
@MinDT,
@MaxDT;
Both SELECT statements output the same results:
1753-01-01 00:00:00.000 | 9999-12-31 23:59:59.997
1753-01-01 00:00:00.000 | 9999-12-31 23:59:59.997
Since the accuracy of the DateTime data type is 0.00333 seconds, the highest actual value for the fractional second is 1.0-0.00333=0.99667, or 0.997 after rounding.
Explanation:
1.0-0.998=0.002, 0.998-0.9967=0.0013; hence 0.998 is closer to 0.9967 than it is to 1.0 so the rounded value is 0.997
Had the rounding gone the other way the assignment to @MaxDT would throw an "out-of-range datetime value" exception.
This leads us to our next two examples:
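-- Example 1: the 23:59:59.999 assignment rounds up past the DateTime maximum and throws on assignment.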
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.999'
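-- Example 2: a valid maximum value is pushed 2ms out of range by DateAdd.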
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.997'
Set @MaxDT=DateAdd(ms,2,@MaxDT)
Now it should be more readily understood why both of these examples throw "out-of-range datetime value" exceptions even though the apparent values appear to be in range.
[h4]Promised weirdness...[/h4]
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.997'
Select
@MinDT,
@MaxDT,
@MaxDT-@MinDT;
This code throws "out-of-range datetime value"! I don't know about you but this messed with my head. I would have thought that I would get the lowest DateTime value back, i.e., 1753-01-01 00:00:00.000. In fact, with a little testing I found that the following will work!
Declare
@MinDT DateTime,
@MaxDT DateTime;
Select
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9852-12-31 23:59:59.997'
Select
@MinDT,
@MaxDT,
@MaxDT-@MinDT;
Hmm, looks like we really don't have years 1753 - 9999 as a consistently valid range for all DateTime operations?!? In fact there is a 147 year discrepancy! A little more testing reveals:
Select GetDate()-GetDate();
Which returns: 1900-01-01 00:00:00.000 instead of 1753-01-01 00:00:00.000! Now I see what's going on here. The 147 years is the difference between 1900 and 1753! This is why some programmers' DateTime code will overflow sooner or later than other programmers' DateTime code; it depends on which functions you use to calculate the date and time differences. However, given that most of us probably won't be programming in Microsoft SQL (at least today's current versions!) in 9999, a 147-year discrepancy is probably not a problem.
Therefore, if ya just can't spare 147 years in your elapsed time functions, don't use an algorithm that uses the binary subtraction operator ("-") to get the DateTime difference; use DateDiff on DateTime values where the CCYY-MM-DD component has been edited to the same value, such as 1900-01-01 (taking into account any time range inversion).
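Here's a minimal sketch of that DateDiff approach (hard-coded extreme dates just for illustration):
Declare
@StartDT DateTime,
@EndDT DateTime,
@st DateTime,
@et DateTime;
Select
@StartDT='1753-01-01 00:00:00.000',
@EndDT ='9999-12-31 23:59:59.997';
-- Normalize both time components onto the same date (1900-01-01)...
Select
@st='1900-01-01 '+Right(Convert(VarChar(23),@StartDT,121),12),
@et='1900-01-01 '+Right(Convert(VarChar(23),@EndDT,121),12);
-- ...then difference the days and the times separately; no "-" operator, no overflow.
Select
DateDiff(dd,@StartDT,@EndDT) as WholeDays, -- 3012153
DateDiff(ms,@st,@et) as TimeOfDayMs; -- Negative when the time-of-day values are inverted.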
January 18, 2014 at 12:40 am
[h4]More DateTime info...[/h4]
This time I teased out the internal representation of the DateTime data type. First I ran the following:
Declare @d DateTime;
Set @d='1753-01-01 00:00:00.000';
Select Cast(@d as VarBinary),Cast(@d as BigInt);
And received:
0xFFFF2E4600000000 | -53690
Immediately I noticed that 0xFFFF2E4600000000 is not equivalent to -53690! In fact, -53690 is the decimal representation of just the first 4 bytes! The Cast to VarBinary seemed to know that @d was 8 bytes long, so why didn't the Cast to BigInt use the same length info? So then I tried this:
Declare @d DateTime;
Set @d='1753-01-01 00:00:00.000';
Select Cast(@d as VarBinary),Cast(@d as BigInt),Cast(Cast(@d as VarBinary) as BigInt);
And received:
0xFFFF2E4600000000 | -53690 | -230596794122240
Now that's more like it. 0xFFFF2E4600000000 is equivalent to -230596794122240! Next, based on a hunch from a previous post on this thread I ran:
Declare @d DateTime;
Set @d='1900-01-01 00:00:00.000';
Select Cast(@d as VarBinary),Cast(@d as BigInt),Cast(Cast(@d as VarBinary) as BigInt);
And received:
0x0000000000000000| 0 | 0
It looks like the internal representation is broken down in the following manner:
Bytes 1-4 represent the date component (CCYY-MM-DD)
Bytes 5-8 represent the time component (HH:MM:SS.mmm)
But what are their units and intervals? Well, the following query will tell us that:
Declare @d DateTime;
Set @d='1753-01-01 00:00:00.000';
Select Cast(@d as VarBinary),Cast(SubString(Cast(@d as VarBinary),1,4) as Int),Cast(SubString(Cast(@d as VarBinary),5,4) as Int);
Set @d='1753-01-02 00:00:00.001';
Select Cast(@d as VarBinary),Cast(SubString(Cast(@d as VarBinary),1,4) as Int),Cast(SubString(Cast(@d as VarBinary),5,4) as Int);
Set @d='1753-01-02 00:00:00.003';
Select Cast(@d as VarBinary),Cast(SubString(Cast(@d as VarBinary),1,4) as Int),Cast(SubString(Cast(@d as VarBinary),5,4) as Int);
0xFFFF2E4600000000 | -53690 | 0
0xFFFF2E4700000000 | -53689 | 0
0xFFFF2E4700000001 | -53689 | 1
Notice how I modified the date-time string before each query and notice the resultant output values. From an analysis of these values it can be seen that:
Bytes 1-4 represent the date component in days; however, the range does not start at 0, it starts at -53690.
Bytes 5-8 represent the time component in 3.33~ millisecond counts, i.e., this count increments by 1 for every 3.33~ milliseconds that pass; the range starts at 0.
So what are the actual ranges for days and 3.33~ millisecond counts? Let's run another query:
Declare @d DateTime;
Set @d='1753-01-01 00:00:00.000';
Select Cast(@d as VarBinary),Cast(SubString(Cast(@d as VarBinary),1,4) as Int),Cast(SubString(Cast(@d as VarBinary),5,4) as Int);
Set @d='9999-12-31 23:59:59.997';
Select Cast(@d as VarBinary),Cast(SubString(Cast(@d as VarBinary),1,4) as Int),Cast(SubString(Cast(@d as VarBinary),5,4) as Int);
0xFFFF2E4600000000 | -53690 | 0
0x002D247F018B81FF | 2958463 | 25919999
From these results we can see that the day range is -53690 through 2958463, or 3012154 days (don't forget to add 1 for the zero crossing), which is 8246 years, the full span from years 1753 to 9999.
From these results we can also see that the 3.33~ millisecond count range is 0 through 25919999, or 25920000 counts, which is 86400000 milliseconds (25920000*3.33~) or one day's worth of milliseconds.
So, there you have it. You could use this knowledge to craft an ElapsedTime function that will work across the full spectrum of time from 1753-01-01 00:00:00.000 through 9999-12-31 23:59:59.997 with some reasoned logic and judicious use of SubString, Cast, & Convert. It probably won't be the fastest performer but it will give you the full time span available without any "out-of-range datetime value" exceptions!
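Here's a rough proof of concept (hard-coded dates for illustration; formatting the result into a display string is left as an exercise):
Declare
@s DateTime,
@e DateTime,
@Counts BigInt; -- Elapsed time in 3.33~ millisecond counts.
Select
@s='1753-01-01 00:00:00.000',
@e='9999-12-31 23:59:59.997';
-- Rebuild each value as (days*25920000)+time counts, then subtract.
Select @Counts=
(Cast(Cast(SubString(Cast(@e as VarBinary(8)),1,4) as Int) as BigInt)*25920000
+Cast(SubString(Cast(@e as VarBinary(8)),5,4) as Int))
-(Cast(Cast(SubString(Cast(@s as VarBinary(8)),1,4) as Int) as BigInt)*25920000
+Cast(SubString(Cast(@s as VarBinary(8)),5,4) as Int));
-- Split back into whole days and leftover time counts, ready for formatting.
Select
@Counts/25920000 as WholeDays, -- 3012153
@Counts%25920000 as TimeCounts; -- 25919999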
Happy Casting!
January 18, 2014 at 7:42 am
Jeff Moden (1/16/2014)
Eirikur Eiriksson (1/16/2014)
Thank you for the spackle Jeff but it looks like there is a small crack in it as it makes the assumption that the end time is always greater than the start time. This is the reason why I prefer to use double cast, first to varbinary and then to bigint.
SELECT
StartDT
,EndDT
,Duration = STUFF(CONVERT(VARCHAR(20),EndDT-StartDT,114),1,2,DATEDIFF(hh,0,EndDT-StartDT))
,DurationI = STUFF(CONVERT(VARCHAR(20),StartDT-EndDT,114),1,2,DATEDIFF(hh,0,StartDT-EndDT))
,cast(cast(StartDT as varbinary(128)) as bigint) As StartTicks
,cast(cast(EndDT as varbinary(128)) as bigint) AS EndTicks
,cast(cast(StartDT as varbinary(128)) as bigint)-cast(cast(EndDT as varbinary(128)) as bigint) as XmY
,cast(cast(EndDT as varbinary(128)) as bigint)-cast(cast(StartDT as varbinary(128)) as bigint) as YmX
FROM #JBMTest;
You're correct and I could have added a simple statement, such as "Assuming two non-null dates where the StartDT is less than or equal to the EndDT,...", to make sure that no one would have to assume.
Shifting gears to the code you posted...
While I very much appreciate alternate methods for just about everything, I have to admit that the code you wrote for the StartTicks and EndTicks columns has me confused. Why are they labeled as "Ticks"? Because DATETIME is based on two integers where one integer represents the number of whole days and the other represents the number of 1/300ths of a second that have passed since midnight, I don't understand where the word "Ticks" comes into play. Without further explanation or esoteric knowledge, one might think that you were implying "clock ticks" and, without some additional calculation, they are not. Further, since the maximum hex value of the "Time" portion for a DATETIME datatype is 018B81FF (23:59:59.997) and falls drastically short of the FFFFFFFF value before the rollover to the next int (day), the VARBINARY/BIGINT converted value can't represent "ticks" of equal duration of any kind. A simple conversion of DATETIME to FLOAT would get you a lot closer to some form of "ticks" value.
This is the reason why I prefer to use double cast, first to varbinary and then to bigint.
I'm even more confused after that statement. Please explain how your code, as it is posted, ensures that the StartDT and EndDT values are in the proper order even if they were to produce the required human-readable output (and it IS required by most). From what I can see, the code doesn't resolve that problem. Unless I'm missing something, it simply presents two columns where an additional decision would still need to be made as to whether or not the two DATETIMEs were in the correct order or not.
And, of course, although most would never approach the limit and as you have yourself recognized, your conversions do have a limit at 148 years whereas the code in the article does not.
I must be missing something. Please explain.
First let me apologize for the inaccuracy and incompleteness of my initial postings and for implying that a direct cast of the DATETIME to BIGINT would hold any sensible meaning.
To finish what I started:
DATETIME is a structure of two integers: a signed integer for the number of days relative to 1900-01-01, which starts at -53690 for 1753-01-01 (the number of days from 1753-01-01 up to 1900-01-01).
For this reason, the subtraction of two equal datetime values returns 1900-01-01.
It can hold 3012153 days in total, or 2958463 days after 1900-01-01, which lands on 9999-12-31.
And as you correctly said, the second part of the DATETIME structure holds the time from midnight in 1/300th parts of a second, that is, 25920000 intervals per day. This makes 78,075,031,679,999 (1/300th) parts for the full day range of the DATETIME, well within the limits of the BIGINT, 2^63-1 (9,223,372,036,854,775,807). Since the storage size is the same for both data types, I prefer to store both date-time and duration/period as (1/300th)-part BIGINT values if they are to be used for temporal calculations.
The calculation is quite quick and straightforward: extract the day part, add 53690, then multiply by 25920000 and add the time part. To extract the day value one can either shift the bytes down by dividing by 0x100000000 (2^32) or use SUBSTRING; the latter tends to be faster. For the time part, OR or XOR the varbinary with an integer 0 (zero), which implicitly converts its last four bytes to INT.
A sample test code snippet using the #JBMTest table:
DECLARE @Bitbucket BIGINT;
SET STATISTICS TIME ON;
SELECT @Bitbucket = ABS(
((CAST((53690 +
CAST(SUBSTRING(CAST(EndDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(EndDT AS VARBINARY(8)) ^ 0)) -
((CAST((53690 +
CAST(SUBSTRING(CAST(StartDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(StartDT AS VARBINARY(8)) ^ 0))
)
FROM #JBMTest;
SET STATISTICS TIME OFF;
GO
A MIN-MAX demonstration:
DECLARE @MinDT DateTime;
DECLARE @MaxDT DateTime;
SELECT
@MinDT ='1753-01-01 00:00:00.000',
@MaxDT ='9999-12-31 23:59:59.998';
SELECT
@MinDT AS MinDateTime
,@MaxDT AS MaxDateTime
,ABS(
((CAST((53690 +
CAST(SUBSTRING(CAST(@MaxDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(@MaxDT AS VARBINARY(8)) ^ 0)) -
((CAST((53690 +
CAST(SUBSTRING(CAST(@MinDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(@MinDT AS VARBINARY(8)) ^ 0))
) AS TP_300
,ABS(
((CAST((53690 +
CAST(SUBSTRING(CAST(@MaxDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(@MaxDT AS VARBINARY(8)) ^ 0)) -
((CAST((53690 +
CAST(SUBSTRING(CAST(@MinDT AS VARBINARY(8)),1,4) AS int)) AS BIGINT) * 25920000)
+ (CAST(@MinDT AS VARBINARY(8)) ^ 0))
) / 25920000 AS NumberOfDays
;
Again my apologies for the inaccuracy of my previous postings.
January 20, 2014 at 6:03 am
Thanks for the article. I appreciate it.
One question though. It said in the article
It’s important that the DATEDIFF in the formula have a "starting date" of "0" (the numeric equivalent of 1900-01-01) and an "ending date" of the duration calculation so that we don’t introduce any errors simply by crossing an "hour boundary" at the wrong time.
which I expect means the DATEDIFF section should use
DATEDIFF(hh,0,@EndDT-@StartDT)
rather than
DATEDIFF(hh,@StartDT,@EndDT)
which also returns 26 in this example. I guess I don't understand what an "hour boundary" is or why we need to guard against crossing one. Could you give an example where these two DATEDIFF statements would give different results?
January 20, 2014 at 8:11 am
When the hour increments from one value to the next, or from 12 to 1, an hour boundary is said to have been crossed. The same happens for the other time portions too. Try this code:
SELECT *,
[Hours] = DATEDIFF(hour, Starttime,EndTime),
[Minutes] = DATEDIFF(minute, Starttime,EndTime),
Seconds = DATEDIFF(second, Starttime,EndTime),
[millisecond] = DATEDIFF(millisecond, Starttime,EndTime),
[microsecond] = DATEDIFF(microsecond, Starttime,EndTime)
FROM (
SELECT
Starttime = cast('20140120 16:59:59.997' AS DATETIME),
EndTime = cast('20140120 17:00:00.000' AS DATETIME)
) d
There's only 3ms difference between Starttime and EndTime, but an hour boundary was crossed (16:nn:nn to 17:nn:nn), a minute boundary was crossed (16:59:nn to 17:00:nn) and a second boundary was crossed (16:59:59 to 17:00:00).
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
January 20, 2014 at 1:57 pm
Thanks Chris, that helps a lot!
So I'm understanding now that starting at 0 and ending at the difference [i.e. DATEDIFF(hh,0,@EndDT-@StartDT)] allows the query to work when the hour part of @StartDT is larger than the hour part of @EndDT.
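For example (a quick demonstration I worked out), a 45 minute duration that happens to cross an hour boundary makes the two forms disagree:
DECLARE @StartDT DATETIME, @EndDT DATETIME;
SELECT @StartDT = '2014-01-20 23:30:00.000', @EndDT = '2014-01-21 00:15:00.000';
SELECT DATEDIFF(hh,@StartDT,@EndDT) AS BoundaryCount -- 1: an hour boundary was crossed
,DATEDIFF(hh,0,@EndDT-@StartDT) AS WholeHours; -- 0: only 45 minutes actually elapsed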