February 10, 2009 at 9:08 am
Hi All
I am working on converting an integer time into a more readable format, like 00:00:00 (hh:mm:ss). Here are some sample values:
91700
212000
90200
1
Thanks
Craig
February 11, 2009 at 2:07 am
You can try this:
stuff(stuff(right('000000'+your_column,6), 3,0, ':'), 6,0, ':')
February 12, 2009 at 2:49 pm
STUFF takes a character_expression, so you need to convert the int to char first.
select stuff(stuff(right('000000'+convert(varchar(10), mycolumn),6), 3,0, ':'), 6,0, ':') from mytable
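For reference, here is a quick check of that expression against the sample values from the original post (the derived table below is just for illustration):
select mycolumn,
       [Formatted Time] = stuff(stuff(right('000000'+convert(varchar(10), mycolumn),6), 3,0, ':'), 6,0, ':')
from ( select mycolumn = 91700 union all
       select mycolumn = 212000 union all
       select mycolumn = 90200 union all
       select mycolumn = 1 ) a
which returns 09:17:00, 21:20:00, 09:02:00, and 00:00:01.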
February 12, 2009 at 3:07 pm
select
TM,
[Integer Time To Datetime] =
dateadd(ss,(tm%100)+(((tm/100)%100)*60)+((tm/10000)*3600),0)
from
( -- Test Times
select tm = 91700 union all
select tm = 212000 union all
select tm = 90200 union all
select tm = 1
) a
Results:
TM Integer Time To Datetime
----------- ------------------------------------------------------
91700 1900-01-01 09:17:00.000
212000 1900-01-01 21:20:00.000
90200 1900-01-01 09:02:00.000
1 1900-01-01 00:00:01.000
(4 row(s) affected)
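In case the integer math above isn't obvious, here is a minimal sketch of how the expression pulls the pieces out of one of the sample values (91700, i.e. 09:17:00); the variable is just for illustration:
declare @tm int
set @tm = 91700
select
    [Hours]   = @tm / 10000,        -- 91700 / 10000 = 9
    [Minutes] = (@tm / 100) % 100,  -- 917 % 100     = 17
    [Seconds] = @tm % 100,          -- 91700 % 100   = 0
    [Total Seconds] = (@tm % 100) + (((@tm / 100) % 100) * 60) + ((@tm / 10000) * 3600) -- 33420
dateadd(ss, 33420, 0) then yields 1900-01-01 09:17:00.000, the first row of the results above.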
February 12, 2009 at 9:02 pm
I think you'll find that Michael's solution is absolutely the fastest because it uses only 1 implicit conversion and it's not character based.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 13, 2009 at 7:33 am
Of course, this SQL 2008 version is probably closer to what you really want.
select
TM,
[Integer Time To Time] =
convert(time(0),dateadd(ss,(tm%100)+(((tm/100)%100)*60)+((tm/10000)*3600),0))
from
( -- Test Times
select tm = 91700 union all
select tm = 212000 union all
select tm = 90200 union all
select tm = 1
) a
Results:
TM Integer Time To Time
----------- --------------------
91700 09:17:00
212000 21:20:00
90200 09:02:00
1 00:00:01
(4 row(s) affected)
February 13, 2009 at 7:50 am
My test shows a 50% : 50% execution plan cost split for these two queries.
create table mytest (mytime int)
insert mytest values (91700)
insert mytest values (212000)
insert mytest values (90200)
insert mytest values (1)
SET STATISTICS TIME ON
-- code 1:
select stuff(stuff(right('000000'+convert(varchar(10), mytime),6), 3,0, ':'), 6,0, ':') from mytest
-- code 2:
select [Integer Time To Datetime] =
dateadd(ss,(mytime%100)+(((mytime/100)%100)*60)+((mytime/10000)*3600),0)
from mytest
February 13, 2009 at 10:32 am
Vivien Xing (2/13/2009)
My test shows 50% : 50% execution plan cost for these 2 codes...
Execution plan cost is not much of an indication of actual performance.
Typically, you would need to test this with a few million rows of data to see the difference in performance. However, my past experience has always shown that date manipulations that involve casting to strings are much slower than the date functions.
This thread shows some tests that compare using date functions vs. string functions for the same thing.
Building a date
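As a rough sketch of the kind of test that shows the difference (the table name, the million-row build from SysColumns, and the variable assignment to keep display time out of the measurement are all just for illustration, along the same lines as the test code further down this thread):
--===== Build a million rows of random integer times (hhmmss format)
SELECT TOP 1000000
        IntTime = (ABS(CHECKSUM(NEWID()))%24)*10000 -- random hour 00-23
                + (ABS(CHECKSUM(NEWID()))%60)*100   -- random minute 00-59
                +  ABS(CHECKSUM(NEWID()))%60        -- random second 00-59
  INTO dbo.IntTimeTest
  FROM Master.dbo.SysColumns t1
 CROSS JOIN Master.dbo.SysColumns t2

--===== Time both methods, assigning to variables so display time isn't measured
DECLARE @BitbucketC VARCHAR(10), @BitbucketD DATETIME
SET STATISTICS TIME ON
SELECT @BitbucketC = stuff(stuff(right('000000'+convert(varchar(10), IntTime),6), 3,0, ':'), 6,0, ':')
  FROM dbo.IntTimeTest
SELECT @BitbucketD = dateadd(ss,(IntTime%100)+(((IntTime/100)%100)*60)+((IntTime/10000)*3600),0)
  FROM dbo.IntTimeTest
SET STATISTICS TIME OFF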
February 13, 2009 at 9:27 pm
Thank you for pointing it out. I tested both using half a million records; your query is 2+ times faster than the query using stuff/convert.
I had my doubts about the conversion cost at the beginning, but I did not have another query handy to compare it with.
Every day is a school day.
February 14, 2009 at 11:21 am
Vivien Xing (2/13/2009)
My test shows a 50% : 50% execution plan cost split for these two queries.
create table mytest (mytime int)
insert mytest values (91700)
insert mytest values (212000)
insert mytest values (90200)
insert mytest values (1)
SET STATISTICS TIME ON
-- code 1:
select stuff(stuff(right('000000'+convert(varchar(10), mytime),6), 3,0, ':'), 6,0, ':') from mytest
-- code 2:
select [Integer Time To Datetime] =
dateadd(ss,(mytime%100)+(((mytime/100)%100)*60)+((mytime/10000)*3600),0)
from mytest
Don't trust it... I can show you where things like % of batch and supposed "costs" lie like a rug. For example... what does the execution plan say about these two code snippets and what actually happens? First, here's the test data...
--===== Create and populate a 1,000,000 row test table.
-- Column "RowNum" has a range of 1 to 1,000,000 unique numbers
-- Column "SomeInt" has a range of 1 to 50,000 non-unique numbers
-- Column "SomeLetters2" has a range of "AA" to "ZZ" non-unique 2 character strings
-- Column "SomeMoney has a range of 0.0000 to 99.9999 non-unique numbers
-- Column "SomeDate" has a range of >=01/01/2000 and <01/01/2010 non-unique date/times
-- Column "SomeCSV" contains 'Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10'
-- for all rows.
-- Column "SomeHex12" contains 12 random hex characters (ie, 0-9,A-F)
-- Jeff Moden
SELECT TOP 1000000
RowNum = IDENTITY(INT,1,1),
SomeInt = ABS(CHECKSUM(NEWID()))%50000+1,
SomeLetters2 = CHAR(ABS(CHECKSUM(NEWID()))%26+65)
+ CHAR(ABS(CHECKSUM(NEWID()))%26+65),
SomeCSV = CAST('Part01,Part02,Part03,Part04,Part05,Part06,Part07,Part08,Part09,Part10' AS VARCHAR(80)),
SomeMoney = CAST(ABS(CHECKSUM(NEWID()))%10000 /100.0 AS MONEY),
SomeDate = CAST(RAND(CHECKSUM(NEWID()))*3653.0+36524.0 AS DATETIME),
SomeHex12 = RIGHT(NEWID(),12)
INTO dbo.JBMTest
FROM Master.dbo.SysColumns t1
CROSS JOIN Master.dbo.SysColumns t2
--===== A table is not properly formed unless a Primary Key has been assigned
ALTER TABLE dbo.JBMTest
ADD PRIMARY KEY CLUSTERED (RowNum)
... and now, the two code snippets...
--===== Execution plan says these are the same...
-- Are they?
SET STATISTICS TIME ON
SELECT TOP 10 *
FROM dbo.JBMTest
ORDER BY SomeDate
SET ROWCOUNT 10
SELECT *
FROM dbo.JBMTest
ORDER BY SomeDate
SET ROWCOUNT 0
SET STATISTICS TIME OFF
We can show an even bigger disparity between execution plan costs and what actually happens, especially when a bit of RBAR enters the picture... look at the costs and percent of batch for these two, and then run them to see what really happens...
SET NOCOUNT ON
--=======================================================================================
-- Recursive method shown by (Name with-held)
--=======================================================================================
PRINT '========== Recursive method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @DateVal DATETIME
SET @DateVal = '2008-01-01'
;with mycte as
(
select @DateVal AS DateVal
union all
select DateVal + 1
from mycte
where DateVal + 1 < DATEADD(yy, 5, @DateVal)
)
select @Bitbucket = d.dateval
from mycte d
OPTION (MAXRECURSION 0)
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
GO
--=======================================================================================
-- Tally table method by Jeff Moden
--=======================================================================================
PRINT '========== Tally table method =========='
--===== Turn on some performance counters ===============================================
SET STATISTICS IO ON
SET STATISTICS TIME ON
DECLARE @Bitbucket DATETIME --Holds display output so display times aren't measured.
--===== Execute the code being tested ===================================================
DECLARE @StartDate AS DATETIME
SET @StartDate = '2008-01-01'
SELECT TOP (DATEDIFF(dd,@StartDate,DATEADD(yy,5,@StartDate)))
@Bitbucket = @StartDate-1+t.N
FROM Tally t
ORDER BY N
--===== Turn off the performance counters and print a separator =========================
SET STATISTICS TIME OFF
SET STATISTICS IO OFF
PRINT REPLICATE('=',90)
The reason is that the execution plan for the recursive CTE covers ONLY the first iteration of the recursion.
Bottom line: you can't simply say that something has a lower cost according to the execution plan and assume it will always be faster.
--Jeff Moden
Change is inevitable... Change for the better is not.