October 3, 2017 at 7:39 am
rmcfaden - Tuesday, October 3, 2017 7:21 AM
Jeff Moden - Tuesday, October 3, 2017 6:59 AM
Good, Sir, with the quality of the code you write, you never have to justify yourself to me. And, yes... I absolutely did realize that the OP added the STUFF thing. When I said that the code was the result of the "collective" efforts, I did include the OP in that. And, yes... the OP didn't cough up that the Job_Prod column was, in fact, unique until fairly late in the game. Heh... I was just doing the BASF thing... "We don't make the paint, we just make it dry a little quicker". 😉
Actually, Jason added the Stuff thing. That was because in my testing I found that oddity for that test data.
I do apologize for not letting everyone in on the uniqueness of the Job_Prod column. It did not dawn on me to mention that. (NOOB)
BTW, I am struggling with which post to acknowledge as the answer.
You guys are awesome and I greatly appreciate all the efforts you have put into assisting me and certainly for all you do for everyone who posts on this site.
I was thinking this morning that I would like to know what you guys know, but then again, if all that knowledge swimming around in your brains leads to insomnia, perhaps I do not want that.
By golly, you're right. I stand corrected.
As for which post to mark as the answer, it's nice but not actually necessary. A whole bunch of people pitched in to get to the final high performance answer. That's how the DelimitedSplit8K function was actually developed... great community effort and that's why I love this community.
Heh... and no... knowing about SQL Server doesn't actually lead to insomnia unless it's also your hobby (I DO have to get a life!). In most cases, the knowledge prevents insomnia because we can go to bed knowing our stuff works. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
October 3, 2017 at 7:48 am
Jeff,
I was just wondering if you used that same test data and found the same results, with 0's where there ought not to be, when not using the STUFF function.
October 3, 2017 at 8:22 am
SELECT
     t.Job_Prod
    ,t.Job_No
    ,TrDate  = DATEADD(DAY, sc.ItemNumber - 1, t.Act_Start)
    ,[Hours] = MAX(CASE WHEN trk.tType = 'A' THEN CONVERT(FLOAT, sc.Item) ELSE 0 END)
    ,Pieces  = MAX(CASE WHEN trk.tType = 'P' THEN CONVERT(INT,   sc.Item) ELSE 0 END)
FROM TABLEA t
CROSS APPLY (VALUES ('A', t.actual_Track), ('P', t.pieces_track)) trk (tType, tValue)
CROSS APPLY dbo.DelimitedSplit8K(trk.tValue, ';') sc
GROUP BY
     t.Job_Prod
    ,t.Job_No
    ,t.Act_Start
    ,sc.ItemNumber
ORDER BY
     t.Job_Prod
    ,sc.ItemNumber
;
How about this? We get rid of the STUFF, move the CONVERT inward, and the output is good even with the test data that has no "0" before the ".".
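For anyone who wants to run that, here's a minimal, hypothetical test rig; the table and column names come from the query above, but the values (including the leading-dot ".25" entry) are invented:

-- Hypothetical sample data for the query above; values are invented.
CREATE TABLE #TABLEA
        (
         Job_Prod     VARCHAR(20)
        ,Job_No       VARCHAR(10)
        ,Act_Start    DATETIME
        ,actual_Track VARCHAR(8000)
        ,pieces_track VARCHAR(8000)
        )
;
INSERT INTO #TABLEA
        (Job_Prod, Job_No, Act_Start, actual_Track, pieces_track)
 VALUES ('J1001-01', 'J1001', '20171002', '7.5;8;.25', '100;120;5')
;

Point the query at #TABLEA instead of TABLEA and the ".25" item should come back as 0.25 once the CONVERT does its work.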
October 3, 2017 at 10:41 am
rmcfaden - Tuesday, October 3, 2017 8:22 AM
How about this? We get rid of the STUFF, move the CONVERT inward, and the output is good even with the test data that has no "0" before the ".".
If you want the output to be identical to what's in the input, then don't do a CONVERT at all.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 3, 2017 at 11:04 am
Jeff Moden - Tuesday, October 3, 2017 10:41 AM
If you want the output to be identical to what's in the input, then don't do a CONVERT at all.
No, I definitely want it as numeric.
October 3, 2017 at 3:49 pm
rmcfaden - Tuesday, October 3, 2017 11:04 AM
Jeff Moden - Tuesday, October 3, 2017 10:41 AM
If you want the output to be identical to what's in the input, then don't do a CONVERT at all.
No, I definitely want it as numeric.
Understood. I'd be real tempted to do something other than FLOAT, though.
--Jeff Moden
Change is inevitable... Change for the better is not.
October 3, 2017 at 4:25 pm
Is it a memory hog?
Would just a decimal(10,5) be better?
October 4, 2017 at 6:13 am
rmcfaden - Tuesday, October 3, 2017 4:25 PM
Is it a memory hog?
Would just a decimal(10,5) be better?
Jeff's not concerned about memory here, but accuracy. Floating point representation is not entirely accurate: some values simply can NOT be represented exactly, and of all possible values, that's a pretty significant fraction. I don't trust floating point math for that reason. Also, every time you multiply or divide numbers that have digits to the right of the decimal place, the result grows in terms of the number of digits to the right of the decimal place. Multiplying two numbers with 2 decimal places each produces a number with 4 decimal places. On the division side, you can immediately end up with an infinite number of decimal places. If there are multiple math operations, your ability to maintain enough decimal places can run out rather quickly.
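A quick T-SQL illustration of that scale growth; nothing here comes from the thread's tables, just plain literals:

-- Two factors with 2 decimal places each yield 4 decimal places:
SELECT 1.05 * 2.35;                                           -- 2.4675
-- Division can need infinitely many decimal places, so SQL Server
-- has to cut the result off at whatever scale its rules allow:
SELECT CAST(1 AS DECIMAL(10,2)) / CAST(3 AS DECIMAL(10,2));   -- 0.3333333333333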
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
October 4, 2017 at 8:47 am
FLOAT is actually VERY powerful if you understand how it works, that you must never do implicit formatting during calculations, that precision is fixed (meaning the scale is subject to change based on the values to the left of the decimal point), and that proper rounding for the final result must be taken into consideration.
Unfortunately, most people that use it don't know how to use it properly. The following demonstration code shows the "approximation" that Steve is talking about and it throws some people for a loop because they're not used to working with "binary floating point", which is the reason why some values can't be precise.
SET NOCOUNT ON
;
DECLARE  @SomeFloat   FLOAT
        ,@SomeDecimal DECIMAL(9,2)
        ,@Counter     INT
;
SELECT  @SomeFloat   = 0
       ,@SomeDecimal = 0
       ,@Counter     = 1
;
CREATE TABLE #MyHead
        (
         SomeFloat   FLOAT
        ,SomeDecimal DECIMAL(9,2)
        )
;
--===== Add a penny 1,000 times, keeping a running total in both data types.
WHILE @Counter <= 1000
  BEGIN
        SELECT  @SomeFloat   = @SomeFloat   + .01
               ,@SomeDecimal = @SomeDecimal + .01
               ,@Counter     = @Counter     + 1
        ;
        INSERT INTO #MyHead
                (SomeFloat, SomeDecimal)
         SELECT @SomeFloat, @SomeDecimal
        ;
  END
;
--===== Every row returned is a running total where binary FLOAT has
     -- drifted away from the exact DECIMAL total.
SELECT *
  FROM #MyHead
 WHERE SomeFloat <> SomeDecimal
;
Unfortunately, the precise DECIMAL/NUMERIC data types also have problems: if a given calculation would exceed a precision of 38, SQL Server rounds the answer to a scale of 6 (IIRC), producing a cumulative buildup of inaccuracy. I think it odd that SQL Server does this. Granny's 4-function calculator can be more accurate for successive calculations.
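A minimal sketch of that truncation, using my own made-up literals:

-- The true product is 0.00000001, but p1+p2+1 = 77 blows past the cap of 38,
-- so the result scale collapses to 6 and the answer comes back as zero:
SELECT CAST(0.0001 AS DECIMAL(38,10)) * CAST(0.0001 AS DECIMAL(38,10));  -- 0.000000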
So, FLOAT is OK IF and only IF you properly round displayed answers rather than rounding interim results. You just have to be very keen on that, and remember that WHERE clauses looking for WHOLE numbers or a range of numbers may not actually work as expected because of the approximate values that FLOAT sometimes produces. An interesting proof of that, and the resulting "silent failure", may be found in the following code.
DECLARE @Float1 FLOAT, @Float2 FLOAT, @Float3 FLOAT, @SUM FLOAT
;
SELECT  @Float1 = 42.00
       ,@Float2 = 0.03
       ,@Float3 = 1.01
;
SELECT @SUM = @Float3 + @Float1 + @Float2
;
--===== Subtracting the same three values back out "should" leave exactly 0,
     -- but binary floating point rounding leaves a tiny residue behind,
     -- so the equality test silently fails.
SELECT  [Should be 0] = @SUM - @Float3 - @Float1 - @Float2
       ,LoanStatus    = CASE
                         WHEN @SUM - @Float3 - @Float1 - @Float2 = 0
                         THEN 'Loan is paid off'
                         ELSE 'Still has Balance'
                        END
;
SELECT @Float1, @Float2, @Float3, @SUM
;
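And, if you apply the "round the final result" advice from above, the comparison behaves. The 10-decimal-place tolerance below is just an arbitrary pick for this demo:

--===== Same test, but rounding the final result before comparing it to zero.
DECLARE @Float1 FLOAT = 42.00, @Float2 FLOAT = 0.03, @Float3 FLOAT = 1.01, @SUM FLOAT;
SELECT @SUM = @Float3 + @Float1 + @Float2;
SELECT LoanStatus = CASE
                     WHEN ROUND(@SUM - @Float3 - @Float1 - @Float2, 10) = 0
                     THEN 'Loan is paid off'
                     ELSE 'Still has Balance'
                    END;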
--Jeff Moden
Change is inevitable... Change for the better is not.
October 4, 2017 at 10:46 am
Excellent demonstration of the FACTS, Jeff!!! It's a shame more people don't have some basic math education on this stuff.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
October 4, 2017 at 7:21 pm
sgmunson - Wednesday, October 4, 2017 10:46 AM
Excellent demonstration of the FACTS, Jeff!!! It's a shame more people don't have some basic math education on this stuff.
Thanks, Steve. I appreciate the feedback.
FLOAT isn't just basic math, though. At least it's not basic DECIMAL math. It's BINARY floating point math, and a whole lot of people, even math "experts", can't grok what it does or what the ramifications are. Certainly there are many who don't understand why certain decimal fractional values can't be represented even in 17 bytes of precision, even if all of it is applied to scale. I once interviewed a person for a lead developer position and confirmed that he actually did have a PhD in mathematics... and yet when I asked him to convert 14 (Base 16) to Base 10 (just 1x16 + 4 = 20), he told me he'd need a piece of paper and pencil or his math calculator to determine the answer. I'm really happy I didn't ask him the question in Base 2 terms. 😉
As for the DECIMAL/NUMERIC data types in SQL Server, I think what they've done is atrocious. Here's a simple problem that even Granny's 4-function calculator gets right but SQL Server gets wrong, because it rounds to six decimal places due to the wonky formula it uses to determine the resulting data type when the sum of the precisions and the sum of the scales of the input data types exceed certain limits.
SELECT CAST(0.0000009000 AS DECIMAL(30,10))
     * CAST(1.0000000000 AS DECIMAL(30,10))
;  -- returns 0.000001 instead of 0.0000009000
You've just gotta know Granny is counting pennies on her homemade loan amortization schedule whether she uses a calculator or a spreadsheet, both of which calculate the result correctly. And that's just with two numbers. It cascades through the order of operations.
To understand the atrocity, please see the following article. Despite the claim at the top of the article, this isn't new as of 2012. It's always been this way and I believe that Sybase has the same affliction when it comes to FLOAT.
https://docs.microsoft.com/en-us/sql/t-sql/data-types/precision-scale-and-length-transact-sql
Interestingly enough, if you keep the precision of the result at or below 38, it comes up with the correct answer. But don't forget that this cascades throughout a formula, because the precision of the result for multiplication is p1+p2+1 according to the link above.
SELECT CAST(0.0000009000 AS DECIMAL(12,10))
     * CAST(1.0000000000 AS DECIMAL(12,10))
;  -- precision = 12+12+1 = 25, so the full scale survives: 0.00000090000000000000
--Jeff Moden
Change is inevitable... Change for the better is not.
October 4, 2017 at 7:53 pm
This is one of the better explications I've come across for floating point numbers... https://youtu.be/PZRI1IfStY0
October 4, 2017 at 9:13 pm
Jason A. Long - Wednesday, October 4, 2017 7:53 PM
This is one of the better explications I've come across for floating point numbers... https://youtu.be/PZRI1IfStY0
Educational. Thank you, Jason, for sharing that.
October 4, 2017 at 10:21 pm
rmcfaden - Wednesday, October 4, 2017 9:12 PM
Jason A. Long - Wednesday, October 4, 2017 7:53 PM
This is one of the better explications I've come across for floating point numbers... https://youtu.be/PZRI1IfStY0
Educational. Thank you, Jason, for sharing that.
Not a problem. If you like stuff like that, it's a channel worth subscribing to. There's also the Numberphile channel, which is really good too.
October 5, 2017 at 4:33 am
Here is a better solution, I guess.
CREATE FUNCTION dbo.split (@str VARCHAR(100), @delimiter VARCHAR(1))
RETURNS @t TABLE (itemnumber INT IDENTITY(1,1), val VARCHAR(100))
AS
BEGIN
        DECLARE @pos INT = CHARINDEX(@delimiter, @str);

        -- Peel off one item per pass. CHARINDEX returns 0 when no delimiter
        -- remains, which ends the loop (the original version looped forever
        -- on strings that contained no delimiter at all).
        WHILE @pos > 0
        BEGIN
                INSERT @t (val) SELECT SUBSTRING(@str, 1, @pos - 1);
                SET @str = SUBSTRING(@str, @pos + 1, LEN(@str));
                SET @pos = CHARINDEX(@delimiter, @str);
        END;

        -- Whatever is left after the last delimiter is the final item.
        INSERT @t (val) SELECT @str;
        RETURN;
END;
go
SELECT  TA.Job_Prod
       ,TA.Job_No
       ,CONVERT(DATE, TA.Act_Start) AS TrDate
       ,S1.val AS HoursValue
       ,S2.val AS PiecesValue
  FROM TABLEA AS TA
 CROSS APPLY dbo.split(TA.actual_Track, ';') AS S1
 CROSS APPLY dbo.split(TA.pieces_track, ';') AS S2
 WHERE S1.itemnumber = 4
   AND S2.itemnumber = 4
 ORDER BY TA.Job_Prod;
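A quick sanity check of the function itself, with made-up strings; the second call is the case that hung the original loop:

SELECT * FROM dbo.split('7.5;8;.25', ';');  -- 3 rows: 7.5 / 8 / .25
SELECT * FROM dbo.split('42', ';');         -- 1 row; the unpatched loop never ended here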
First solve the problem, then write the code!