January 5, 2012 at 7:55 am
L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012)
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with the latest updates.
Maybe someone could try this in Denali:
Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me - do you know if there is an existing bug report on Connect for this?
You are right that it's a type inference problem, of course. If precision and scale (in the T-SQL sense; T-SQL makes awfully inaccurate use of both those terms, so this is far from the normal sense) are part of the type definition, and you also allow type inference to decide the type of a computed column in a table definition so that the definition doesn't have to specify it, then you need a special inference mechanism to decide precision and scale. That becomes a problem when the same special mechanism also rules when the result of a computation is assigned to a column or variable of known type: overflow should be determined by whether the resulting value can be represented in that known type, not by whether a rather pointless kludge, invented to avoid explicitly typing a computed column in the table definition, would implicitly produce a type that could represent the value.
You're probably right. I should say, in case I have given a different impression previously, that the type system in SQL Server is a bit of a mess, and the fear of making improvements that 'break' backward compatibility is a sad and distressing thing. There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away entirely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.
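To see the computed-column side of that argument concretely, here is a minimal sketch of my own (the table and column names are made up, and the inferred type shown in the comment is what I'd expect from the inference rules rather than something quoted from documentation):
-- A computed column whose type is decided by inference, not by the definition:
create table dbo.InferenceDemo
(
    a decimal(38,5) not null,
    c as a * 1.0  -- no explicit type: inference picks the precision and scale
);
-- Inspect what was inferred (expected: decimal, precision 38, scale 6):
select c.name, type_name(c.user_type_id) as type_name, c.precision, c.scale
from sys.columns as c
where c.[object_id] = object_id(N'dbo.InferenceDemo');
drop table dbo.InferenceDemo;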
January 5, 2012 at 8:06 am
SQL Kiwi (1/5/2012)
I need to look at it a bit more closely to understand fully what the script is telling me
I didn't dissect it completely, or even execute it, but based on first glance, I think it loads a DECIMAL(38,5) with a value that uses all 33 digits before the decimal point, then multiplies it by 1. Because SQL Server hardcodes the result to have at least 6 decimal places and at most 38 positions in total, only 32 positions remain before the decimal point, and that's one too few for the current value.
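To spell out the arithmetic (this is my reading of the documented precision/scale rules, so treat the exact reduction formula as an assumption rather than gospel):
-- 1.0 is typed decimal(2,1), so decimal(38,5) * decimal(2,1) infers:
--   raw precision = 38 + 2 + 1 = 41, raw scale = 5 + 1 = 6
--   precision is capped at 38, and the scale is not reduced below 6
--   => decimal(38,6): 38 - 6 = 32 integer digits, one fewer than the value needs
select cast(1 as decimal(38,5)) * 1.0  -- result type decimal(38,6)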
January 5, 2012 at 8:39 am
Hugo Kornelis (1/5/2012)
SQL Kiwi (1/5/2012)
I need to look at it a bit more closely to understand fully what the script is telling me
I didn't dissect it completely, or even execute it, but based on first glance, I think it loads a DECIMAL(38,5) with a value that uses all 33 digits before the decimal point, then multiplies it by 1. Because SQL Server hardcodes the result to have at least 6 decimal places and at most 38 positions in total, only 32 positions remain before the decimal point, and that's one too few for the current value.
Ah yes, that seems to be exactly it. So it probably won't be seen as a bug @ MSFT (just another counter-intuitive example to add to the list). No doubt people on this thread have strong views to the contrary! Thanks for the analysis.
January 5, 2012 at 11:50 am
SQL Kiwi (1/5/2012)
L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012)
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with the latest updates.
Maybe someone could try this in Denali:
Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me - do you know if there is an existing bug report on Connect for this?
As far as I know there is no Connect bug report, and I don't think there's much point in raising one as it's clearly "by design" - the two numbers being multiplied (or divided, or remaindered) are decimal(38,5) and decimal(2,1), so the result has to be decimal(38,6), into which the decimal(38,5) number will not fit. Perhaps I should have given a simpler example of the problem, so here goes:
declare @t table (A decimal(38,5) primary key)
insert @t(A) values((1234567890.0*10000000000.0*10000000000*1000)+0.12345)
select A from @t
update @t set A=A*1.0
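And a quick way to confirm the inferred type of A * 1.0 (a minimal check I'm adding here; SQL_VARIANT_PROPERTY is the standard built-in for this, and the expected output follows from the analysis above):
select
    sql_variant_property(cast(0 as decimal(38,5)) * 1.0, 'BaseType')  as BaseType,
    sql_variant_property(cast(0 as decimal(38,5)) * 1.0, 'Precision') as ResultPrecision,
    sql_variant_property(cast(0 as decimal(38,5)) * 1.0, 'Scale')     as ResultScale
-- expected: decimal, 38, 6 - only 32 integer digits, which is why the update overflows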
There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away entirely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.
I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including base-10 exponents and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.
Tom
January 5, 2012 at 12:00 pm
L' Eomot Inversé (1/5/2012)
As far as I know there is no Connect bug report, and I don't think there's much point in raising one as it's clearly "by design"
Agreed.
I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including base-10 exponents and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.
IIRC that's the same implementation used by the .NET framework. It would make a good Connect suggestion anyway. If you do enter one, link back here and I'll vote for sure.
January 5, 2012 at 6:05 pm
SQL Kiwi (1/5/2012)
Interesting. The documentation looks very similar between ASE 15 and SQL Server:
http://msdn.microsoft.com/en-us/library/ms190476.aspx
The magic number 6 appears for division in both, but not multiplication.
Interesting find ... however we have to read the Sybooks fine print ...
"For numeric and decimal datatypes, each combination of precision and scale is a distinct Adaptive Server datatype."
I only noticed this because the chapter heading is "Datatypes of mixed-mode expressions" and I thought ... hold on a minute, the QOTD is not mixed-mode, because the two multiplicands have identical scale and precision! Therefore the import of Table 1-3 does not apply to the QOTD for Sybase.
The MSDN entry is a bit more cryptic and harder to interpret as to whether the concept of mixed mode applies to numeric values of the same scale and precision. The writers are fixated on decimal ranking higher than numeric and remain silent on the finer points.
Regardless, I guess one can infer from the QOTD outcomes that SQL Server applies the 6-digit guillotine universally?!
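One way to check that inference (a sketch I'm adding; I'm assuming the QOTD operands were two identically-typed high-precision decimals, and using decimal(38,19) purely for illustration):
select
    sql_variant_property(cast(1 as decimal(38,19)) * cast(1 as decimal(38,19)), 'Precision') as ResultPrecision,
    sql_variant_property(cast(1 as decimal(38,19)) * cast(1 as decimal(38,19)), 'Scale')     as ResultScale
-- the raw result would be decimal(77,38); expected output is 38 and 6,
-- i.e. the scale is cut to the 6-digit floor for multiplication as well as division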
January 5, 2012 at 8:02 pm
I got it wrong, though I have always maintained that binary representations are more accurate than decimal ones (which they are of course) :crying:.
Sounds to me like the real problem here is that SQL Server treats DECIMALs too much as strings rather than as numbers. Treating them as strings rather than numbers even when you are doing arithmetic on them (the only reason why you "need" the absurdly high "precision" and get the resulting truncation issue in the first place) really indicates a lack of understanding of the difference between the semantics and the representation of the data type.
After all, there is nothing to stop a number representation based on the decimal system (instead of binary) from also using a floating decimal point. Nor is there any good reason (assuming you want to preserve the on-disk representation for historical reasons) why SQL Server couldn't convert the disk-based strings into a format that treats the intended numbers as numbers when it needs to manipulate them.
So it's a (documented) bug in my book.
January 9, 2012 at 9:02 am
SQL Kiwi (1/5/2012)
IIRC that's the same implementation used by the .NET framework. It would make a good Connect suggestion anyway. If you do enter one, link back here and I'll vote for sure.
I've raised a Connect item.
Tom
January 9, 2012 at 9:07 am
L' Eomot Inversé (1/9/2012)
SQL Kiwi (1/5/2012)
IIRC that's the same implementation used by the .NET framework. It would make a good Connect suggestion anyway. If you do enter one, link back here and I'll vote for sure.
I've raised a Connect item.
Voted 🙂
January 24, 2012 at 2:21 am
Great question, thanks.
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
February 9, 2012 at 4:08 am
Nice question, thanks.
😉