February 27, 2006 at 4:10 am
declare
@a decimal(38,20),
@b decimal(38,20)
select
@a = 1.0000000000000000/1.4500000000000,
@b = 3000.14561000000000000000000000
select @a, @b, @a * @b, @b * (1.0/1.45)
.68965517241379310345
3000.14561000000000000000
2069.065938
2069.0654206645500000
I'm pretty sure I want the final result, but I am wondering why my scale-20 decimal is being rounded to 6 places. What datatype is 1.0/1.45 likely to be?
SQL 2000 SP3
February 27, 2006 at 4:26 am
Meh, think I am being a bit silly.
The 1.0/1.45 seems to be coming back as a very different value, rounded again to 6 decimal places. When I pad it with a few zeros we get a closer value, and finally the same one.
However, when I take some big decimals like @a = 1.0000000000000000/1.4552325465737373654654,
@b = 3000.1456144545446216545, it still seems to come back at 6 decimal places.
Can anyone shed some light on this?
Cheers.
February 27, 2006 at 4:41 am
I'm not going to say I understand how, because I don't, but I'm guessing that what you're seeing is the result of truncation. Here's a snippet from BOL about how precision and scale are handled.
The operand expressions are denoted as expression e1, with precision p1 and scale s1, and expression e2, with precision p2 and scale s2. The precision and scale for any expression that is not decimal is the precision and scale defined for the data type of the expression.
Operation | Result precision | Result scale * |
---|---|---|
e1 + e2 | max(s1, s2) + max(p1-s1, p2-s2) + 1 | max(s1, s2) |
e1 - e2 | max(s1, s2) + max(p1-s1, p2-s2) | max(s1, s2) |
e1 * e2 | p1 + p2 + 1 | s1 + s2 |
e1 / e2 | p1 - s1 + s2 + max(6, s1 + p2 + 1) | max(6, s1 + p2 + 1) |
* The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
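The table and the starred note can be turned into a small calculator. This is a sketch in Python, for illustration only: `MAX_P` and the reduction step come from the note above, and the literal typings in the comments assume SQL Server's rule that a bare numeric literal is typed as a DECIMAL just wide enough to hold it.

```python
MAX_P = 38  # absolute maximum precision, per the starred note

def divide_type(p1, s1, p2, s2):
    """Result type of e1 / e2 per the BOL table, with the starred
    reduction applied when the raw precision exceeds 38."""
    scale = max(6, s1 + p2 + 1)
    precision = p1 - s1 + s2 + scale
    if precision > MAX_P:
        # Shrink the scale so the integral part is not truncated.
        scale -= precision - MAX_P
        precision = MAX_P
    return precision, scale

# 1.0 is DECIMAL(2,1) and 1.45 is DECIMAL(3,2), so 1.0/1.45
# comes out as DECIMAL(9,6) -- hence the rounding to 6 places.
print(divide_type(2, 1, 3, 2))      # (9, 6)

# The long literals in the first post: (17,16) / (14,13).
print(divide_type(17, 16, 14, 13))  # (38, 24)
```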
Maybe the above explains the behaviour you're seeing?
/Kenneth
February 27, 2006 at 4:56 am
Thanks, yes, I looked at this a bit but couldn't really make that much sense of it.
I am using precision 38 throughout with scale 15.
Reading the note you posted made me try this:
declare
@a decimal(28,15),
@b decimal(28,15),
@c decimal(28,15)
select
@a = 1.4552325465737373654654,
@b = 3.1456144545446216545,
@c = @a * @b
select @c
Then again, dropping the precision to 18.
By dropping the precision we increase the result's scale.
However by making them all (38,33) we still retain the scale.
I am getting myself horribly confused here...
February 27, 2006 at 10:15 pm
The key is in the snippet Kenneth posted...
e1 / e2 | p1 - s1 + s2 + max(6, s1 + p2 + 1) | max(6, s1 + p2 + 1) |
In English, unless all of the operands are decimal, the result will have a max of 6 places of scale (see the max(6, s1 + p2 + 1) term above).
When you have things like 1.0000000000000000/1.4500000000000 in a formula, even though the answer is being stored in a decimal variable, the two operands are FLOAT and the answer will be truncated to 6 places before the answer is stored. The only real way to get around this (I think, could be wrong) is to either store the two numbers in two other decimal variables or...
CAST(1.0000000000000000 AS DECIMAL(38,20))/CAST(1.4500000000000 AS DECIMAL(38,20))
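The effect Jeff describes can be mimicked with Python's decimal module (a sketch, not T-SQL): once the quotient has been rounded to 6 places of scale, storing it in a wider DECIMAL target cannot bring the lost digits back; it only appends zeros.

```python
from decimal import Decimal, ROUND_HALF_UP

# Quotient computed at only 6 places of scale...
narrow = (Decimal(1) / Decimal("1.45")).quantize(
    Decimal("0.000001"), rounding=ROUND_HALF_UP)
print(narrow)    # 0.689655

# ...then "stored" in a 20-place target: only zeros are appended.
widened = narrow.quantize(Decimal("1e-20"))
print(widened)   # 0.68965500000000000000

# Dividing at full working precision first keeps the digits.
print(Decimal(1) / Decimal("1.45"))
```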
--Jeff Moden
Change is inevitable... Change for the better is not.
February 28, 2006 at 3:46 am
Nah, tried that; it's the same.
The only thing that increased the scale of the answer was, as I mentioned, dropping the precision of the items to be multiplied, or dramatically increasing the scale.
The bit that Jeff highlighted I still don't understand. It seems to suggest (to me) that, for division, the scale would be the bigger of 6 or 15+38+1. But I moved away from division into simple multiplication (the post to which Jeff replied), so the scale should be 15+15, while the precision would be 38+38+1. Given the starred annotation (below) I see that this is truncating the scale, but the numbers really don't seem to add up. Perhaps my maths is flaky, or I just don't get what it says.
* The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
I would really like to understand what is going on here, it seems natural to take 2 big decimals to increase the accuracy at which you can operate - but that seems to be dramatically at odds with implementational reality.
February 28, 2006 at 3:58 am
You are having problems because of implicit conversions.
Declare variables for the original values, assign those values, and then start the calculations.
Otherwise all your numbers will be implicitly converted to type real or float, depending on the number of decimal places.
_____________
Code for TallyGenerator
February 28, 2006 at 9:57 am
No, I am not.
Try this:
declare
@a decimal(38,15),
@b decimal(38,15),
@c decimal(38,15)
select
@a = cast(1.4552325465737373654654 as decimal(38,15)),
@b = cast(3.1456144545446216545 as decimal(38,15)),
@c = @a * @b
select @c
go
declare
@a decimal(18,15),
@b decimal(18,15),
@c decimal(18,15)
select
@a = cast(1.4552325465737373654654 as decimal(18,15)),
@b = cast(3.1456144545446216545 as decimal(18,15)),
@c = @a * @b
select @c
The first gives 4.577601000000000
The second gives 4.577600533226127
The only difference is the precision of the variables. What I want to know is how to effectively predict this behaviour and what to do about measuring things accurately...
March 1, 2006 at 3:53 am
So is anyone able to explain this accurately in easy to understand terms?
Cheers
March 1, 2006 at 4:28 am
Hi,
You have to use a query such as this to maintain your decimal places.
SELECT CONVERT(NUMERIC(38,20), 1.0/1.45)
Result:
.68965500000000000000
REGARDS
AMIT GUPTA
March 1, 2006 at 7:35 am
If we could step away from the 1/x issue, which I acknowledged earlier was an oversight, and focus on my later question, which I shall repost for the sheer joy of it:
declare
@a decimal(38,15),
@b decimal(38,15),
@c decimal(38,15)
select
@a = cast(1.4552325465737373654654 as decimal(38,15)),
@b = cast(3.1456144545446216545 as decimal(38,15)),
@c = @a * @b
select @c
C: 4.577601000000000
go
declare
@a decimal(18,15),
@b decimal(18,15),
@c decimal(18,15)
select
@a = cast(1.4552325465737373654654 as decimal(18,15)),
@b = cast(3.1456144545446216545 as decimal(18,15)),
@c = @a * @b
select @c
C: 4.577600533226127
The first gives 4.577601000000000
The second gives 4.577600533226127
Why when I lower the overall precision does the scale remain adequate? What formula does it really follow in the truncation?
March 1, 2006 at 11:53 am
Hi Wangkhar,
The answer is in the precision of the result being over 38 (p1 + p2 + 1 on multiplication). When you multiply large numbers, the result may be larger than what fits in the precision of the original numbers, and since the system has no way to predict what precision the result will need, it tries to avoid creating an arithmetic overflow unnecessarily. I am not sure why in your example it keeps 6 for a scale, probably a default, but until you hit that threshold of 6 the formula should be:
resultant scale - (resultant precision - 38) = actual scale
(15 + 15) - ((30 + 30 + 1) - 38) = 30 - (61 - 38) = 30 - 23 = 7
select CAST(100200300400.4552325465737373654654 AS DECIMAL(30,15))
*CAST(100200300400.1456144545446216545 AS DECIMAL(30,15))
Decrease precision to 27 and:
(15 + 15) - ((27 + 27 + 1) - 38) = 30 - (55 - 38) = 30 - 17 = 13
select CAST(100200300400.4552325465737373654654 AS DECIMAL(27,15))
*CAST(100200300400.1456144545446216545 AS DECIMAL(27,15))
If the result of the above were forced into a DECIMAL(30,15) it would overflow, as there are 23 digits to the left of the decimal place. So in order to prevent that, the scale is decreased.
In your example, the second calculation has no reduction in scale because 18 + 18 + 1 = 37, which is still less than the max precision of 38, so you get the full 15 + 15 = 30 for scale.
From Books on-line: * The result precision and scale have an absolute maximum of 38. When a result precision is greater than 38, the corresponding scale is reduced to prevent the integral part of a result from being truncated.
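Eric's arithmetic can be checked mechanically. This is a Python sketch: the reduction step is the starred BOL note, while the floor of 6 on the reduced scale is inferred from the results in this thread (the DECIMAL(38,15) case) rather than from the quoted text.

```python
MAX_P = 38  # absolute maximum precision, per the BOL note

def multiply_scale(p, s):
    """Actual scale of DECIMAL(p,s) * DECIMAL(p,s) after reduction."""
    precision = p + p + 1
    scale = s + s
    if precision > MAX_P:
        # Scale is cut by the overflow, but apparently never below 6.
        scale = max(scale - (precision - MAX_P), 6)
    return scale

print(multiply_scale(30, 15))  # 7  -- Eric's first example
print(multiply_scale(27, 15))  # 13 -- Eric's second example
print(multiply_scale(18, 15))  # 30 -- precision 37 <= 38, no reduction
print(multiply_scale(38, 15))  # 6  -- matches 4.577601000000000 above
```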
Hope this helps.
Eric
March 2, 2006 at 3:33 am
Thanks, that is a good explanation of what is happening, and the BOL excerpt is what I am asking about. My problem still remains with this default of 6.
But thanks for the walkthrough.
March 2, 2006 at 8:57 pm
Well, I could not find any documentation on the minimum scale of 6 on a multiplication of 2 decimal values, but it makes logical sense that you would not want to entirely wipe out your scale just to prevent a theoretical overflow. A value of 6 would seem to be enough for most situations, but not overkill.
Eric