September 9, 2016 at 7:54 am
RossOverThere (9/9/2016)
sknox (9/9/2016)
Good question, but the explanation is incomplete. Precision is only half of the story, the other half being scale. 1.0 can fit in a number with a precision of 1, if the scale is 0.
But 0.7 has precision of 1 and scale of 1. That means that it can only have one significant digit (precision = 1), and that 1 significant digit must lie to the right of the decimal point (scale = 1).
This leaves space for (precision - scale) = 0 digits to the left of the decimal point.
So the correct explanation is that the combination of precision and scale causes the 1.0 to overflow the numeric data type.
Edit: Fixed Formatting
That's the explanation I needed - thanks!
So the default type of 0.7 is something akin to NUMERIC(1,1). When ROUND calculates its result based on a NUMERIC or DECIMAL, it returns a DECIMAL of the same precision and scale, and 1.0 doesn't fit into a DECIMAL(1,1).
The world makes sense again.
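For anyone who wants to check this for themselves, here's a quick sketch (the column aliases are just my own names): SQL_VARIANT_PROPERTY reports the precision and scale SQL Server infers for the literal, and the same ROUND call stops failing once the value is cast to a type with room for the integer digit.
SELECT SQL_VARIANT_PROPERTY(0.7, 'BaseType')  AS [BaseType],   -- numeric
       SQL_VARIANT_PROPERTY(0.7, 'Precision') AS [Precision],  -- 1
       SQL_VARIANT_PROPERTY(0.7, 'Scale')     AS [Scale];      -- 1
SELECT ROUND(0.7, 0);                          -- fails with an arithmetic overflow error
SELECT ROUND(CAST(0.7 AS decimal(2,1)), 0);    -- 1.0, now that there is room for the integer digit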
Thanks sknox/RossOverThere. Light begins to dawn.
Seriously though, I think this is one of too many cases where something should just work but doesn't due to the intervention of some bit of small print in the design. Sure, it may be documented somewhere in BOL, but that isn't the point. Why have unnecessary trip traps? Not to keep us SQLizers in work surely - we've got plenty of bigger stuff to be getting on with.
MarkD
September 9, 2016 at 8:17 am
Yeah, this behavior due to the typing has caused quite a bit of grief in the community.
There have been several connect items reported for it, including https://connect.microsoft.com/SQLServer/feedback/details/791155/round-function-returns-arithmetic-overflow-for-literal-values-0-5-and-1, where Paul White lists the links to a few other complaints.
Microsoft's response has consistently been to point out that you should just make sure the data type being used allows for any results you might expect (by declaring variables with a suitable precision and scale, and CAST/CONVERTing numeric literals to an appropriate precision/scale instead of relying on SQL Server to infer them).
In one of the older connect items, the MS engineer noted they could change the behavior so that the precision/scale of the return value of ROUND is altered to avoid the error, but that it would have an undesired impact on persisted computed columns and indexed views, so they have no intention of changing it.
Cheers!
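For completeness, here's a minimal sketch of the workaround Microsoft suggests (the variable name @x is just for illustration): declare the variable, or convert the literal, with enough precision to hold the rounded result.
DECLARE @x decimal(5,1) = 0.7;
SELECT ROUND(@x, 0);                          -- 1.0; decimal(5,1) has room for the integer digit
SELECT ROUND(CONVERT(decimal(5,1), 0.7), 0);  -- same idea applied directly to a literal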
September 9, 2016 at 11:56 am
Got surprised by that one. Not particularly intuitive.
Not that I would have committed it to production code; I would have found out by testing it.
By the way, this is one more reason why I avoid rounding in the results. I prefer to have the client (an SSRS report, say) do its own rounding. Stuff related to presentation I prefer to relegate to the client doing the presentation.
September 9, 2016 at 12:00 pm
Good question.
I think MS has got the selection of precision badly wrong for literals with 0 integer part.
Since we are not permitted by MS SQL Server to write the literal as .7 instead of as 0.7, we are forced to write at least one digit before the decimal point. If we always have at least one digit before the decimal point, the precision should be greater than the scale, because the precision is the number of digits before the decimal point plus the scale. So it is unreasonable to assign precision 1, scale 1 to the literal; it should be assigned precision 2, scale 1.
And the argument that presentation and internal representation don't have to match doesn't cut any ice here, because making the presentation pointlessly misleading about the internal representation is just plain stupid, especially when the type has the name "decimal", which has been used to describe a particular presentation of numbers since long before SQL existed (several centuries, in fact).
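To illustrate the asymmetry with a quick test: a literal with a nonzero digit before the decimal point gets an extra digit of inferred precision, so the same ROUND call succeeds.
SELECT ROUND(1.7, 0);   -- fine: 1.7 is inferred as decimal(2,1), so the result 2.0 fits
SELECT ROUND(0.7, 0);   -- fails: 0.7 is inferred as decimal(1,1), so 1.0 does not fit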
Tom
September 9, 2016 at 1:04 pm
The title and much of this commentary are very misleading, in that this has nothing to do with small numbers and is not limited to .5 <= x < 1.
Both of the following will also cause errors, for the same root reason: the rounded number requires a larger data type than the input.
SELECT ROUND(9999.9, 0)   -- 9999.9 is inferred as decimal(5,1); the result 10000.0 needs decimal(6,1)
SELECT ROUND(-9999.9, 0)  -- same overflow for the negative literal
As for whether or not this is a serious bug, Microsoft probably isn't as out-of-line as you might think. The "problem" has 2 underlying premises contributing to it:
1 - the ROUND function should return the same data type it is passed.
2 - the compiler should type a decimal constant using the smallest data type possible.
In order to avoid this problem, at least one of these two premises has to change, but I'm not convinced that either premise is inherently wrong. After all, a function changing data types can have all sorts of implications, and if a constant, by definition, never changes, why would you waste space by allocating additional precision for it?
Keep in mind, in typical usage, you should not be calling ROUND(0.5, 0). Instead, you'd be calling ROUND([@]x, 0). The variable or column x should be typed large enough that it can hold any valid value... if you define it as a decimal(1,1) then I would say a function returning a value bigger than that SHOULD return an error. If your @x can be bigger and is defined as a decimal(5,1) for example, you have no issue (unless you're rounding up to 10,000 anyway, in which case I'd argue the variable is incorrectly defined.)
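A small sketch of that point (the table variable and column name are just for illustration): with a sensibly sized column, the rounding never hits the overflow.
DECLARE @t TABLE (x decimal(5,1));
INSERT INTO @t (x) VALUES (0.7), (123.4);
SELECT x, ROUND(x, 0) AS rounded FROM @t;   -- 1.0 and 123.0; decimal(5,1) has plenty of headroom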
This particular "bug" is largely only an issue if you're rounding constants, and the only reason I can think of to be rounding constants is to demonstrate this, or perhaps if you're using non-parameterized dynamic SQL, in which case it's yet another reason to use proper parameters instead.
As always, I'm sure there are a few rare, constrained scenarios in which the application has to call ROUND() on constants, but by and large, I think this is more of an oddity than any serious concern.
September 12, 2016 at 3:21 am
Whilst I think Tom has a point, I don't see it as a serious concern. I can't see why you would use a constant without making it a variable of an appropriate type, and if you really did want to, you can cast it to a suitable type before rounding it.
September 12, 2016 at 6:51 am
Hmmm, very interesting.
September 12, 2016 at 10:22 am
sknox (9/9/2016)
Good question, but the explanation is incomplete. Precision is only half of the story, the other half being scale. 1.0 can fit in a number with a precision of 1, if the scale is 0.
But 0.7 has precision of 1 and scale of 1. That means that it can only have one significant digit (precision = 1), and that 1 significant digit must lie to the right of the decimal point (scale = 1).
This leaves space for (precision - scale) = 0 digits to the left of the decimal point.
So the correct explanation is that the combination of precision and scale causes the 1.0 to overflow the numeric data type.
Edit: Fixed Formatting
Why did I know this QotD was a gotcha, and which bookmarked reply did I pull up to remind myself (yet again, yet again) why this "works" as it does?
http://www.sqlservercentral.com/questions/T-SQL/131218/
and
http://www.sqlservercentral.com/Forums/FindPost1719196.aspx
Thanks again, sknox, for the explanation as to what's happening here.
So September must mean it's time for frustrating questions about Round(), I see. Good to know; I'll put a reminder in Outlook for 2017 about this one and feel prepared!
Rich
September 12, 2016 at 9:15 pm
So to round a number less than 1 it needs to be cast to precision 2, scale 1 (or scale 0)... I don't even...
select round(cast(0.7 as numeric(2,1)),0)
-- 1.0
September 13, 2016 at 1:27 am
Nice question, thanks.
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP