January 2, 2012 at 8:04 pm
Comments posted to this topic are about the item Exact and Approximate
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
January 2, 2012 at 8:05 pm
Surprisingly enough, I got this one right...
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
January 2, 2012 at 8:37 pm
Incorrect answers: 40% (2)
Total attempts: 5
Nice, even if a rather simple question (but it does take one back to the basics and because of that a GOOD QOD) ... and yes I selected the correct answer.
January 2, 2012 at 8:44 pm
Nice question. Thanks Paul.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
January 2, 2012 at 10:53 pm
This was a good one! Thank you, Paul. Great start to the year.
Have a Happy & Prosperous New Year 2012, everyone!
Thanks & Regards,
Nakul Vachhrajani.
http://nakulvachhrajani.com
Follow me on Twitter: @sqltwins
January 2, 2012 at 10:54 pm
Nice Question
Correct answers: 63% (15)
Incorrect answers: 38% (9)
Total attempts: 24
-----------------
Gobikannan
January 3, 2012 at 3:11 am
Happy new year everyone! 😉
It doesn't happen very often, but today I find myself in disagreement with Paul. The correct answer is (or should be) DECIMAL.
The "precision" mentioned on the BOL page about floating point data is not the number of decimal positions represented internally, but the number of decimal positions that can be trusted to be correct. The internal representation is different (base-2 to be exact). As mentioned on the top of the BOL page: "not all values in the data type range can be represented exactly". The value 0.1234567 is one of the many values that can not be represented exactly. When trying the code snippet in SSMS, this is carefully hidden - because SSMS knows about the number of "trustworthy" positions for the REAL data type and shows only that number of positions. Other front-end tools might not be so smart. And front-end or back-end applications that do further calculations on the number do get the internal representation, which in the case of the REAL data is not exactly correct.
Here is a modification of Paul's code that tricks SSMS into revealing a more accurate representation of the values internally stored; it shows that the DECIMAL value is indeed exact, but the REAL value is not.
SELECT
[decimal] = CAST(@n1 * @n2 AS decimal(38,36)),
[real] = CAST(@n3 * @n4 AS decimal(38,36))
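(For anyone wanting to run this standalone: the @n1 to @n4 declarations from the original question are not quoted in this thread, so the values below are stand-ins, chosen only so that the exact product is 0.1234567; the original question may have used different numbers.)
DECLARE
    @n1 decimal(38,20) = 1234.567,
    @n2 decimal(38,20) = 0.0001,
    @n3 real = 1234.567,
    @n4 real = 0.0001;
SELECT
    -- decimal(38,20) * decimal(38,20) is typed as decimal(38,6), so the exact 0.1234567 is rounded to 0.123457
    [decimal] = CAST(@n1 * @n2 AS decimal(38,36)),
    -- the REAL product carries base-2 representation noise; the cast shows digits beyond what SSMS displays for REAL
    [real] = CAST(@n3 * @n4 AS decimal(38,36));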
January 3, 2012 at 3:37 am
Hugo Kornelis (1/3/2012)
It doesn't happen very often, but today I find myself in disagreement with Paul. The correct answer is (or should be) DECIMAL...
Some good points, and it is true (for example) that Query Analyzer against a SQL Server 2000 instance shows the REAL result as 0.12345671. However, no matter how you CAST the result, the DECIMAL calculation gives 0.123457 (i.e. rounded to six decimal places), so the REAL result is, in the wording of the question ("In the example below, which data type will give the most accurate answer?"), more accurate. I don't see how the answer could possibly be DECIMAL: 0.123457 (decimal) is much further from the correct value 0.1234567 than 0.12345670908689499 (real, unadjusted for precision) is.
Using REAL in this case does give exactly the right displayed answer in SSMS (though not in other tools, as you say), and when working with approximate data types we do need to be careful about the significant digits (as SSMS demonstrates). Anyway, the main point of the question is for people to be aware of the limitations of the fixed-precision types, and in particular to avoid using excessive scale.
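To see the mechanics of that rounding: multiplying two decimal(38,20) values would need precision 38 + 38 + 1 = 77 and scale 20 + 20 = 40, so SQL Server caps the precision at 38 and cuts the scale back (though not below 6), leaving a decimal(38,6) result. A quick check, using stand-in values (not necessarily the ones from the question) chosen only so that the exact product is 0.1234567:
DECLARE
    @n1 decimal(38,20) = 1234.567,
    @n2 decimal(38,20) = 0.0001;
SELECT
    product = @n1 * @n2,                                         -- 0.123457
    [precision] = SQL_VARIANT_PROPERTY(@n1 * @n2, 'Precision'),  -- 38
    [scale] = SQL_VARIANT_PROPERTY(@n1 * @n2, 'Scale');          -- 6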
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
January 3, 2012 at 3:43 am
OOPS!! Sorry, Paul - you are completely right.
I had missed the rounding of the decimal result; I thought that the decimal result was the exact and correct result. Even when running the code I posted above (that shows the extra decimals), I simply overlooked that the DECIMAL result was 0.123457 instead of 0.1234567. My bad; my apologies.
January 3, 2012 at 4:01 am
No worries Hugo, though I am a little depressed that the values I chose don't make the difference easier to spot - it really wasn't my intention for this question to be a test of reading skills 🙁
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
January 3, 2012 at 4:07 am
Good question!
The discussion that follows makes it even better. We should be careful when using exact numeric data types, both for storage and for calculations.
Thanks
January 3, 2012 at 5:34 am
Good question Paul, thanks for submitting.
http://brittcluff.blogspot.com/
January 3, 2012 at 6:49 am
SQL Kiwi (1/3/2012)
No worries Hugo, though I am a little depressed that the values I chose don't make the difference easier to spot - it really wasn't my intention for this question to be a test of reading skills 🙁
It is a good question. It nicely shoots down the myth that these fixed-point "exact" numerics have fewer (or less serious) rounding issues than floating-point approximate numerics.
I would have thought that if anyone ran it they would notice the difference between 570 and 567 as the last three digits of the result, so it hardly seems a test of reading skills.
But why run it, when it's clear that using decimal(38,20) to multiply these two numbers will run you head-on into the draconian rounding used for "exact" numerics?
Tom
January 3, 2012 at 7:04 am
Ohh come on!! I'm wrong again. I'm starting to think this is not meant for common people. LOL! But I'm learning a lot, so thanks. 😉