December 10, 2003 at 10:23 pm
hi everyone,
I am new to database programming with SQL Server 2000, and I am confused about how to handle its data types wisely. For instance:
decimal: Fixed precision and scale numeric data from -10^38 +1 through 10^38 -1
numeric: Fixed precision and scale numeric data from -10^38 +1 through 10^38 -1
float: Floating precision number data from -1.79E + 308 through 1.79E + 308
real: Floating precision number data from -3.40E + 38 through 3.40E + 38
If decimal and numeric have identical precision and range, why does the same data type exist under two different names? And is there a similar story for float and real, or is there some reason behind it? Can you help me with that?
..Better Than Before...
December 11, 2003 at 7:42 am
I will give this a shot; hopefully it will shed some kind of light....
I guess for most of us it will not make a hell of a difference, but if an advanced math program works with very large numbers, an error can easily be raised by declaring a column real rather than float, since real's much smaller limit is exceeded sooner.
There are four classes of numeric data types:
- float and real
- numeric and decimal
- money and smallmoney
- int, smallint, and tinyint
money represents monetary values. I guess the limits on money will have to grow when Bill Gates' tenth-generation descendant makes software that can move you from the USA to the United Kingdom on your couch in seconds.
And the integer types represent the whole numbers that first-grade kids work with.
22/7 falls under float. Why? Because it is approximate (not exact): there are lots of values within the ranges you gave that cannot be represented exactly in binary floating point.
Numeric and decimal, to the contrary, are fixed-point values whose precision and scale are for the most part defined by the user.
The larger the numbers, the more demanding the calculations on them, so if you want to calculate with numbers that could grow beyond real's limit of 3.40E+38, you are better off using float, which goes up to 1.79E+308.
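The exact-versus-approximate distinction above can be sketched in Python (my own illustration, not SQL Server code -- Python's `decimal` module behaves analogously to the DECIMAL/NUMERIC types, and Python's built-in `float` behaves like SQL Server's float):

```python
from decimal import Decimal

# float behavior: binary floating point stores many values approximately.
# 22/7, and even a simple sum like 0.1 + 0.2, picks up rounding error.
print(22 / 7)                              # an approximation of pi-ish 22/7
print(0.1 + 0.2)                           # not exactly 0.3

# DECIMAL/NUMERIC behavior: fixed precision and scale, exact within them.
print(Decimal("0.1") + Decimal("0.2"))     # exactly 0.3
```

The point is not that float is "broken" but that it trades exactness for range, which is why money-style values belong in a fixed-point type.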
Phew
Mike
December 11, 2003 at 9:41 am
The difference between decimal and numeric comes from the ANSI SQL spec and doesn't really apply to SQL Server. The spec basically states that the precision of a decimal value is allowed to be greater than the precision specified when the column was created, whereas numeric values must be returned only to the precision specified. Again, in SQL Server, these are identical.
Real and float differ in that real is always a four-byte floating point representation, whereas float can use either four-byte or eight-byte precision. These equate to single precision and double precision in other languages, although float can be either, due to the way it's defined by ANSI SQL.
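The single-versus-double distinction can be demonstrated in Python with the `struct` module (my own sketch, not SQL Server code -- `'f'` packs an IEEE single like real, `'d'` packs an IEEE double like eight-byte float):

```python
import struct

value = 0.1

# real analogue: round-trip through a 4-byte single-precision representation
single = struct.unpack('f', struct.pack('f', value))[0]
# 8-byte float analogue: round-trip through a double-precision representation
double = struct.unpack('d', struct.pack('d', value))[0]

print(len(struct.pack('f', value)))   # 4 bytes
print(len(struct.pack('d', value)))   # 8 bytes
print(single == value)                # False: single precision loses digits
print(double == value)                # True: double round-trips intact
```

So a value stored as real can come back subtly different from what a double-precision float would hold, which matters once calculations accumulate.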
--Jonathan