June 30, 2010 at 9:36 am
I ran into a situation yesterday that cost me a lot of time. When I update a Decimal(18,6) field by dividing two int fields, SQL does not do an implicit conversion.
Here is some code to create a table, insert one row and then do an Update.
I was astounded to find that the integer 2 divided by the integer 3 yielded the decimal 0. I was under the impression that SQL would do an implicit conversion and populate the decimal with .666667. Not so. Of course, CASTing the integers to Decimal(18,0) before division does yield .666667.
My question is: Am I doing something wrong that could prevent the implicit conversion? Is there a setting somewhere that would allow/disallow this implicit conversion?
My solution is to make all int and bigint fields Decimal(XX,0) rather than CASTing in all my queries. Can you offer a better solution?
Thanks,
Pat
Here is my sample code
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [dbo].[Test1](
[UnitsInt] [int] NULL,
[PoundsInt] [int] NULL,
[UnitsDecimal] [decimal](18, 0) NULL,
[PoundsDecimal] [decimal](18, 0) NULL,
[ResultsFromInt] [decimal](18, 6) NULL,
[ResultsFromDecimal] [decimal](18, 6) NULL
) ON [PRIMARY]
INSERT INTO Test1 (UnitsInt, PoundsInt, UnitsDecimal, PoundsDecimal) VALUES (3, 2, 3, 2)
UPDATE Test1 SET ResultsFromInt = PoundsInt/UnitsInt, ResultsFromDecimal = PoundsDecimal/UnitsDecimal
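A sketch of the workaround mentioned above: CASTing just one operand before the division makes the whole expression decimal (column names are from the CREATE TABLE above).

```sql
-- Sketch: casting one operand to decimal before dividing avoids the
-- integer-division surprise described above
UPDATE Test1
SET ResultsFromInt = CAST(PoundsInt AS decimal(18,6)) / UnitsInt;
-- ResultsFromInt now holds 0.666667 rather than 0.000000
```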
June 30, 2010 at 9:53 am
mpdillon (6/30/2010)
...When I update a Decimal(18,6) field type by dividing two Int field types, SQL does not do an implicit conversion.
...
I was astounded to find the the integer 2 divided by the integer 3 yielded the Decimal 0. I was under the impression that SQL would do an implicit conversion and populate the Decimal with .666667.
...
My question is: Am I doing something wrong that could prevent the implicit conversion? Is there a setting somewhere what would allow/disallow this implicit conversion.
...
My solution is to make all int and BigInt field types as Decimal(XX,0) rather than CASTing all my queries. Can you offer a better solution?
Your interpretation of the results is a bit incorrect.
1. SQL Server does implicit conversion from int to decimal.
When you divide the integer 2 by the integer 3, you get the integer 0, which is implicitly converted into decimal(18,6), becoming 0.000000 😀
2. You don't do anything wrong that could prevent implicit conversion. SQL Server just cannot see that it should do it at all! For example, if you divide 1/2, you will get 0, as SQL will not see that you want a decimal. However, if you divide 1.0/2 or 1/2.0, you will get 0.5; here SQL Server can see what you want (it will implicitly convert the integer value into the default decimal(18,6) and the result will be decimal as well).
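To illustrate point 2, a quick sketch you can run in any query window:

```sql
SELECT 1/2;     -- both operands are int: integer division returns 0
SELECT 1.0/2;   -- one decimal operand makes the whole expression decimal: 0.5
SELECT 1/2.0;   -- same here: 0.5 (with trailing zeros per the type rules)
```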
3. You should not make your columns DECIMAL if you are going to store only integer values in them (you can find the value ranges for the numeric datatypes in BoL).
June 30, 2010 at 10:30 am
Eugene,
Thanks for clearing things up with points 1 and 2.
If you don't mind I would like explore question 3 a bit.
I am a VB programmer. Historically, I have done my calculations in VB Code and I have handled the conversions there. Recently, I have been trying to pass the calculations back onto SQL Server. I had also been using BoL to select the smallest data type for the object I needed to store. But that caused me trouble yesterday.
I see two solutions to the problem.
First, I could change my Data type to Decimal(XX,0). This will use more disk space and make the calculations slower. Are there other problems I should know about?
I know that you do not like this idea. Initially, I didn't either. I had been trying to use the "best fit" data type for the data to be stored. But this experience has caused me to question that idea.
OR
Second, I could CAST the integers every time I use them in a query. This does save disk space and does not use as many resources as if I had used Decimal. But it does take some additional SQL Server resources to do all the CASTs. For example, in yesterday's SQL Select statement there would have been 12 CASTs.
Do you think the integer SELECT with 12 CASTs would use fewer resources than the same SELECT statement using decimal data types without any CASTs?
From my perspective as a developer working on small SQL databases, I can accept the increased disk space requirements of using decimal. While writing the several hundred SQL statements in a typical project, it would be much simpler not to have to remember when to use CASTs. Not using CASTs would make the SQL statements easier to read and debug.
But I do not have any way to judge the performance hit of decimal vs integer. Am I correct in assuming (this always gets me in trouble) that if I ignore the disk space penalty, the only other problem with using Decimal(XX,0) to store an integer is execution performance?
Thanks,
pat
July 1, 2010 at 2:09 pm
Pat,
if you do decide to change the datatype in the tables and you are using SQL Server 2005 or higher, you could try the new vardecimal storage format. It works similarly to varchar. You just have to enable it on the database.
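A sketch of how that is enabled (the database name here is just a placeholder; vardecimal requires SQL Server 2005 SP2 or later):

```sql
-- Hypothetical names; vardecimal is switched on per database, then per table
EXEC sp_db_vardecimal_storage_format 'MyDatabase', 'ON';
EXEC sp_tableoption 'dbo.Test1', 'vardecimal storage format', 1;
```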
For better, quicker answers, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
July 2, 2010 at 1:56 am
In your case I would not try to choose the datatype based on the future operations which are going to be performed (it is usually a bad practice). For example:
You wouldn't want to store NumericAmount or DateOfBirth as varchar even if the only operation you ever do with them is displaying them in a GUI, for which you need to format them into a nice string.
I would recommend choosing the datatype based on the nature of your data: non-fractional numbers as bigint, int, smallint, etc.;
money amounts as money or smallmoney; numeric data which requires higher precision as decimal/numeric(p,s). There are plenty of specialised datatypes to choose from. 😀
You would need to operate on a huge volume of data to see any significant impact from casting integers to decimal in a calculation (I believe this is not applicable in your case, as you mentioned that you are working on a small database).
And remember: to get a decimal(18,6) result from the division of two integers, you don't need to explicitly cast both integers to decimal. It is quite a common practice not to explicitly convert either of them but, to make the code clear and easier to read :hehe:, to do something like:
SELECT 1.0 * IntColumn1/IntColumn2
You can see that the first multiplication by the decimal constant 1.0 (by default SQL takes it as decimal(18,6)) tells SQL Server to implicitly convert both columns used in the calculation into decimal.
July 2, 2010 at 2:17 am
In general, you should choose a data type based on the values to be stored.
If you need to perform a calculation on those quantities (usually a division) that requires a decimal type, you have a couple of options:
1. Store the result (or just the definition of the result) in a computed column
2. Use an explicit CAST or CONVERT in the stored procedure or function that returns the result
The method to choose, and the precise implementation, depends on your specific circumstances.
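As a sketch of option 1, a computed column with an explicit type could look like this (reusing the table from the original post; the column name Ratio is just an example):

```sql
-- Option 1 sketched: the division is defined once, with an explicit type,
-- so callers never have to remember to CAST
ALTER TABLE dbo.Test1
ADD Ratio AS CAST(CAST(PoundsInt AS decimal(18,6)) / UnitsInt AS decimal(18,6));
```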
In any case, you should probably familiarise yourself with the rules for precision, scale, and length:
http://msdn.microsoft.com/en-us/library/ms190476.aspx
and the rules for data type precedence:
http://msdn.microsoft.com/en-us/library/ms190309.aspx
You probably won't want to use the vardecimal storage format - it was replaced by native data compression in SQL Server 2008, and isn't a good fit for your problem anyway.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 2, 2010 at 2:23 am
Eugene, I see we posted similar advice at the same time. Just one thing:
Eugene Elutin (7/2/2010)
You can see that first multiplication with decimal constant 1.0 (by default SQL takes it as decimal (18,6), tells SQLServer to implicitly convert both columns used in calculation into decimal.
That's not strictly true. SQL Server applies data type precedence and the precision/scale/length rules I referenced in my post in all situations where different data types are present in an expression.
A constant (like 1.0) will normally be typed as the smallest decimal that can contain its value (in this case, decimal(2,1)) - though there are some subtleties around auto-parameterization, which I'll skip.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 2, 2010 at 3:31 am
Paul White NZ (7/2/2010)
...A constant (like 1.0) will normally be typed as the smallest decimal that can contain its value (in this case, decimal(2,1)) - though there are some subtleties around auto-parameterization, which I'll skip.
...
My bad :-D!
I guess that as soon as an expression contains division, SQL "types" 1.0 into a decimal with higher precision/scale (18,6?): select 1.0 / 1
will return 1.000000.
Interesting to see how SQL smartly chooses the scale for multiplication: select 1.0 * 1.000
It returns 1.0000. This is done to accommodate all possible results of multiplication between two decimals with 1 and 3 decimal places:
select 1.7 * 2.356
result: 4.0052
select 1.0 * 1.00 * 1.000 * 1.0000
result: 1.0000000000
😀
It doesn't work this way for divisions:
select 1.5/1.67
still returns a decimal (18,6 or 7,6?): 0.898203
P.S. Can someone tell me why, when a post is saved the first time, too many empty lines are added between the entered text and my signature?
I have checked the signature, it doesn't have empty lines.
July 2, 2010 at 5:22 am
Eugene and Paul,
Thanks for continuing the discussion. I can go back to my previous practice of choosing the smallest data type. I can use 1.0 * in all my divisions without too much trouble.
Have you noticed how SQL rounds quotients differently?
Select 1.0 * 2/3 From Table : .666666
Select Cast(2 as decimal(18,6))/3 from Table : .666666
but
the ResultsFromDecimal field after the UPDATE statement in my original post yields .666667
The UPDATE statement divides two Decimal types and stores the result in a Decimal type Field
Again, thanks for your help.
Mike01,
Thanks for the two links. I had read both of those but had lost the links. I thought about the etiquette article as I was preparing the CREATE, INSERT and UPDATE statements. What I entered was as close to the article you referred to as my bad memory would allow.
I have saved the links again. Hopefully, I won't misplace them this time.
pat
July 2, 2010 at 5:39 am
Just to be complete in my post I should have shown:
Select Cast(2 as decimal(18,6))/cast(3 as Decimal(18,6)) from Table : .666666666666666666
pat
July 2, 2010 at 5:47 am
...
Select Cast(2 as decimal(18,6))/3 from Table : .666666
but
the ResultsFromDecimal field after the UPDATE statement in my orginal post yields .666667
...
If you look closer, you will see that the result of
Select Cast(2 as decimal(18,6))/3
is not: 0.666666
but: 0.66666666
When you update your table, I believe that your destination column is defined as decimal(18,6), therefore 0.66666666 is rounded to 6 decimal places: 0.666667
Paul may be willing (:-D) to explain why SQL doesn't keep the result of SELECT Cast(2 as decimal(18,6))/3 as decimal(18,6) but adds two more digits after the decimal point...
July 2, 2010 at 6:20 am
Eugene Elutin (7/2/2010)
I guess that as soon as an expression contains division, SQL "types" 1.0 into a decimal with higher precision/scale (18,6?): select 1.0 / 1 will return 1.000000. Interesting to see how SQL smartly chooses the scale for multiplication: select 1.0 * 1.000
It returns 1.0000. This is done to accommodate all possible results of multiplication between two decimals with 1 and 3 decimal places...
It doesn't work this way for divisions:
select 1.5/1.67
still returns as decimal (18,6 or 7,6?): 0.898203
It does work the same way, but there's a subtle twist.
If you look at the link I posted giving the rules for precision, scale, and length, you'll see that the formulas for division (e1/e2) give an output precision of {p1 - s1 + s2 + max(6, s1 + p2 + 1)} and an output scale of {max(6, s1 + p2 + 1)}.
Let's take the case of 1.0 / 1 - why does it return 1.000000?
Well, the constant 1.0 is typed as decimal (2,1) and 1 as decimal (1,0) - using the smallest precision and scale necessary, as I mentioned previously. The numbers to plug in to the formulas are:
p1 = 2 (precision of the first expression)
s1 = 1 (scale of the first expression)
p2 = 1 (precision of the second expression)
s2 = 0 (scale of the second expression)
Output precision is p1 - s1 + s2 + max(6, s1 + p2 + 1) = 2 - 1 + 0 + max(6, 1 + 1 + 1) = 1 + max(6, 3) = 7
Output scale is max(6, s1 + p2 + 1) = max(6, 1 + 1 + 1) = max(6, 3) = 6
So the output has a type of decimal (7,6). You can verify that result by running SELECT 1.0 / 1 AS a INTO A and then looking at the definition of column a in table A (using your favourite method).
Dec(2,1) / Dec(1,0) = Dec(7,6).
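One way to do that verification, sketched: sys.columns is one of several catalog views that expose precision and scale.

```sql
SELECT 1.0 / 1 AS a INTO A;

-- Expect precision 7, scale 6, per the formulas above
SELECT c.[precision], c.[scale]
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.A') AND c.name = 'a';

DROP TABLE A;
```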
The key to success there is understanding the max function - it returns the higher of its two inputs. The magic number 6 has no special significance, except for backward compatibility. The (unsatisfactory) answer to the question "but why six?" is the inevitable "because it is" 🙂
There are a couple of other tricks to cope with what happens if the formulas produce numbers outside of the allowed range...here, the magic numbers are 38, and you guessed it, 6.
Paul
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 2, 2010 at 6:23 am
Eugene Elutin (7/2/2010)
Paul may have willing to explain why SQL doesn't keep result of SELECT Cast(2 as decimal(18,6))/3 as decimal(18,6) but adds two more digits after decimal points...
The rules are the same; I'll leave determining the inferred types and running the numbers through the formulas as an exercise for the interested reader (yes, I am too lazy).
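For readers who do want to run the numbers, a sketch, assuming the literal 3 is typed as decimal(1,0) in the same way as the constants in the earlier example:
p1 = 18, s1 = 6 (the CAST expression); p2 = 1, s2 = 0 (the constant 3)
Output precision = p1 - s1 + s2 + max(6, s1 + p2 + 1) = 18 - 6 + 0 + max(6, 8) = 12 + 8 = 20
Output scale = max(6, s1 + p2 + 1) = max(6, 8) = 8
So the result would be decimal(20,8): eight decimal places, which is where the two extra digits come from.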
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 2, 2010 at 6:38 am
mpdillon (7/2/2010)
I can go back to my previous practice of choosing the smallest data type. I can use 1.0 * in all my divisions without too much trouble.
Many people do resort to tricks like that in production code - but you should be fully aware of what is going on under the hood before sprinkling magic "1.0"s everywhere (so you can at least debug any problems). Hopefully some of the information presented here will help you with that.
My preference is to use explicitly typed expressions (computed columns, function return values...whatever) as far as possible. It is just so easy to fall foul of the typing rules. Doing what you can to keep things as explicit as possible can only help.
An explicit conversion like SELECT CONVERT(DEC(9,2), a) definitely returns a DECIMAL with precision 9 and scale 2. Relying on implicit conversions to do the job for you (say SELECT 1.0 * a) is no faster (and usually slower if an extra math operation is introduced).
Let me be absolutely clear on that: SQL Server always keeps track of precision and scale at every step of the calculation and will introduce implicit conversions wherever necessary. Check your query plans: you will sometimes see a CONVERT_IMPLICIT(numeric(p,s)...) where that occurs.
It is a false economy to try to avoid explicit CASTs or CONVERTs - the work has to be done, and an implicit conversion runs the same code as an explicit conversion. To my mind, it's worse, because it's hidden in the query plan. Better to have it in plain view in the query text.
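For example (columns from the original post; both forms produce a decimal result, but the first states the intended type right in the query text):

```sql
SELECT CONVERT(decimal(9,2), PoundsInt) / UnitsInt FROM dbo.Test1;  -- explicit
SELECT 1.0 * PoundsInt / UnitsInt FROM dbo.Test1;                   -- implicit
```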
That's not to say that 1.0 * doesn't have its place: I'm not suggesting you try to explicitly type every column at every stage - following those formulas would quickly drive you mad. Some common sense is required.
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
July 2, 2010 at 6:57 am
Thanks Paul, I hadn't read the article you posted about the precision, scale and length rules. It makes sense now.
Usually, I explicitly convert the first required value(s) in the calculation formula and let the end result be converted implicitly per the destination, or formatted as required if it is for output (a report or otherwise).