December 28, 2009 at 5:16 pm
Hi,
I keep getting an arithmetic overflow error when I'm trying to update a field's value by multiplying that field by another field. The field that I'm trying to update is decimal(9,4) and the field that I'm multiplying by is tinyint. Using the decimal(9,4) datatype, I should have room for 5 digits before the decimal and 4 digits after.
Naturally I assumed that one or more records exceeds the capacity of the decimal(9,4) datatype. So I tried doing the multiplication in a select query so I could find the problem record(s). The values I got ranged from 19.7522 to 7409.6850, which should all fit in the decimal(9,4) datatype.
I tried increasing the precision incrementally and found that it works if I cast to decimal(11,4). This doesn't make sense to me since the largest value calculated has only 4 digits before the decimal place, so why do I need room for 7 digits??
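In case the intermediate type matters, here's my understanding of the arithmetic (my reading of the precision rules, so please correct me if I have this wrong): multiplying decimal(p1,s1) by decimal(p2,s2) gives decimal(p1+p2+1, s1+s2), and tinyint counts as decimal(3,0), so decimal(9,4) * tinyint should come out as decimal(13,4). A quick check with literals seems to confirm it:

DECLARE @d DECIMAL(9,4)
DECLARE @t TINYINT
SET @d = 1.0
SET @t = 10

SELECT SQL_VARIANT_PROPERTY(@d * @t, 'Precision') AS result_precision, -- 13
       SQL_VARIANT_PROPERTY(@d * @t, 'Scale')     AS result_scale      -- 4

So the product itself has room to spare; it's only the cast back down to decimal(9,4) that can overflow.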
Thanks, Heather
December 28, 2009 at 6:26 pm
Do you have an example of two numbers that you know caused the overflow error?
--Jeff Moden
Change is inevitable... Change for the better is not.
December 28, 2009 at 6:32 pm
It's definitely a data issue. As much as Jeff might not want to hear it, I think the quickest way to track down the data is to run a cursor, selecting each row and doing the multiplication as a (9,4) until you find the one that explodes.
December 28, 2009 at 7:17 pm
Thanks for your replies, Jeff and Steve.
I did start trying to isolate the problem records, and so far I've figured out that more than one record is causing the problem. There is a field indicating the state (as in United States) the data was collected from, and it can have 3 different values. I tried running the select query for each state separately (casting to decimal(9,4)) and got the overflow error all 3 times.
Since I figured out that I only needed to increase the overall precision and not the scale, I should be able to find the offending records just by running the query (without trying to cast the results) and sorting the records. That's the part that confuses me...when I sort the results the maximum value is 7409.6850, which should fit!
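Concretely, the check looks like this (same joins and filter as the real query, using MAX instead of sorting):

SELECT MAX(m.TPH_PNT * m.TREE_COUNT) AS biggest_product -- uncast, so it stays decimal(13,4) and can't overflow
FROM dbo.LIVE_MASTER m
JOIN dbo.UDB_PNT p ON m.PNTID = p.PNTID
JOIN dbo.UDB_CC c ON p.CCID = c.CCID
JOIN dbo.UDB_FC f ON c.FCID = f.FCID
JOIN dbo.UDB_PLT pl ON f.PLTID = pl.PLTID
JOIN test_lemma.src.NIMS_v3_TREE t ON t.TREE_SOURCE_ID = m.SOURCE_ID
WHERE pl.SOURCE_DB = 'nims_v3'
AND m.TREE_COUNT > 1

Anything over 99999.9999 would be too big for decimal(9,4), but the biggest value this returns is 7409.6850.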
I know this should be an easy problem to solve, but since it's not making sense, I started thinking maybe I have a data corruption issue (or something like that). I'll try looping through each record with a cursor tomorrow and will report back what I find.
Heather
December 28, 2009 at 11:32 pm
Steve Jones - Editor (12/28/2009)
It's definitely a data issue. As much as Jeff might not want to hear it, I think the quickest way to track down the data is to run a cursor, selecting each row and doing the multiplication as a (9,4) until you find the one that explodes.
Heh... actually, cursors are good for that kind of stuff.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 28, 2009 at 11:33 pm
Heather May (12/28/2009)
Thanks for your replies, Jeff and Steve. I did start trying to isolate the problem records, and so far I've figured out that more than one record is causing the problem. There is a field indicating the state (as in United States) the data was collected from, and it can have 3 different values. I tried running the select query for each state separately (casting to decimal(9,4)) and got the overflow error all 3 times.
Since I figured out that I only needed to increase the overall precision and not the scale, I should be able to find the offending records just by running the query (without trying to cast the results) and sorting the records. That's the part that confuses me...when I sort the results the maximum value is 7409.6850, which should fit!
I know this should be an easy problem to solve, but since it's not making sense, I started thinking maybe I have a data corruption issue (or something like that). I'll try looping through each record with a cursor tomorrow and will report back what I find.
Heather
Heh... can't help unless I get the data. Would you provide the two biggest numbers in each table, please?
--Jeff Moden
Change is inevitable... Change for the better is not.
December 29, 2009 at 9:53 am
The biggest number in the decimal field (the field I'm trying to update) is 740.9685, and the biggest number in the other (tinyint) field it gets multiplied by is 10. That's where I got 7409.6850 as the biggest result after multiplication.
Going the other way, the biggest number in the tinyint field is 20 and the biggest number it gets multiplied by is 59.4833 for a result of 1189.6660.
If I try the multiplication (and cast to decimal(9,4)) on just those 2 rows, I don't get the overflow error.
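And standalone, with just literals, both of those worst cases cast cleanly:

SELECT CAST(CAST(740.9685 AS DECIMAL(9,4)) * CAST(10 AS TINYINT) AS DECIMAL(9,4)) AS biggest_decimal_case, -- 7409.6850
       CAST(CAST(59.4833 AS DECIMAL(9,4)) * CAST(20 AS TINYINT) AS DECIMAL(9,4)) AS biggest_tinyint_case   -- 1189.6660

No error either way.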
December 29, 2009 at 10:56 am
Okay, I tried the cursor approach, and interestingly I don't get the overflow error when I use a cursor. I can't figure out why it works with the cursor but not without, so I'm posting the code here.
Code without cursor:
SELECT
CAST((m.TPH_PNT * m.TREE_COUNT) AS DECIMAL(9,4)) AS TPH_PNT_NEW
FROM dbo.LIVE_MASTER m
JOIN dbo.UDB_PNT p ON m.PNTID = p.PNTID
JOIN dbo.UDB_CC c ON p.CCID = c.CCID
JOIN dbo.UDB_FC f ON c.FCID = f.FCID
JOIN dbo.UDB_PLT pl ON f.PLTID = pl.PLTID
JOIN test_lemma.src.NIMS_v3_TREE t
ON t.TREE_SOURCE_ID = m.SOURCE_ID
WHERE pl.SOURCE_DB = 'nims_v3'
AND m.TREE_COUNT > 1
Result:
Msg 8115, Level 16, State 8, Line 1
Arithmetic overflow error converting numeric to data type numeric.
=========================================================
Code using cursor:
DECLARE @tph_pnt_new DECIMAL(11,4) -- wide enough that the fetch itself can't overflow

CREATE TABLE #results (tph_pnt_new DECIMAL(9,4))

DECLARE mycursor CURSOR FOR
SELECT m.TPH_PNT * m.TREE_COUNT AS TPH_PNT_NEW
FROM dbo.LIVE_MASTER m
JOIN dbo.UDB_PNT p ON m.PNTID = p.PNTID
JOIN dbo.UDB_CC c ON p.CCID = c.CCID
JOIN dbo.UDB_FC f ON c.FCID = f.FCID
JOIN dbo.UDB_PLT pl ON f.PLTID = pl.PLTID
JOIN test_lemma.src.NIMS_v3_TREE t
ON t.TREE_SOURCE_ID = m.SOURCE_ID
WHERE pl.SOURCE_DB = 'nims_v3'
AND m.TREE_COUNT > 1

OPEN mycursor
FETCH NEXT FROM mycursor INTO @tph_pnt_new

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Test the cast one row at a time; a failing row prints instead of aborting
        INSERT #results (tph_pnt_new)
        SELECT CAST(@tph_pnt_new AS DECIMAL(9,4))
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE()
    END CATCH

    -- Fetch outside the TRY so a failing row can't loop forever
    FETCH NEXT FROM mycursor INTO @tph_pnt_new
END

SELECT * FROM #results
ORDER BY tph_pnt_new DESC

CLOSE mycursor
DEALLOCATE mycursor
DROP TABLE #results
Result: 681 rows of data, which are all the rows I need to update
==========================================================
So I could try using the cursor to update each row individually, but I'd really like to understand why I'm getting different behavior with and without the cursor. Maybe someone can point out differences in the code that I'm missing?
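The only structural difference I can spot is where the CAST happens: the set-based version casts inside the same statement that is doing the joins and filtering, while the cursor casts one row at a time after the result set has already been produced. If the engine is allowed to evaluate the CAST before the joins and WHERE have excluded rows (just a guess on my part, I haven't confirmed it), then a row that never appears in the final results could still trip the overflow. A guarded version I could test, assuming that theory holds:

SELECT
    CASE WHEN m.TPH_PNT * m.TREE_COUNT <= 99999.9999
         THEN CAST(m.TPH_PNT * m.TREE_COUNT AS DECIMAL(9,4))
    END AS TPH_PNT_NEW
FROM dbo.LIVE_MASTER m
JOIN dbo.UDB_PNT p ON m.PNTID = p.PNTID
JOIN dbo.UDB_CC c ON p.CCID = c.CCID
JOIN dbo.UDB_FC f ON c.FCID = f.FCID
JOIN dbo.UDB_PLT pl ON f.PLTID = pl.PLTID
JOIN test_lemma.src.NIMS_v3_TREE t
ON t.TREE_SOURCE_ID = m.SOURCE_ID
WHERE pl.SOURCE_DB = 'nims_v3'
AND m.TREE_COUNT > 1

The CASE returns NULL for any out-of-range product instead of erroring, so if this runs clean while the unguarded version blows up, that would point at rows being evaluated before they're filtered out.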
December 29, 2009 at 11:28 am
That is strange, but I was thinking something simpler:

DECLARE @tph_pnt DECIMAL(9,4)
DECLARE @tree_count TINYINT

DECLARE mycursor CURSOR FOR
SELECT m.TPH_PNT,
       m.TREE_COUNT
FROM dbo.LIVE_MASTER m
JOIN dbo.UDB_PNT p ON m.PNTID = p.PNTID
JOIN dbo.UDB_CC c ON p.CCID = c.CCID
JOIN dbo.UDB_FC f ON c.FCID = f.FCID
JOIN dbo.UDB_PLT pl ON f.PLTID = pl.PLTID
JOIN test_lemma.src.NIMS_v3_TREE t
ON t.TREE_SOURCE_ID = m.SOURCE_ID
WHERE pl.SOURCE_DB = 'nims_v3'
AND m.TREE_COUNT > 1

OPEN mycursor
FETCH NEXT FROM mycursor INTO @tph_pnt, @tree_count

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Echo the inputs next to the cast; the row that throws the overflow is the culprit
    SELECT @tph_pnt,
           @tree_count,
           CAST(@tph_pnt * @tree_count AS DECIMAL(9,4)) AS TPH_PNT_NEW

    FETCH NEXT FROM mycursor INTO @tph_pnt, @tree_count
END

CLOSE mycursor
DEALLOCATE mycursor
December 29, 2009 at 11:54 am
Steve, I actually did try code like what you posted first, but it exceeded the maximum number of result sets that can be displayed (100 is the max, and my query returns more than 600 rows), so I just stored the results in a temp table instead.
It is reassuring to know someone else thinks this is strange behavior, even though that doesn't help solve the problem.
December 29, 2009 at 5:41 pm
Just wanted to wrap this thread up...
I couldn't figure out why the code using the cursor worked but the set-based code did not, so I updated each record individually using the cursor. Very strange.
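For anyone who finds this thread later, the per-row update was essentially the TRY/CATCH cursor above with the INSERT swapped for an UPDATE. A simplified sketch of the shape (assuming the filtering can be folded onto the base table; my real cursor kept the joins from the earlier posts):

DECLARE @tph DECIMAL(9,4)
DECLARE @cnt TINYINT

DECLARE upd CURSOR LOCAL FOR
    SELECT TPH_PNT, TREE_COUNT
    FROM dbo.LIVE_MASTER
    WHERE TREE_COUNT > 1
    FOR UPDATE OF TPH_PNT

OPEN upd
FETCH NEXT FROM upd INTO @tph, @cnt

WHILE @@FETCH_STATUS = 0
BEGIN
    -- One row at a time, just like the diagnostic cursor that worked
    UPDATE dbo.LIVE_MASTER
    SET TPH_PNT = CAST(@tph * @cnt AS DECIMAL(9,4))
    WHERE CURRENT OF upd

    FETCH NEXT FROM upd INTO @tph, @cnt
END

CLOSE upd
DEALLOCATE upd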
Thanks to those who tried to help!