June 28, 2018 at 10:04 am
For those having an issue with this error, an easy workaround is to change the INSERT INTO/SELECT statement into a SELECT... INTO statement. That way the new table will contain the actual size of every column, which can be compared against the target table's schema column sizes. This workaround is just a temporary way of diagnosing the error; after the culprits are found, the SELECT... INTO statement needs to be converted back into the INSERT INTO/SELECT statement. I hope it helps...
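As a rough sketch of that workaround (the table and column names here are hypothetical, not from the original post):

```sql
-- Step 1: let SELECT ... INTO size the columns from the actual data.
SELECT col1, col2
INTO dbo.TruncationScratch      -- hypothetical scratch table
FROM dbo.SourceTable;           -- hypothetical source

-- Step 2: compare actual maximum widths against the target's declared sizes.
SELECT MAX(DATALENGTH(col1)) AS max_col1_bytes,
       MAX(DATALENGTH(col2)) AS max_col2_bytes
FROM dbo.TruncationScratch;

-- Step 3: once the culprit columns are found, drop the scratch table
-- and switch back to the original INSERT INTO ... SELECT.
DROP TABLE dbo.TruncationScratch;
```

Any column whose maximum byte length exceeds the target table's declared size is a truncation candidate.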
June 28, 2018 at 10:44 am
orosado - Thursday, June 28, 2018 10:04 AM
For those having an issue with this error, an easy workaround is to change the INSERT INTO/SELECT statement into a SELECT... INTO statement. That way the new table will contain the actual size of every column which can be compared against the target table's schema column sizes. This workaround is just a temporary way of determining the error, after the culprits are found the SELECT... INTO statement needs to be converted back into the INSERT INTO/SELECT statement. I hope it helps...
That works if you do an insert/select. If you're loading from SSIS or BULK INSERT, this error is maddening. You often need to create a table with varchar(max) columns and then start searching.
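For the bulk-load case, the search over a wide staging table might look like this (staging table and column names are hypothetical, and the 50-character limit is just an assumed target size):

```sql
-- Hypothetical staging table with VARCHAR(MAX) columns, loaded via
-- BULK INSERT or SSIS so nothing is rejected on the way in.
-- Find the rows whose value would not fit the target's VARCHAR(50) column.
SELECT *
FROM dbo.Staging
WHERE DATALENGTH(CustomerName) > 50;
```

Only the rows returned here would cause the truncation error on the final insert, so they can be fixed or excluded before loading the real table.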
June 28, 2018 at 11:04 am
Steve Jones - SSC Editor - Thursday, June 28, 2018 9:50 AM
Matthew Joughin - Thursday, June 28, 2018 4:53 AM
I think a table or column option would be a great solution for this - then it's up to the developer to turn it on for the table or column if they want it to truncate varchars and nvarchars
It's not always a fundamental coding issue. This could be a data issue for a single data load.
I do get that - but that's exactly it. For mass data imports, which I used to do for every client that went live on my old company's system, there was often data from the previous system that didn't quite fit - and it didn't really matter.
Anyway - if Microsoft gave a better error message, as you're asking, it would go a long way toward solving the issue.
if you don't have the time to do it right, when will you have the time to do it over ?
June 28, 2018 at 11:14 am
For sure. This is a horrible error message.
June 28, 2018 at 1:08 pm
If I had ever written code like this, I would hope I would have been fired. But unfortunately, my employers pretty much tolerated this too. If the code is smart enough to tell that two fields are incompatible, then tell me which fields. That should be the easy part. I have fought this issue while learning to create packages for my personal use in retirement. I finally ended up manually writing code that added a single field at a time out of records with 20 or 30 fields. There is no excuse for this.
I fight the same lack of caring and excuse-making with another popular product I use, Quicken, which used to be owned by Intuit but is now owned by an accounting firm. They want you to use their software, which I have for over 28 years, to record your financial data. OK, so I have a 28-year history of investment transactions in their files. And yes, you can even EXPORT the investment accounts. And you can also IMPORT transactions from financial institutions. OH, BUT WAIT!
You can export your data to several different formats, but you can't IMPORT that same data back into their own software. Account types other than investments CAN be re-imported. The excuse is that investment accounts 'don't have a transaction register'. OK, so what is that thing you use for me to enter and display my data?
In this situation, I and many others will not be spending any more money on their software or services. So who wins?
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
June 28, 2018 at 1:16 pm
Steve Jones - SSC Editor - Thursday, June 28, 2018 9:50 AM
Matthew Joughin - Thursday, June 28, 2018 4:53 AM
I think a table or column option would be a great solution for this - then it's up to the developer to turn it on for the table or column if they want it to truncate varchars and nvarchars
It's not always a fundamental coding issue. This could be a data issue for a single data load.
OK, but even if it IS a data issue, the CODE should handle it gracefully. THEN you move on to the data problem.
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
June 28, 2018 at 3:31 pm
skeleton567 - Thursday, June 28, 2018 1:16 PM
OK, but even if it IS a data issue, the CODE should handle it gracefully. THEN you move on to the data problem.
That, I agree with.
June 29, 2018 at 4:38 am
Correction to the article - "*some* developers dislike relational schemas"
I would not want to use anything but a relational schema.
June 29, 2018 at 9:00 am
The Informix relational database engine has had this feature for years. During a bulk load, you can tell it to ignore the error or place the offending record into a separate file.
I am an "ex" Informix DBA.
June 29, 2018 at 12:51 pm
It would be very helpful if the data truncation error message were more like the duplicate key message, with the actual column name and offending value included.
If you anticipate occasional truncation errors on a specific table, perhaps a staging table for an ETL process, then one way to help mitigate this problem is to define the VARCHAR length greater than what is allowed, but then declare named check constraints to throw the exceptions.
For example:
CREATE TABLE MyTable
( col1 VARCHAR(30)
CONSTRAINT CC_MyTable_col1
CHECK (DATALENGTH(col1) <= 3) );
INSERT INTO MyTable ( col1 ) VALUES ('aaaa');
Msg 547, Level 16, State 0, Line 10
The INSERT statement conflicted with the CHECK constraint "CC_MyTable_col1".
The conflict occurred in database "master", table "dbo.MyTable", column 'col1'.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
June 29, 2018 at 1:24 pm
And with the row number, or some list of the offending data values.
July 2, 2018 at 8:39 am
I agree the error provides no useful information for correcting the problem and it should be something that Microsoft can correct relatively easily.
However, instead of letting the problem annoy us given that we know it exists, we should build our systems to allow for the problem. In an ETL process, have a data validation step that catches the problems before a load. In a transaction based application, validate the data before an insert is attempted. Yes, this is extra work for us (that one can argue shouldn't be necessary) but it is proactive and it eliminates frustration that we are well aware may occur.
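A validation step of the kind described here could be sketched as follows (the staging table, column names, and target sizes are all hypothetical):

```sql
-- Hypothetical pre-load validation in an ETL process: flag staging rows
-- that would truncate BEFORE attempting the INSERT into the real table.
SELECT StagingId, CustomerName, City
FROM dbo.Staging
WHERE DATALENGTH(CustomerName) > 50   -- target column is VARCHAR(50)
   OR DATALENGTH(City) > 30;          -- target column is VARCHAR(30)
```

If this query returns no rows, the load can proceed; otherwise the offending rows are identified up front, with none of the guesswork the truncation error forces on you.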
This isn't about letting Microsoft off the hook, just minimizing our own annoyance levels.
July 2, 2018 at 9:28 am
mercurej - Monday, July 2, 2018 8:39 AM
I agree the error provides no useful information for correcting the problem and it should be something that Microsoft can correct relatively easily.
However, instead of letting the problem annoy us given that we know it exists, we should build our systems to allow for the problem. In an ETL process, have a data validation step that catches the problems before a load. In a transaction based application, validate the data before an insert is attempted. Yes, this is extra work for us (that one can argue shouldn't be necessary) but it is proactive and it eliminates frustration that we are well aware may occur.
This isn't about letting Microsoft off the hook, just minimizing our own annoyance levels.
We can do these things, but we can't always catch where/when this error happens in a load. If we engineer things to be sure we always can, then we end up with a very complex, double-work effort. How can you know that a name field you've spec'd at 50 characters will hit one row that's 51 characters in a flat file of 1,000,000 rows? Unless you read and pre-process the file, or you load into a wide staging table, you can't.
This isn't a simple, easy-to-account-for issue. It's a PIA when you run into it, despite your best efforts.
July 2, 2018 at 12:56 pm
Steve Jones - SSC Editor - Thursday, June 28, 2018 9:45 AM
Japie Botma - Thursday, June 28, 2018 2:15 AM
I am sorry Steve, but truncating data without the user knowing is an error. It is frustrating that the source and target schema don't match, but it is still an error. Either the source data is wrong (swapped columns by mistake, etc) or the target schema is wrong. Either way: It should tell me that. A simple substring in the source query can fix it, if you are willing to lose data.
You've missed the entire point of the piece. This isn't about eliminating the error or silently truncating data. This is about ensuring the developer or DBA can find the source of the error easily.
Totally agree with you, Steve. Nowhere did you suggest it wasn't an error, and the link you provided is pretty good confirmation of that.
--Jeff Moden
Change is inevitable... Change for the better is not.