July 10, 2013 at 5:52 pm
Hey Everyone,
I'm having a problem importing an Excel file that has a column with a variable data size that can get up to 4000 characters (it's a note field). I'm importing it into an nvarchar(max) field, but I keep getting a truncation error. I've gone into the package, opened the Advanced Editor for the source, and on the Input and Output Properties tab changed the Output Column length for the column to 4000. If I add a size-4000 record to the top of the file, it imports fine; but if it's at the bottom of the file, it errors out. I've heard people say that the Excel import sets the size dynamically based on the first X number of rows, but I have a hard time believing that I can't import the file just because my largest record is at the bottom.
Any help on this would be greatly appreciated, I'm at my wits end here!
Thanks,
Cat
July 11, 2013 at 12:38 am
You need to believe those people 🙂
The JET or ACE OLE DB provider by default scans the first 8 rows to determine each column's data type.
If the first 8 rows contain only short strings, it will assume it only needs short strings.
So you need to do two things:
* add IMEX=1 to the Extended Properties of your connection string. This tells the provider to treat columns with mixed data types as strings
* set the TypeGuessRows registry setting of the provider to 0. This tells the provider to scan all rows (up to a maximum of 16,384) instead of the default 8
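To make the two steps concrete, here's a sketch. The file path and the Excel/ACE version (12.0) below are placeholders, so adjust them to your setup. IMEX=1 goes inside Extended Properties:

```
Provider=Microsoft.ACE.OLEDB.12.0;Data Source=C:\Files\Notes.xlsx;
Extended Properties="Excel 12.0;HDR=YES;IMEX=1";
```

And the registry change, assuming the 32-bit ACE 12.0 provider (the exact key path depends on your provider version and bitness, so verify it in regedit first):

```
reg add "HKLM\SOFTWARE\Microsoft\Office\12.0\Access Connectivity Engine\Engines\Excel" /v TypeGuessRows /t REG_DWORD /d 0 /f
```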
More information about these actions can be found here: What’s the deal with Excel & SSIS?
It's possible that even this is not enough and that you'll have to change the data type of the column in the data flow to DT_NTEXT, but I'd have to test that myself.
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP