November 14, 2011 at 1:52 pm
We have a large number of jobs that take data from our Oracle database, transform it, and then import it into our SQL Server databases. We recently changed our Oracle source database from WIN1252 to UTF-8. This has broken our SSIS packages, which now produce multiple "cannot convert between unicode and non-unicode" errors.
Changing the source back to WIN1252 is not an option.
I believe we may have found a workaround by adding a data conversion task within the package. But there are over 500 packages, and mapping each column within each package will be VERY time consuming. (http://msdn.microsoft.com/en-us/library/aa337316%28v=SQL.90%29.aspx)
Has anyone tried this workaround before? Have you had any issues? I am worried about invalid characters, truncation, and the effects on performance.
I was looking into updating all the destination tables to have only unicode columns (nvarchar, nchar, etc.) to see if I could do a direct import, provided the columns were large enough. But from what I have read, the translations between the encodings will not match.
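For what it's worth, one reassuring point about the nvarchar route: every defined WIN1252 byte maps to a Unicode code point in the Basic Multilingual Plane, which is exactly the range a UCS-2 nvarchar column can hold. A quick sketch (my own check, not from this thread) that verifies this mapping:

```python
# Check that every defined Windows-1252 byte decodes to a Unicode
# code point within the BMP (<= U+FFFF), i.e. representable in a
# UCS-2 nvarchar column on SQL Server 2005.
decodable = []
for b in range(256):
    try:
        decodable.append(bytes([b]).decode("cp1252"))
    except UnicodeDecodeError:
        pass  # a handful of bytes (e.g. 0x81, 0x8D) are undefined in cp1252

assert all(ord(ch) <= 0xFFFF for ch in decodable)
print(len(decodable), "cp1252 bytes decode, all within the BMP")
```

So data that originated as WIN1252 cannot itself fall outside what nvarchar stores; the risk comes only from new UTF-8 characters the Oracle side may now accept.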
We are running SQL Server 2005 SP2.
Thank you for your time!
November 15, 2011 at 8:29 am
I was able to create a new table using the import/export wizard. The wizard was nice, as it created the correct columns needed to hold the unicode data. I have read that SQL Server stores unicode in the UCS-2 encoding scheme, while the source was UTF-8, so I'm unsure if there will be issues there. Has anyone taken this approach? Will the difference between the encodings cause issues?
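To illustrate the UCS-2 vs UTF-8 question: these are two encodings of the same Unicode code points, and UCS-2 is byte-identical to UTF-16LE for characters in the BMP, so a UTF-8 value decoded at the boundary round-trips losslessly. A minimal sketch (illustrative only, sample string is my own):

```python
# A UTF-8 source value survives storage as UCS-2/UTF-16LE as long as
# it contains no supplementary-plane characters (above U+FFFF).
source_utf8 = "Café – naïve €".encode("utf-8")   # what the Oracle side sends
text = source_utf8.decode("utf-8")               # decode once at the boundary
stored = text.encode("utf-16-le")                # how nvarchar bytes look
assert stored.decode("utf-16-le") == text        # lossless round trip

# Confirm every character here fits in the BMP, so UCS-2 suffices.
assert all(ord(ch) <= 0xFFFF for ch in text)
```

The caveat is characters outside the BMP (emoji, some CJK extensions): they are stored as UTF-16 surrogate pairs, which UCS-2-era string functions may count as two characters, so column lengths and LEN()-based logic can be off for such data.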