January 22, 2007 at 2:02 pm
Hi,
I have several tables that have been exported from Oracle 10g to text files using the UTF-8 encoding. These files contain foreign characters - Chinese, Russian, Thai, whatever. I then need to import these files into the corresponding SQL Server 2000 tables.
In the "Text File" connection object, I started by specifying Unicode as the file type, {LF} as the row delimiter, and <none> as the text qualifier. This gives me a "Could not find the selected row delimiter within the first 8 KB of data. Is the selected row delimiter valid?" error. (It is a valid row delimiter.) Needless to say, I can't go any further in the process.
If I change the file type to ANSI, I do not receive any errors, but the localized (non-ASCII) text is lost.
Your assistance is appreciated.
TIA,
Mike
January 23, 2007 at 7:35 am
How do you define the SQL table? The only data types that support Unicode in SQL Server are NVARCHAR, NCHAR, and NTEXT.
January 23, 2007 at 7:52 am
I created a text file with Chinese characters in it and saved it as a Unicode text file. Then I created a table with NVARCHAR for the fields containing Chinese characters. I used a DTS Transform Data task to move the data from the text file to the table, and the Chinese characters came out fine in the table.
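A minimal version of that test file can be generated programmatically, for anyone who wants to reproduce the experiment. This is a sketch; the file name and sample rows are placeholders, and UTF-16 LE with a BOM is assumed to be what "saved as Unicode" produces:

```python
# Sketch: write a small "Unicode" (UTF-16 LE + BOM) text file containing
# Chinese characters, tab-delimited with CRLF row endings.
# "sample.txt" and the sample rows are placeholders.
import codecs

rows = ["你好\t世界", "数据\t测试"]  # tab-delimited sample rows

with open("sample.txt", "wb") as f:
    f.write(codecs.BOM_UTF16_LE)                         # byte-order mark
    f.write(("\r\n".join(rows) + "\r\n").encode("utf-16-le"))
```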