January 29, 2004 at 4:15 pm
I would like to know how I could write a script that would import and process a text file (comma-separated values) where there are "Key Fields" in the text file and the fields may be in different orders.
For example, records in the text file may be as follows:
<First Record>
"ColumnName1","Data1","ColumnName2","Data2","ColumnName3","Data3",..."ColumnNameN","DataN"
<2nd Record>
"ColumnName1","Data1","ColumnName5","Data5","ColumnName22","Data22",..."ColumnNameN","DataN"
The data element to be imported will always FOLLOW the Key Field, which contains the column name into which the data should be placed. However, not all columns may be present in every record of the text file, so a standard CSV import would not work.
Any ideas? Thanks in advance.
Richard D. Cushing
Sr. Consultant
SYNERGISTIC SOFTWARE SOLUTIONS, LLC
January 29, 2004 at 7:24 pm
Sounds like a rather nasty type of job. I haven't tried anything like that, but here's something that might help.
Imagine getting the data into a temporary staging table using DTS or similar, where the table is just a series of columns, all varchar(). Then read that table with a cursor and build dynamic SQL to update the real table.
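Something along these lines, perhaps. This is only a rough sketch: the Staging and RealTable names are made up, and it assumes each staging row holds exactly two name/value pairs (c1..c4); a real file would need more pairs and some NULL handling.

-- Sketch only: read name/value pairs from a staging table and build a
-- dynamic INSERT per record. All object names are assumptions.
DECLARE @col1 varchar(128), @val1 varchar(255)
DECLARE @col2 varchar(128), @val2 varchar(255)
DECLARE @sql nvarchar(4000)

DECLARE rows_cur CURSOR FOR
    SELECT c1, c2, c3, c4 FROM Staging  -- each row: name, value, name, value

OPEN rows_cur
FETCH NEXT FROM rows_cur INTO @col1, @val1, @col2, @val2
WHILE @@FETCH_STATUS = 0
BEGIN
    -- The column list of the INSERT comes from the key fields themselves
    SET @sql = N'INSERT INTO RealTable ('
             + QUOTENAME(@col1) + N', ' + QUOTENAME(@col2) + N') VALUES ('''
             + REPLACE(@val1, '''', '''''') + N''', '''
             + REPLACE(@val2, '''', '''''') + N''')'
    EXEC sp_executesql @sql
    FETCH NEXT FROM rows_cur INTO @col1, @val1, @col2, @val2
END
CLOSE rows_cur
DEALLOCATE rows_cur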
Sounds like a lot of work and very inefficient. The "real" experts may have much better ways of going about the task.
January 30, 2004 at 6:58 am
Personally, I would preprocess the data outside of SQL and then load it.
But if you want to do it in SQL:
1. DTS the data into a staging table (varchar columns, as Mike suggested), including an IDENTITY column
2. Parse the data into rows of ColumnName/Data pairs (carrying the identity along)
3. Create a table consisting of all possible columns
4. Insert/update that table with the data pairs (see the sketch below)
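Here is a rough sketch of those four steps. It assumes at most three name/value pairs per record and only two possible target columns; Stage, Pairs, and Import are invented names for illustration.

-- Step 1 target: fixed varchar columns plus an IDENTITY to tag each record
CREATE TABLE Stage (
    id int IDENTITY(1,1),
    c1 varchar(255), c2 varchar(255),
    c3 varchar(255), c4 varchar(255),
    c5 varchar(255), c6 varchar(255)
)
-- (run the DTS load of the text file into Stage here)

-- Step 2: unpivot the fixed columns into (record id, ColumnName, Data) rows
CREATE TABLE Pairs (id int, ColumnName varchar(128), Data varchar(255))
INSERT INTO Pairs SELECT id, c1, c2 FROM Stage WHERE c1 IS NOT NULL
INSERT INTO Pairs SELECT id, c3, c4 FROM Stage WHERE c3 IS NOT NULL
INSERT INTO Pairs SELECT id, c5, c6 FROM Stage WHERE c5 IS NOT NULL

-- Steps 3 and 4: a target table holding every possible column, filled in
-- one column per pass, so columns missing from a record simply stay NULL
CREATE TABLE Import (id int, ColumnName1 varchar(255), ColumnName2 varchar(255))
INSERT INTO Import (id) SELECT DISTINCT id FROM Pairs

UPDATE i SET ColumnName1 = p.Data
FROM Import i JOIN Pairs p ON p.id = i.id AND p.ColumnName = 'ColumnName1'

UPDATE i SET ColumnName2 = p.Data
FROM Import i JOIN Pairs p ON p.id = i.id AND p.ColumnName = 'ColumnName2'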
Far away is close at hand in the images of elsewhere.
Anon.