July 16, 2003 at 8:28 am
Hi,
I have a very big text file which consists of 1.5 million records, but I am getting an error and the import is failing, so I want to break that text file into some smaller files. Is there an effective way of doing that?
Thanks
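For the splitting itself, here is a minimal sketch of one way to do it outside SQL Server, assuming the file is plain delimited text with one record per line; the file name and the 250,000-line chunk size below are placeholders, not anything from this thread:

# Split a large text file into smaller files of at most chunk_size lines each.
# The input/output names and chunk_size are illustrative only.
chunk_size = 250000
part = 0
out = None
with open("bigfile.txt", "r") as src:
    for i, line in enumerate(src):
        if i % chunk_size == 0:
            if out:
                out.close()
            part += 1
            out = open("bigfile_part%03d.txt" % part, "w")
        out.write(line)
if out:
    out.close()

Each output file then stays small enough to import and retry independently.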
July 16, 2003 at 8:45 am
What type of error are you receiving? We import files with millions of rows with no problems.
Diane
July 16, 2003 at 2:40 pm
"Too many columns found in the current row; non-whitespace characters found after the last defined column's data."
I could not find that row exactly, because each row in my text file corresponds to 3 or 4 rows in the table (one text file row = 3 or 4 SQL Server table rows).
How do I find where the exact problem is?
Thanks
July 16, 2003 at 4:14 pm
Sounds like you have delimiters embedded in your data or maybe text going into number fields, etc. I periodically receive some third party data that includes quotes, commas, and other inappropriate data - rather a pain to scrub.
To find the bad data, double-click the transform task, click Options, set a name for the error file, check the error text, source, and dest error options, and set the max error count to a low number like 2 or 3, just to trap an error or two and see what the problem is. Run the transform and check the error files to find where the bad delimiters are.
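If it helps to cross-check outside of DTS, a quick script can flag the lines whose field count does not match the expected layout; this is only a sketch, and the file name, comma delimiter, and expected column count are assumptions to replace with your own:

# Report lines whose field count differs from the expected layout.
# The file name, delimiter, and expected_cols are assumptions.
# A naive split inflates the count on rows with embedded delimiters,
# which is exactly what we want to catch here.
expected_cols = 12
delimiter = ","
with open("bigfile.txt", "r") as src:
    for lineno, line in enumerate(src, start=1):
        fields = line.rstrip("\n").split(delimiter)
        if len(fields) != expected_cols:
            print("line %d has %d fields" % (lineno, len(fields)))

The line numbers it prints point you at the rows to inspect in a text editor.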
In my case I have to create some empty columns at the beginning of the text file (i.e. prepend ,"","","", to each row) to allow the transform task to pull the data in. The bad rows end up with data in those dummy columns. Then I update the table, pulling the bad fields back into place, such as:
set col8 = col8 + ' ' + col9,
    col9 = col10,
    col10 = col11    -- etc.
where ISNUMERIC(SUBSTRING(col11, 1, 1)) = 0    -- i.e. the state is in the zip field
This is done in a separate table of bad rows which are then inserted into the final table.
By the way, Textpad is very helpful with large text files. I have opened a 550MB+ file with over 3 million rows.
Bill.
OCP