February 1, 2010 at 11:01 am
How do I import a text file with inconsistent data at random intervals into a SQL table?
Sample data:
a b c d e f
a b c d e f
a b c d e f g h
a b c d e f
a b c d e f
a b c d e f
a b c d e f
a b c d e f
a b c d e f g h
a b c d e f
a b c d e f
a b c d e f g h
Thanks
February 1, 2010 at 12:11 pm
Standard answer for imprecise questions: "It depends."
If you import that sample data into one column, the variable length doesn't matter as long as the column is wide enough.
If you need to import it into a table with separate columns, then the table would have to be predefined since there are no column definitions in your sample data.
And much more if then else...
Please provide more detailed information, including ready-to-use sample data, how you'd like to import the data (SSIS, SSMS, bcp or anything else), the target table definition, and so on...
February 2, 2010 at 10:44 am
Obviously I wouldn't have asked if I wanted to import into one column 🙂
The columns are:
February 2, 2010 at 10:46 am
These are the columns:
a b c d e f
a b c d e f
a b c d e f g h
a b c d e f
a b c d e f
a b c d e f
a b c d e f
a b c d e f
a b c d e f g h
a b c d e f
a b c d e f
a b c d e f g h
Throughout, there are 6 columns (a b c d e f), except for some rows which have 8 columns (a b c d e f g h).
I am using SSIS to import the data into a table.
February 2, 2010 at 2:00 pm
One approach would be to import all the data into a table with 9 columns and then parse the first column out into the other columns.
CREATE TABLE Test
(
[All] varchar(20),
[1] varchar(1),
[2] varchar(1),
[3] varchar(1),
[4] varchar(1),
[5] varchar(1),
[6] varchar(1),
[7] varchar(1),
[8] varchar(1)
)
INSERT INTO Test ([All]) VALUES('a b c d e f g h')
--Then use SUBSTRING directly; a subquery like (SELECT SUBSTRING([All],1,1) FROM Test) would fail once the table holds more than one row
UPDATE Test SET [1] = SUBSTRING([All],1,1)
UPDATE Test SET [2] = SUBSTRING([All],3,1)
--etc...
SELECT * FROM Test
DROP TABLE Test
So your SSIS package would do the import and then run an Execute SQL Task to populate your columns.
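To cope with the rows that only have 6 values, the statement in the Execute SQL Task can check the line length before filling columns 7 and 8. A rough sketch, assuming the values really are single characters separated by single spaces (so every value sits at a fixed position):
UPDATE Test SET
[1] = SUBSTRING([All],1,1),
[2] = SUBSTRING([All],3,1),
[3] = SUBSTRING([All],5,1),
[4] = SUBSTRING([All],7,1),
[5] = SUBSTRING([All],9,1),
[6] = SUBSTRING([All],11,1),
--11-character rows ('a b c d e f') leave [7] and [8] NULL; 15-character rows also pick up g and h
[7] = CASE WHEN LEN([All]) >= 13 THEN SUBSTRING([All],13,1) END,
[8] = CASE WHEN LEN([All]) >= 15 THEN SUBSTRING([All],15,1) END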
February 4, 2010 at 6:58 am
emily-1119612 (2/2/2010)
One approach would be to import all the data into a table with 9 columns ...
Wouldn't this fail at the import stage, because SSIS would validate the number of fields per row?
I would be tempted to bring the data into SSIS as one field and then use derived columns to do the parsing in the pipeline.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
February 5, 2010 at 8:01 am
In a situation like this I would use Phil's approach: BULK INSERT into a staging table with one field, and then parse the data with substring functions to extract the different fields into separate tables.
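Something like this as a rough sketch (the path and table name are just placeholders):
CREATE TABLE Staging ([All] varchar(50))
BULK INSERT Staging
FROM 'C:\import\data.txt' --placeholder path
WITH (ROWTERMINATOR = '\n')
--the default field terminator is a tab, so each space-delimited line lands whole in [All]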
/Gosta
February 5, 2010 at 11:21 am
Phil Parkin (2/4/2010)
emily-1119612 (2/2/2010)
One approach would be to import all the data into a table with 9 columns ...
Wouldn't this fail at the import stage, because SSIS would validate the number of fields per row?
I would be tempted to bring the data into SSIS as one field and then use derived columns to do the parsing in the pipeline.
I think we have the same idea, but my choice of words was poor. I should have said: "One approach would be to import all the data into a single column within a table with 9 columns and then parse the data into the other columns." Hopefully my DDL statements demonstrated what I was thinking better than my poor sentence did.
February 5, 2010 at 11:25 am
I was obviously in a 'read English, not code' mood when I responded, because I didn't even look at your DDL. Now that I have, I see that we were on the same path, just taking slightly different approaches ...