September 6, 2008 at 10:52 pm
I am trying to insert a fixed-width flat file into a SQL table through the Import/Export Wizard.
1. All of the data goes into one column.
2. Even if I change the columns in the Advanced options, specifying matching input and output column widths, it still doesn't store correctly in the SQL table.
Can anybody tell me how to do it?
Thanks
September 7, 2008 at 5:31 pm
It's pretty simple with BULK INSERT and a BCP format file. Take a look in Books Online. If you were to provide the first 100 lines of the flat file, attach it to your next post as a .txt file, AND provide the record layout AND the CREATE statement for the target table, I'm thinking that someone could probably bang it out for you.
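To give a feel for what that looks like, here is a minimal sketch of a non-XML BCP format file for a fixed-width file. The three columns, their widths, and the file paths below are made-up placeholders, not an actual layout:

```
9.0
3
1   SQLCHAR   0   5    ""        1   TYPE    SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   10   ""        2   TNAME   SQL_Latin1_General_CP1_CI_AS
3   SQLCHAR   0   8    "\r\n"    3   NUM     ""
```

With fixed-width fields, the length column (5, 10, 8) tells BCP how many characters to read for each field, and the terminator is empty except for the row terminator on the last field. You would then load it with something like:

```sql
-- Hypothetical paths and table name; FIRSTROW = 2 skips the header row.
BULK INSERT dbo.YourTable
FROM 'C:\import\yourfile.txt'
WITH (FORMATFILE = 'C:\import\yourfile.fmt', FIRSTROW = 2);
```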
--Jeff Moden
Change is inevitable... Change for the better is not.
September 8, 2008 at 8:06 am
You should be able to do this pretty simply in SSIS, but without any example data and information about the fixed-width sizes it is hard to give you any kind of good advice.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
September 8, 2008 at 11:44 am
TYPE PFORM TNAME LOGONID NAME NUM
aaaaY Nsdffgg testb 14433333
bbbbRCH0C tesgggd testb 13343434
This is a sample of the text file; there are many more columns.
The value of each column starts at exactly the same position as the header, but in this posting it gets shifted a little here and there.
September 8, 2008 at 12:19 pm
Using SSIS, you can use your flat file as your source and the SQL table as your destination. In your destination, click on Mappings to match all the columns. If you see only one column, check your flat file connection and make sure the Format is correct: Delimited.
Then click on Columns and check your row and column delimiters.
September 8, 2008 at 12:36 pm
It is not a delimited file; it is a fixed-width flat file.
September 8, 2008 at 3:20 pm
What is the row delimiter?
Or is it fixed width including the last column?
If so, when you create the flat file connection in the data flow, map the columns accordingly. You should also have the total width for the file, which is marked by a red line during the mapping.
If you can attach a copy of your text file and the table code here, I will try to map it for you.
Thanks
September 8, 2008 at 7:18 pm
keywestfl9 (9/8/2008)
TYPE PFORM TNAME LOGONID NAME NUM
aaaaY Nsdffgg testb 14433333
bbbbRCH0C tesgggd testb 13343434
This is a sample of the text file; there are many more columns.
The value of each column starts at exactly the same position as the header, but in this posting it gets shifted a little here and there.
That doesn't help me help you... the whole row isn't displayed, and I was looking forward to you attaching one of the files. I'd also need the CREATE TABLE statement to really help.
The suggestion immediately above this post would also work.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 9, 2008 at 3:40 am
You can define a CR/LF delimiter for getting the data into rows. As far as columns are concerned, you can use the SUBSTRING function in SQL to create and fill the columns. E.g., if you need to split 'goodluck' into 2 columns - good|luck:
col1 = SUBSTRING(ColumnName, <starting position>, <length>)
col1 = SUBSTRING(ColumnName, 1, 4) ... will give 'good' from ColumnName, which can be loaded into the newly created column.
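A minimal sketch of that staging approach in T-SQL; the table names, column names, and field positions here are all hypothetical:

```sql
-- Stage each raw line of the file in a single wide column first
-- (e.g. via BULK INSERT with only a row terminator defined).
CREATE TABLE dbo.Staging (RawLine varchar(8000));

-- Then carve out each fixed-width field by its start position and length.
INSERT INTO dbo.Parsed (Col1, Col2, Col3)
SELECT
    SUBSTRING(RawLine, 1, 5),    -- characters 1-5
    SUBSTRING(RawLine, 6, 10),   -- characters 6-15
    SUBSTRING(RawLine, 16, 8)    -- characters 16-23
FROM dbo.Staging;
```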
September 9, 2008 at 5:28 am
GAURAVKAUSHIK26 (9/9/2008)
You can define a CR/LF delimiter for getting the data into rows. As far as columns are concerned, you can use the SUBSTRING function in SQL to create and fill the columns. E.g., if you need to split 'goodluck' into 2 columns - good|luck:
col1 = SUBSTRING(ColumnName, <starting position>, <length>)
col1 = SUBSTRING(ColumnName, 1, 4) ... will give 'good' from ColumnName, which can be loaded into the newly created column.
Yep, that would work, as well, and it'll be pretty fast. But a Bulk Insert using a BCP Format file would be faster.
Still waiting on the OP to attach a sample file and the CREATE statement for the target table, though...
--Jeff Moden
Change is inevitable... Change for the better is not.
July 20, 2009 at 9:37 am
The original poster never took your offer, but I'd like to. I haven't configured a BCP file using SSIS Bulk Insert before, but if it's going to process my file more quickly, I'd be interested in some help to get it set up correctly.
I've attached a snippet of the file. There's a Record Code in each row that indicates what the remainder of the row contains. The longest row in the file format is 832 characters. The attached snippet doesn't have all of the possible row types, but it's enough to get started.
Each row (except the header) begins with the following:
Client Code
Company Code
File Number
Client Payroll Year
Client Payroll Week
Client Payroll Number
Batch Number
Pay Number
Calc Number
Entry Number
Record Code
Record Sequence Number (Continuation Record Number)
After that, the content of the record changes based on the Record Code.
I've attached the file formats for all of the record codes contained in the snippet.
If you're still willing to help, I'd greatly appreciate it!
Let me know if you need anything else.
Thank you,
Jessica
June 28, 2010 at 4:08 am
In the data flow:
1) Use an OLE DB source.
2) Use a Character Map transformation.
3) Use a Flat File destination.
In the Character Map transformation (step 2), for each input column set the Destination to "In-place change", the Operation to "Full width", and the Output Alias to the column name.
In the Flat File destination (step 3), select "Fixed width" in the connection manager. Then, for each column, give the output character width matching the input field's character width, and check that the mapping is correct.
June 28, 2010 at 7:17 am
huber_jessica (7/20/2009)
The original poster never took your offer, but I'd like to. I haven't configured a BCP file using SSIS Bulk Insert before, but if it's going to process my file more quickly, I'd be interested in some help to get it set up correctly.
I've attached a snippet of the file. There's a Record Code in each row that indicates what the remainder of the row contains. The longest row in the file format is 832 characters. The attached snippet doesn't have all of the possible row types, but it's enough to get started.
Each row (except the header) begins with the following:
Client Code
Company Code
File Number
Client Payroll Year
Client Payroll Week
Client Payroll Number
Batch Number
Pay Number
Calc Number
Entry Number
Record Code
Record Sequence Number (Continuation Record Number)
After that, the content of the record changes based on the Record Code.
I've attached the file formats for all of the record codes contained in the snippet.
If you're still willing to help, I'd greatly appreciate it!
Let me know if you need anything else.
Thank you,
Jessica
Oh my... I'm sorry. I definitely lost track of this one. Sorry, Jessica.
Now, if you have the time, is there only one type of record (according to the record code) per file?
--Jeff Moden
Change is inevitable... Change for the better is not.
June 28, 2010 at 9:45 am
Wow! The client ended up going in a different direction for data conversion, so I never needed to finish that awful package. However, it's a standard file format for a popular payroll software company, so I would be interested to learn if there's an efficient method to handle it, because I'd like to be able to convert from it in the future.
To answer your question, there are multiple records of each record type in the file. It's historical payroll information for employees, so an employee might have 27 different record types - demographic information, federal tax history, tax filing statuses, deduction history, earnings history, hours history, etc. One person may also have the same record type repeated, like if they have a lot of deductions.
I ended up using conditional logic to parse the record based on the record code and then branching the logic from there to process the conversion for each type. I don't even remember what I chose to parse the file, which was my initial question here.
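That branch-by-record-code approach can be sketched in T-SQL as well; the record-code position (characters 1-2 here), the codes, the field positions, and the target tables are all hypothetical stand-ins for the real layout:

```sql
-- Route each staged raw line to a type-specific table based on its record code.
INSERT INTO dbo.EarningsHistory (ClientCode, Amount)
SELECT SUBSTRING(RawLine, 3, 4), SUBSTRING(RawLine, 7, 10)
FROM dbo.Staging
WHERE SUBSTRING(RawLine, 1, 2) = 'ER';   -- earnings records only

INSERT INTO dbo.DeductionHistory (ClientCode, DeductionCode)
SELECT SUBSTRING(RawLine, 3, 4), SUBSTRING(RawLine, 7, 3)
FROM dbo.Staging
WHERE SUBSTRING(RawLine, 1, 2) = 'DE';   -- deduction records only
```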
But thanks for the reply!
Jessica
June 29, 2010 at 6:49 pm
huber_jessica (6/28/2010)
Wow! The client ended up going in a different direction for data conversion, so I never needed to finish that awful package. However, it's a standard file format for a popular payroll software company, so I would be interested to learn if there's an efficient method to handle it, because I'd like to be able to convert from it in the future. To answer your question, there are multiple records of each record type in the file. It's historical payroll information for employees, so an employee might have 27 different record types - demographic information, federal tax history, tax filing statuses, deduction history, earnings history, hours history, etc. One person may also have the same record type repeated, like if they have a lot of deductions. I ended up using conditional logic to parse the record based on the record code and then branching the logic from there to process the conversion for each type. I don't even remember what I chose to parse the file, which was my initial question here.
But thanks for the reply!
Jessica
Yuck! 😛 Sounds almost like :sick: EDI "formatting". Sounds like you sussed it, though. Nicely done, and thanks for taking the time for the feedback. It's much appreciated. 🙂
--Jeff Moden
Change is inevitable... Change for the better is not.