January 7, 2010 at 6:43 pm
Hi,
I have to load a CSV file on a monthly basis. The columns in the file are not fixed.
For example: Id, 200911, 200910 might be the columns in one file
and Id, 200811, 200810, 200809 might be the columns in another file.
I have to go to the properties of the flat file connection manager and click 'Reset Columns', which works fine.
1) Is there any way to use a Script Task to 'reset' the columns?
2) I googled and found that we can use a Script Task to load the columns dynamically.
Can anyone provide me with the steps and code for this? I can't do complicated coding in a Script Task.
Thanks in advance
January 8, 2010 at 2:14 am
Sounds to me like you are asking someone here to provide you with a working solution. That's not really how this forum works - people here will give you all the help they can so that you can do the work yourself, but generally they're not going to send you the finished article ... unless you pay them 🙂
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
January 8, 2010 at 9:11 am
If you can use third-party solutions, check the CozyRoc Data Flow Task Plus. It has the ability to extract column information dynamically at runtime and load the data to another dynamic destination. You can also define a mapping dictionary between them. The task doesn't require you to be a programmer, either.
January 3, 2011 at 11:14 am
I'm looking to do something similar without using a third party solution.
We've got files coming in that are all in different formats but need to go to the same destination. It would be GREAT to be able to pull the column definitions from a table and use parameters/expressions to set the connection manager properties for that specific file. Currently we have to write new packages every time we get a new file type and while it works it's not exactly elegant.
I'm not seeing anywhere I can go to set the properties for each column in a flat file data connection. My sad guess is that they don't exist.
SQL Server 2005/ VS 2005 here.
January 4, 2011 at 12:37 am
JustMarie (1/3/2011)
I'm looking to do something similar without using a third party solution. We've got files coming in that are all in different formats but need to go to the same destination. It would be GREAT to be able to pull the column definitions from a table and use parameters/expressions to set the connection manager properties for that specific file. Currently we have to write new packages every time we get a new file type and while it works it's not exactly elegant.
I'm not seeing anywhere I can go to set the properties for each column in a flat file data connection. My sad guess is that they don't exist.
SQL Server 2005/ VS 2005 here.
I'm afraid it can't be done.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
January 4, 2011 at 12:46 am
Phil Parkin (1/4/2011)
JustMarie (1/3/2011)
I'm looking to do something similar without using a third party solution. We've got files coming in that are all in different formats but need to go to the same destination. It would be GREAT to be able to pull the column definitions from a table and use parameters/expressions to set the connection manager properties for that specific file. Currently we have to write new packages every time we get a new file type and while it works it's not exactly elegant.
I'm not seeing anywhere I can go to set the properties for each column in a flat file data connection. My sad guess is that they don't exist.
SQL Server 2005/ VS 2005 here.
I'm afraid it can't be done.
I'm going to have to disagree with Phil, slightly, here. There are workarounds. However, you can't do it directly. The short form of the issue is that you can't dynamically modify the metadata for connections/tables outside of design time.
Now, the workarounds are actually more painful than building a separate SSIS package per file type, or one package with a chain of tasks enabled/disabled depending on the file type - unless your files change daily and come with a formatting file.
If you have to deal with that (and here's hoping you don't), what you end up doing is creating a roll-your-own file interpreter: you load the rows wholesale into a varchar(8000) column and then process them internally based on what you received in the format file, as sketched below. You could, if you wanted, manually build your own lookup data as well.
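To make the shape of that concrete, here is a minimal T-SQL sketch of the single-column staging load. The table and file names (dbo.RawStage, C:\feeds\monthly.csv) are made up for illustration, and the header detection assumes the 'Id, ...' layout from the original post:

-- Stage every line of the file as one uninterpreted row.
CREATE TABLE dbo.RawStage (RawLine varchar(8000) NULL);

-- FIELDTERMINATOR is set to the null character so no field splitting
-- happens, which means a changing column list can never break the load.
BULK INSERT dbo.RawStage
FROM 'C:\feeds\monthly.csv'
WITH (FIELDTERMINATOR = '\0', ROWTERMINATOR = '\n');

-- Discover this month's column list from the header row
-- (assumes the header is the only line starting with 'Id,').
DECLARE @header varchar(8000);
SELECT @header = RawLine
FROM dbo.RawStage
WHERE RawLine LIKE 'Id,%';

-- From @header you would build dynamic SQL that splits the remaining
-- rows and inserts them into the real destination; on 2005 a
-- tally-table split is the usual technique (splitting logic omitted).
PRINT @header;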
That said... unless you absolutely have to do something that painful... it can't be done. :Whistling:
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
January 4, 2011 at 2:06 am
I'm not sure SSIS is the right tool for this...
Why not load and process the file using T-SQL? It's kinda old-fashioned, but I'm confident it's much less painful than trying to tweak SSIS...
January 4, 2011 at 8:55 am
After I started doing the research into this I found out that even once the files get imported there are processing variations. :rolleyes:
We have no control over the format of the source files, and finding out that there are processing differences for each one had me tell my boss that the way they've been doing it - with new packages for each file type - is the proper way to handle the situation. He was disappointed, but there's no easier or more efficient way to do it.
Thanks to everyone who answered. It would have helped had I known "the rest of the story" so I could have given the correct response in the first place.
January 4, 2011 at 9:13 am
JustMarie (1/4/2011)
...but there's no easier or more efficient way to do it....
Would it be possible for you to provide two files and the final table structure ( = expected result)?
If the process is based on predictable rules, I'd guess there is a generic solution. It might not be easy, but it should be more efficient.
January 4, 2011 at 9:15 am
JustMarie (1/4/2011)
After I started doing the research into this I found out that even once the files get imported there are processing variations. :rolleyes: We have no control over the format of the source files, and finding out that there are processing differences for each one had me tell my boss that the way they've been doing it - with new packages for each file type - is the proper way to handle the situation. He was disappointed, but there's no easier or more efficient way to do it.
Thanks to everyone who answered. It would have helped had I known "the rest of the story" so I could have given the correct response in the first place.
Of course there is an easier and more efficient way, but it involves a third-party solution (the one described above). CozyRoc's solution was designed explicitly based on the feedback people have posted in these forums. It is a small premium to pay when you consider the time you will save from a maintenance standpoint alone. And the solution above doesn't require programming skills.
January 4, 2011 at 7:13 pm
Hello everyone,
I agree with Craig and disagree with Phil.
I succeeded in loading the file. Yes, it seems impossible, but I succeeded in loading the file dynamically without using any third party tool.
I took the help of a .NET developer and he coded the Script Task for me in about an hour :). The credit goes to him. I had to accommodate two Excel files in a single DFT, depending on the file received from the external team.
Edit: I mentioned Excel here; it was actually a CSV file.
January 5, 2011 at 12:55 am
touchmeknot (1/4/2011)
Hello everyone, I agree with Craig and disagree with Phil.
I succeeded in loading the file. Yes, it seems impossible, but I succeeded in loading the file dynamically without using any third party tool.
I took the help of a .NET developer and he coded the Script Task for me in about an hour :). The credit goes to him. I had to accommodate two Excel files in a single DFT, depending on the file received from the external team.
My 'it can't be done' comment was directed specifically at this requirement:
We've got files coming in that are all in different formats but need to go to the same destination. It would be GREAT to be able to pull the column definitions from a table and use parameters/expressions to set the connection manager properties for that specific file. Currently we have to write new packages every time we get a new file type and while it works it's not exactly elegant.
I'm not seeing anywhere I can go to set the properties for each column in a flat file data connection.
So ... what you are saying, in disagreeing with me, is that your .NET developer has been able to write a script task which dynamically modifies the connection manager properties for a connection within a package and then use that modified connection in a data flow task.
If that is true, you need to post the code for the community and I will nominate the developer for a medal.
What I was certainly not saying, just to clarify, is that there is no way of achieving a dynamic data load based on fluid external metadata. As I'm sure you know, almost every IT requirement is possible if you throw enough resources at it and there is no constraint on the methods used to implement the solution.
The absence of evidence is not evidence of absence
- Martin Rees
The absence of consumable DDL, sample data and desired results is, however, evidence of the absence of my response
- Phil Parkin
January 6, 2011 at 8:10 pm
Phil, I never meant to offend you. People agree and disagree about many things.
But kindly do not post a reply if it will discourage others. People post here when they are in need, or when they need advice and ideas from experts. Your post (at least here) doesn't help anyone.
And yes, I would love to nominate my colleague. Kindly read my first post - I wanted to load two Excel files; the problem is described there. I managed it with a single DFT along with a Script Task. He helped me with his code, without which it wouldn't have been possible.
As mentioned in my first post, I am not an expert, but I tested the package and, FYI, it is in production now.
January 6, 2011 at 8:30 pm
touchmeknot (1/6/2011)
Phil, I never meant to offend you. People agree and disagree about many things. But kindly do not post a reply if it will discourage others. People post here when they are in need, or when they need advice and ideas from experts. Your post (at least here) doesn't help anyone.
And yes, I would love to nominate my colleague. Kindly read my first post - I wanted to load two Excel files; the problem is described there. I managed it with a single DFT along with a Script Task. He helped me with his code, without which it wouldn't have been possible.
As mentioned in my first post, I am not an expert, but I tested the package and, FYI, it is in production now.
Your post is very rude. Phil's original reply was spot on, because you never made any effort to post your results or code or anything else. It is strange to blame him for your laziness.
And you are still not posting the code for the community to see and learn from, which is also NOT cool.
January 6, 2011 at 10:35 pm
Just to add, seeing that the original question was about reading in a CSV file: I've got a scenario where I need to read in txt files from the cell phone company containing the various responses sent back by clients. These files vary in format and delimiters (as a client can reply with a comma, quote, double quote or any other characters).
Seeing that a response can only be so long, I read the complete line into one column in a table (with an ID column for sorting) in SQL Server.
Then in SQL Server I can code to my heart's content what I want to do with the data in that column, gaining much more control than in SSIS. Sometimes you will find linefeed characters in the responses as well; in SSIS there is no way you will be able to control this, but in SQL I can join the lines back up together and put everything into one line again, as sketched below.
Remember, SSIS is just a tool. Sometimes it's better to get the data into SQL and then massage it.
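As a minimal sketch of that rejoin step - the table name dbo.Inbound and the "record starts with a digit" rule are illustrative assumptions; the real rule depends on the feed:

-- Staged lines in load order; the IDENTITY preserves file order.
CREATE TABLE dbo.Inbound
(
    LineId  int IDENTITY(1,1) PRIMARY KEY,
    RawLine varchar(8000) NOT NULL
);

-- Tag each staged line with the LineId of the record it belongs to:
-- the most recent line at or before it that looks like a record start.
-- (Fragments before the first recognised start group under NULL,
-- which is easy to inspect afterwards.)
WITH Tagged AS
(
    SELECT i.LineId,
           i.RawLine,
           (SELECT MAX(s.LineId)
              FROM dbo.Inbound s
             WHERE s.LineId <= i.LineId
               AND s.RawLine LIKE '[0-9]%') AS RecordId
      FROM dbo.Inbound i
)
-- Glue each record's fragments back into one line. FOR XML PATH('')
-- with TYPE/.value() is the usual 2005-era concatenation technique
-- (STRING_AGG only arrived in 2017).
SELECT t.RecordId,
       (SELECT t2.RawLine AS [text()]
          FROM Tagged t2
         WHERE t2.RecordId = t.RecordId
         ORDER BY t2.LineId
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)') AS FullLine
  FROM Tagged t
 GROUP BY t.RecordId;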