August 16, 2018 at 7:35 pm
Is it possible?
If my file has 50 columns that were written to it by a SELECT statement in a data flow in one of the previous tasks in the package,
can I (possibly using a Script Task? or what else, and how?) insert a column between column 13 and column 14 in the comma-delimited text file? The new column will have the same value for all rows: 'Hello, column'
Thanks..
Likes to play Chess
August 17, 2018 at 1:49 am
You could probably achieve this with a Script Task, yes; you could certainly do it with a Script Transformation (though those were introduced with SSDT 2012). Your post, however, seems to suggest that you simply want to add the same value to the end of every row; is that correct? If so, this would be quite a simple task for a script; at the end of the day a CSV is simply a text file, so if you treat it as such you can easily append more text to the end of each line. Have you tried anything at all so far? If you can post your code, we can see where you might have gone wrong, or push you in the right direction.
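If the goal really is a new column between columns 13 and 14 rather than one appended at the end, the row-splitting logic is simple either way. Here is a minimal sketch in Python (illustrative only, not the poster's actual package; the function name and sample data are made up). One wrinkle worth noting: the new value itself contains a comma, so the writer has to quote that field, which a CSV library handles for you:

```python
import csv
import io

def insert_column(src_text, position, value):
    """Return CSV text with `value` inserted as a new column at `position` (0-based)."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(src_text)):
        row.insert(position, value)   # same constant value on every row
        writer.writerow(row)          # QUOTE_MINIMAL quotes the comma-bearing field
    return out.getvalue()

sample = "a,b,c\n1,2,3\n"
print(insert_column(sample, 1, "Hello, column"))
# prints:
# a,"Hello, column",b,c
# 1,"Hello, column",2,3
```

Splitting on the raw comma yourself would work for the 50 existing columns only if none of them contain embedded commas or quotes; a proper CSV reader/writer avoids that trap.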
Thom~
Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
Larnu.uk
August 17, 2018 at 5:34 am
VoldemarG - Thursday, August 16, 2018 7:35 PM
Is it possible?
If my file has 50 columns that were written to it by a SELECT statement in a data flow in one of the previous tasks in the package,
can I (possibly using a Script Task? or what else, and how?) insert a column between column 13 and column 14 in the comma-delimited text file? The new column will have the same value for all rows: 'Hello, column'
Thanks..
It is possible, but not without rewriting the file. I think I'd probably use PowerShell, given that the new column will be the same throughout.
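The rewrite-in-place approach can be sketched like this (Python here purely for illustration; a PowerShell version would follow the same shape with Import-Csv/Export-Csv. The function name and column position are assumptions, not from the thread). It streams row by row through a temp file, so a 50-column file of any length is handled without loading it all into memory:

```python
import csv
import os
import tempfile

def rewrite_with_new_column(path, position, value):
    """Rewrite the delimited file at `path`, inserting `value` as a new column."""
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w", newline="") as out, open(path, newline="") as src:
        writer = csv.writer(out, lineterminator="\n")
        for row in csv.reader(src):
            row.insert(position, value)
            writer.writerow(row)
    os.replace(tmp_path, path)  # swap the rewritten file over the original
```

Writing to a temp file and swapping it in at the end means a failure mid-rewrite leaves the original file untouched.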
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
August 17, 2018 at 8:57 am
Why not just make sure that the full monty of data is present to begin with instead of writing out a file, then reading it back in, adding a column to it, and then writing it out again?
--Jeff Moden
Change is inevitable... Change for the better is not.
August 17, 2018 at 9:00 am
Jeff Moden - Friday, August 17, 2018 8:57 AM
Why not just make sure that the full monty of data is present to begin with instead of writing out a file, then reading it back in, adding a column to it, and then writing it out again?
I am hoping/assuming that there's a very good reason ...!
The absence of evidence is not evidence of absence.
Martin Rees
You can lead a horse to water, but a pencil must be lead.
Stan Laurel
August 17, 2018 at 9:04 am
Phil Parkin - Friday, August 17, 2018 9:00 AM
Jeff Moden - Friday, August 17, 2018 8:57 AM
Why not just make sure that the full monty of data is present to begin with instead of writing out a file, then reading it back in, adding a column to it, and then writing it out again?
I am hoping/assuming that there's a very good reason ...!
I'm sure there's a reason, I just doubt that it's good.
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA