March 7, 2012 at 10:29 am
I need to write to a CSV file from a table in the DB. This is fairly easy and I have done it. But I have to create multiple files, each file can have only 99 records, and the file names should be file1... file2... filen.
In my query I have SELECT TOP 99 * FROM Table 1, then I write to the CSV file... after that I am stuck.
Any help would be greatly appreciated.
Thanks
March 7, 2012 at 10:46 am
It seems to me that you'll need to create a script object in a package to do the job. That will give you the most flexibility. Select the entire record set, then count the records you write to the file from your record set within your script.
I don't know of a way to do this with SSIS objects, other than a script object.
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
March 7, 2012 at 10:53 am
just an idea:
select *, (row_number() over (order by [object_id],[column_id]) - 1) /99 as FileNo
from sys.columns
As you can see, each row in the table is assigned an incrementing FileNo, one value per group of up to 99 records.
Then you can query the result per FileNo...
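For example, a minimal sketch of pulling one group at a time (wrapping the numbered result in a CTE; sys.columns here just stands in for your real table):

WITH numbered AS
(
    SELECT *,
           (ROW_NUMBER() OVER (ORDER BY [object_id], [column_id]) - 1) / 99 AS FileNo
    FROM sys.columns
)
SELECT *
FROM numbered
WHERE FileNo = 0; -- run once per FileNo value (0, 1, 2, ...), one result set per file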
March 7, 2012 at 11:19 am
The ROW_NUMBER idea looks like a great way to start. At some point it seems you are going to have to perform the file creation process an indeterminate number of times.
Some options:
1) Put DTEXEC in a cursor. Use the DTEXEC /SET option to pass a parameter to your SSIS query (to return the correct records) and the file name itself to the package (see the sketch below).
2) Put a For Each Loop in your package and loop through a record set that contains a distinct list of your filenames/record sets.
3) Scripting - not so familiar with that.
So choose your method of iteration (sounds better than loop or cursor).
Think about your skill sets. Option 1 emphasizes T-SQL skills, option 2 SSIS skills, option 3 scripting skills.
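To make option 1 concrete, here is a rough, untested sketch of a cursor that shells out to DTEXEC. The package path, variable names, table, and key column are all placeholders you would replace with your own, and xp_cmdshell must be enabled:

DECLARE @FileNo INT, @Cmd VARCHAR(1000);

DECLARE file_cursor CURSOR FOR
    SELECT DISTINCT FileNo
    FROM (SELECT (ROW_NUMBER() OVER (ORDER BY SomeKey) - 1) / 99 AS FileNo
          FROM dbo.SourceTable) d;

OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @FileNo;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- pass the group number and the target file name into the package
    SET @Cmd = 'dtexec /F "C:\Packages\ExportGroup.dtsx"'
        + ' /SET \Package.Variables[User::FileNo].Properties[Value];' + CAST(@FileNo AS VARCHAR(10))
        + ' /SET \Package.Variables[User::FileName].Properties[Value];file' + CAST(@FileNo + 1 AS VARCHAR(10)) + '.csv';
    EXEC master..xp_cmdshell @Cmd;
    FETCH NEXT FROM file_cursor INTO @FileNo;
END;
CLOSE file_cursor;
DEALLOCATE file_cursor;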
March 7, 2012 at 1:05 pm
I created a file that has 1000 records. Now I need to split that file into numerous files with 99 records each. How can I do that in SSIS? I tried using a For Loop and then a Row Count, but could not get further.
March 8, 2012 at 12:11 am
What is your source? First it is a table, then it is a file.
We need to know this or we can't help you.
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
March 8, 2012 at 4:23 am
You could try using an expression on the connection string for your destination that constructs the file name from variables that are incremented each time you SELECT your TOP 99.
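For example, you could set an expression on the flat file connection manager's ConnectionString property along these lines (the folder and variable name are just illustrative):

"C:\\Output\\file" + (DT_WSTR, 10) @[User::FileNo] + ".csv"

Each time you increment @[User::FileNo], the destination re-evaluates the expression and writes to a new file.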
[font="Courier New"]sqlmunkee[/font]
[font="Courier New"]Bringing joy and happiness via SQL Server since 1998[/font]
March 8, 2012 at 8:53 am
Koen Verbeeck (3/8/2012)
What is your source? First it is a table, then it is a file. We need to know this or we can't help you.
I could not figure out how to write to multiple files from a table source, so I created one file and am now trying to split that one big file into multiple files of N rows each.
e.g. BigFile.csv - 1000 rows
file1 - 100 rows
file2 - 100 rows
....
....
file10 - 100 rows
Thanks for your help.
March 8, 2012 at 9:02 am
OK, then it sounds like you have left the realm of SQL Server. Maybe this.
Or back the situation up to a single record set and explain to us what you tried that failed when you attempted this in SQL Server.
March 8, 2012 at 9:57 am
Do you know upfront how many little files you need to create?
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
March 8, 2012 at 10:11 am
Koen Verbeeck (3/8/2012)
Do you know upfront how many little files you need to create?
No, I don't.
March 8, 2012 at 10:31 am
If you can put a row number on it, what about using a conditional split and something like this?
RowNumber >= @[User::RecordLow] && RowNumber <= @[User::RecordHigh]
and update the variables after each write to a file, then reset them at the end of the run?
I haven't tried this myself, but it looks like it might work.
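For example, you could wrap the data flow in a For Loop container configured along these lines (untested; TotalRows is a hypothetical variable you would populate first, e.g. from a Row Count or an Execute SQL Task):

InitExpression:   @[User::RecordLow] = 1
EvalExpression:   @[User::RecordLow] <= @[User::TotalRows]
AssignExpression: @[User::RecordLow] = @[User::RecordLow] + 99

with @[User::RecordHigh] set to EvaluateAsExpression = True and the expression @[User::RecordLow] + 98, so the upper bound tracks the lower bound automatically.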
March 8, 2012 at 11:48 am
Recently did this.
Create a proc performing the file grouping like the ROW_NUMBER() approach above.
Use an Execute SQL Task to determine the maximum file iteration you need from the query. This can also be done with a MAX() aggregate in a data flow.
Feed this into a For Loop as the iterator. Filter on the iterator and create your files via an expression-fed data flow (see the sketch below).
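A rough T-SQL sketch of that proc and the max-iteration query (untested; the table, key column, and proc names are placeholders):

-- One result set per file, filtered by the For Loop's iterator
CREATE PROCEDURE dbo.GetFileGroup @FileNo INT
AS
BEGIN
    SET NOCOUNT ON;
    WITH numbered AS
    (
        SELECT *,
               (ROW_NUMBER() OVER (ORDER BY SomeKey) - 1) / 99 AS FileNo
        FROM dbo.SourceTable
    )
    SELECT *
    FROM numbered
    WHERE FileNo = @FileNo;
END;

-- For the Execute SQL Task: the highest FileNo the For Loop must reach
SELECT (COUNT(*) - 1) / 99 AS MaxFileNo
FROM dbo.SourceTable;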
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA
March 8, 2012 at 4:03 pm
Have to ask, but why the restriction on the number of records in each file? Personally, I can't think of a good business case for this.
March 8, 2012 at 5:11 pm
Lynn Pettis (3/8/2012)
Have to ask, but why the restriction on the number of records in each file? Personally, I can't think of a good business case for this.
In my case, it was an arbitrary restriction due to feeding a mainframe system. Some hills aren't worth dying on, so I implemented it instead of fighting the good fight.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA