September 1, 2013 at 4:08 pm
Hi Professionals,
I have a script that exports to a CSV file:
exec exporttocsv 'select top 5 * from newtable', 'test.csv'
The problem I am encountering is that some of the CSV files are over 100MB, making them virtually impossible to open, manipulate, or email to work colleagues.
I am sure it is possible but do not know the correct syntax: is there a way to select, say, the first 100,000 rows, then the next 100,000 rows, and so on, and finally however many rows are left, into more than one CSV? For example (pseudocode):
exec exporttocsv 'select top 100 * from newtable', 'test.csv'
exec exporttocsv 'select NEXT 100,000 rows from newtable', 'test1.csv'
exec exporttocsv 'select REMAINING rows from newtable', 'test2.csv'
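Something along these lines is what I am after. I have seen OFFSET/FETCH mentioned for SQL Server 2012, so perhaps it would look like this (just my guess, with id standing in for whatever column defines the row order):

exec exporttocsv 'select * from newtable order by id offset 0 rows fetch next 100000 rows only', 'test.csv'
exec exporttocsv 'select * from newtable order by id offset 100000 rows fetch next 100000 rows only', 'test1.csv'
exec exporttocsv 'select * from newtable order by id offset 200000 rows fetch next 100000 rows only', 'test2.csv'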
September 2, 2013 at 3:00 am
Oracle765 (9/1/2013)
Hi Professionals, I have a script that exports to a CSV file:
exec exporttocsv 'select top 5 * from newtable', 'test.csv'
The problem I am encountering is that some of the CSV files are over 100MB, making them virtually impossible to open, manipulate, or email to work colleagues.
I am sure it is possible but do not know the correct syntax: is there a way to select, say, the first 100,000 rows, then the next 100,000 rows, and so on, and finally however many rows are left, into more than one CSV? For example (pseudocode):
exec exporttocsv 'select top 100 * from newtable', 'test.csv'
exec exporttocsv 'select NEXT 100,000 rows from newtable', 'test1.csv'
exec exporttocsv 'select REMAINING rows from newtable', 'test2.csv'
Hopefully the following template code will help you solve this problem:
declare @PageSize int, -- or bigint if needed
        @PageNum  int; -- or bigint if needed
set @PageSize = 1000; -- or whatever size you want
set @PageNum = 1;
while ((@PageSize * (@PageNum - 1)) + 1) <= (select count(*) )from <schema_name>.<table_name>)
begin
    select
        src.*
    from
        <schema_name>.<table_name> src
        inner join (select rn = row_number() over (order by <pk_column>), <pk_column>
                    from <schema_name>.<table_name>) dt (rn, <pk_column>)
            on (src.<pk_column> = dt.<pk_column>)
    where
        dt.rn between ((@PageSize * (@PageNum - 1)) + 1) and (@PageSize * @PageNum);
    set @PageNum = @PageNum + 1;
end
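To tie it back to your exporttocsv proc: build each page's SELECT as a string inside the loop and hand it to the proc with a different file name per page. A rough, untested sketch, assuming a table dbo.newtable with an id primary key (swap in your real names):

declare @PageSize int, @PageNum int, @RowCount int,
        @Sql varchar(max), @FileName varchar(260);
set @PageSize = 100000;
set @PageNum = 1;
select @RowCount = count(*) from dbo.newtable; -- assumed table name
while ((@PageSize * (@PageNum - 1)) + 1) <= @RowCount
begin
    -- build the paged query as a string so it can be handed to exporttocsv
    set @Sql = 'select src.* from dbo.newtable src '
             + 'inner join (select rn = row_number() over (order by id), id '
             + 'from dbo.newtable) dt (rn, id) on src.id = dt.id '
             + 'where dt.rn between ' + cast(((@PageSize * (@PageNum - 1)) + 1) as varchar(20))
             + ' and ' + cast((@PageSize * @PageNum) as varchar(20));
    set @FileName = 'test' + cast(@PageNum as varchar(10)) + '.csv';
    exec exporttocsv @Sql, @FileName; -- one csv per page
    set @PageNum = @PageNum + 1;
end

Note the order by on the primary key: without a deterministic order the pages could overlap or miss rows between passes.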
September 8, 2013 at 5:47 pm
Hi there,
This just comes up with the error "Incorrect syntax near the keyword 'from'".
September 9, 2013 at 7:13 am
Oracle765 (9/8/2013)
Hi there, this just comes up with the error "Incorrect syntax near the keyword 'from'".
That's because, as Lynn pointed out, the code is a template. You need to replace <schema_name> with your schema name and <table_name> with your table name. You'll also need to replace <pk_column> with your primary key column.
Please make sure you review and understand code you get from the internet before running it in your production environment.
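For example, if your table were dbo.newtable with primary key id (names assumed here for illustration), every occurrence of <schema_name>.<table_name> becomes dbo.newtable and every <pk_column> becomes id, so the join in the template turns into:

inner join (select rn = row_number() over (order by id), id
            from dbo.newtable) dt (rn, id)
    on (src.id = dt.id)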
September 9, 2013 at 9:46 am
Oracle765 (9/8/2013)
Hi there, this just comes up with the error "Incorrect syntax near the keyword 'from'".
I'm sorry, but my crystal ball is broken, and though the force may be strong, it isn't that strong. What I provided was a template from which you could develop a solution to your problem.
What you posted was simply not enough to provide you with a fully developed and tested solution. You get back what you put into your question.
If you want more from us (we are all volunteers on this site, helping when and where we can on our own time), then you need to provide more details about your problem. A good starting point would be the first article I reference in my signature block below about asking for help. It walks you through what to post and how to post it so that you get the best answers possible, plus tested code.
September 10, 2013 at 8:16 am
Lynn Pettis (9/2/2013)
Oracle765 (9/1/2013)
Hi Professionals, I have a script that exports to a CSV file:
exec exporttocsv 'select top 5 * from newtable', 'test.csv'
The problem I am encountering is that some of the CSV files are over 100MB, making them virtually impossible to open, manipulate, or email to work colleagues.
I am sure it is possible but do not know the correct syntax: is there a way to select, say, the first 100,000 rows, then the next 100,000 rows, and so on, and finally however many rows are left, into more than one CSV? For example (pseudocode):
exec exporttocsv 'select top 100 * from newtable', 'test.csv'
exec exporttocsv 'select NEXT 100,000 rows from newtable', 'test1.csv'
exec exporttocsv 'select REMAINING rows from newtable', 'test2.csv'
Hopefully the following template code will help you solve this problem:
declare @PageSize int, -- or bigint if needed
        @PageNum  int; -- or bigint if needed
set @PageSize = 1000; -- or whatever size you want
set @PageNum = 1;
while ((@PageSize * (@PageNum - 1)) + 1) <= (select count(*) )from <schema_name>.<table_name>)
begin
    select
        src.*
    from
        <schema_name>.<table_name> src
        inner join (select rn = row_number() over (order by <pk_column>), <pk_column>
                    from <schema_name>.<table_name>) dt (rn, <pk_column>)
            on (src.<pk_column> = dt.<pk_column>)
    where
        dt.rn between ((@PageSize * (@PageNum - 1)) + 1) and (@PageSize * @PageNum);
    set @PageNum = @PageNum + 1;
end
Actually, I ran it after replacing everything to fit my schema and still got an error. It's on this line:
(select count(*) )from <schema_name>.<table_name>)
Note the extraneous parenthesis just before the FROM.
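With that parenthesis removed, the line compiles:

(select count(*) from <schema_name>.<table_name>)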
This is really cool though. Thanks Lynn.