January 21, 2016 at 11:26 am
Hi,
I have a table with 3000 records that has no primary key or timestamp column. I need to write a stored procedure that returns 100 records each time it runs: the first run should return the first 100, the second run the next 100, and so on until the end of the table. I can't change any process in my application; I was asked to produce this result :(. Can someone help me with this? I am using SQL 2012.
Thanks
January 21, 2016 at 11:33 am
Why doesn't your table have a primary key? What order do you want the rows returned in? I would look into using ROW_NUMBER and paging methods.
For better, quicker answers, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
January 21, 2016 at 11:40 am
The table was provided by a product that we use at a client location. The order can be anything, but the batches of 100 need to cover all the records in the table.
Thanks,
January 21, 2016 at 11:44 am
Rows are not guaranteed to return in any specific order unless you specify an ORDER BY clause. Please provide the DDL for the table so we have a better picture of what we're working with.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
January 21, 2016 at 11:53 am
Here is the query:
SELECT [USERLOGIN]
      ,[USERFIRSTNAME]
      ,[USERLASTNAME]
      ,[ACTIVETODATE]
      ,[ACTIVATED]
      ,[DEACTIVATED]
      ,[USERID] -- Unique ID
FROM
January 21, 2016 at 12:50 pm
SELECT [USERLOGIN]
      ,[USERFIRSTNAME]
      ,[USERLASTNAME]
      ,[ACTIVETODATE]
      ,[ACTIVATED]
      ,[DEACTIVATED]
      ,[USERID]
FROM (
    SELECT [USERLOGIN]
          ,[USERFIRSTNAME]
          ,[USERLASTNAME]
          ,[ACTIVETODATE]
          ,[ACTIVATED]
          ,[DEACTIVATED]
          ,[USERID]
          ,ROW_NUMBER() OVER (ORDER BY USERID) AS Row
    FROM ) v
WHERE Row BETWEEN 1 AND 100 -- increment here
--WHERE Row BETWEEN 101 AND 200 -- etc...
-- DDL
--create table [User] (
-- [USERLOGIN] nvarchar(100)
--,[USERFIRSTNAME] nvarchar(100)
--,[USERLASTNAME] nvarchar(100)
--,[ACTIVETODATE] datetime
--,[ACTIVATED] bit
--,[DEACTIVATED] bit
--,[USERID] int not null)
January 21, 2016 at 1:49 pm
Small footnote to Mike's solution: this works well when UserID is guaranteed to be unique. Otherwise it is a bit risky. If, for instance, rows #100 and #101 share the same UserID, they will again be rows #100 and #101 in the next execution, but they can switch places, so you could get the same row as #100 in the first batch and again as #101 in the second batch.
A similar problem can occur when the data changes between executions.
For a problem like this, the preferred solution is to cache the returned data in the application and do the paging from there. If that is really impossible, then I would capture the data in a temporary table, using an IDENTITY column to add numbers. Define the clustered index on that identity column, and use that to return 100-row chunks to the client.
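A minimal sketch of that temporary-table approach, assuming the [User] table from the DDL posted above; the global temp table name ##UserPaging and the @batch variable are illustrative, not part of any post in this thread:

```sql
-- Snapshot the rows once, numbering them with an IDENTITY column,
-- so later batches are stable even if the base table changes.
IF OBJECT_ID('tempdb..##UserPaging') IS NULL
BEGIN
    CREATE TABLE ##UserPaging (
        RowNum        INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
        USERLOGIN     nvarchar(100),
        USERFIRSTNAME nvarchar(100),
        USERLASTNAME  nvarchar(100),
        ACTIVETODATE  datetime,
        ACTIVATED     bit,
        DEACTIVATED   bit,
        USERID        int NOT NULL
    );

    INSERT INTO ##UserPaging (USERLOGIN, USERFIRSTNAME, USERLASTNAME,
                              ACTIVETODATE, ACTIVATED, DEACTIVATED, USERID)
    SELECT USERLOGIN, USERFIRSTNAME, USERLASTNAME,
           ACTIVETODATE, ACTIVATED, DEACTIVATED, USERID
    FROM [User];
END

-- Page on the stable RowNum: @batch = 1 returns rows 1-100,
-- @batch = 2 returns rows 101-200, and so on.
DECLARE @batch INT = 1;

SELECT USERLOGIN, USERFIRSTNAME, USERLASTNAME,
       ACTIVETODATE, ACTIVATED, DEACTIVATED, USERID
FROM ##UserPaging
WHERE RowNum BETWEEN (@batch - 1) * 100 + 1 AND @batch * 100
ORDER BY RowNum;
```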
January 21, 2016 at 2:29 pm
For clarification: the OP mentioned that UserID was unique (although it was somewhat hidden).
Here is the query:
SELECT [USERLOGIN]
      ,[USERFIRSTNAME]
      ,[USERLASTNAME]
      ,[ACTIVETODATE]
      ,[ACTIVATED]
      ,[DEACTIVATED]
      ,[USERID] -- Unique ID
FROM
January 21, 2016 at 3:21 pm
Here is another solution that uses the paging functionality (OFFSET ... FETCH) introduced in SQL 2012.
DECLARE @pg_num  INT = 1,
        @pg_size INT = 100;

SELECT [USERLOGIN]
      ,[USERFIRSTNAME]
      ,[USERLASTNAME]
      ,[ACTIVETODATE]
      ,[ACTIVATED]
      ,[DEACTIVATED]
      ,[USERID]
FROM
ORDER BY USERID
OFFSET (@pg_num - 1) * @pg_size ROWS
FETCH NEXT @pg_size ROWS ONLY;
I find this simpler, because you don't have to use a CTE/derived table.
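Since the OP asked for a stored procedure that returns the next 100 rows on each run, the OFFSET/FETCH query can be wrapped in a procedure that tracks its own page counter. A hedged sketch, assuming the [User] table from the DDL above; the names dbo.PagingState and dbo.GetNextUserBatch are illustrative:

```sql
-- One-time setup: a single-row table holding the next page to serve.
CREATE TABLE dbo.PagingState (NextPage INT NOT NULL);
INSERT INTO dbo.PagingState (NextPage) VALUES (1);
GO

CREATE PROCEDURE dbo.GetNextUserBatch
    @pg_size INT = 100
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @pg_num INT;

    -- Read and advance the page counter in one atomic statement.
    UPDATE dbo.PagingState
    SET @pg_num  = NextPage,
        NextPage = NextPage + 1;

    SELECT [USERLOGIN], [USERFIRSTNAME], [USERLASTNAME],
           [ACTIVETODATE], [ACTIVATED], [DEACTIVATED], [USERID]
    FROM [User]
    ORDER BY USERID
    OFFSET (@pg_num - 1) * @pg_size ROWS
    FETCH NEXT @pg_size ROWS ONLY;
END
```

Each EXEC dbo.GetNextUserBatch then returns the next 100-row batch; note this depends on the data not changing between calls, per the caveat above.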
Drew
J. Drew Allen
Business Intelligence Analyst
Philadelphia, PA