SQL Server Paging

  • Hi, I need some help with SQL Server paging. I have developed a search page using ASP.NET and a SQL Server database, and the table the search queries has 500,000 records, so the search is very slow. I have used the logic from this site:

    http://rosca.net/writing%5Carticles%5Cserverside_paging.asp

    My problem goes like this

    ----------------------------

    The scenario

    --------------

    Table

    ------

    CREATE TABLE [dbo].[tblQry] (
        [id] [numeric](18, 0) IDENTITY (1, 1) NOT NULL ,
        [Name] [varchar] (250) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
        [IcNumber] [varchar] (250) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
    ) ON [PRIMARY]
    GO

    ALTER TABLE [dbo].[tblQry] WITH NOCHECK ADD
        CONSTRAINT [PK_tblQry] PRIMARY KEY CLUSTERED ( [id] ) ON [PRIMARY]
    GO

    CREATE INDEX [IX_tblQry] ON [dbo].[tblQry]([id]) ON [PRIMARY]
    GO

    Stored Proc

    -----------

    CREATE proc test as
    declare @i numeric
    set @i = 1
    while @i <= 500000
    begin
        insert into tblQry (Name, IcNumber)
        values ('Name' + convert(varchar, @i), 'ICnumber' + convert(varchar, @i))
        set @i = @i + 1
    end
    GO

    -- I used this proc to insert the test records into the table

    Query

    -----

    select top 10 * from tblQry
    where id not in
        (select top 490000 id from tblQry
         where Name like '%name%' or IcNumber like '%name%' or id like '%name%'
         order by Name)
    order by Name

    The query takes about 10 to 15 seconds to retrieve the result set from the 500,000-row table.

    We also have a table with a few million rows; how will the response time be on that?

    ----------------------------

    Do you have any suggestions? Any help would be appreciated, thanks.

    regards,

    sathya

  • Not the greatest article in the world. Very inefficient algorithm. Any time you see NOT IN used in a WHERE clause, there is cause for concern. A better idea is to keep track of the paging variables you need in the ASP.NET page and pass the stored proc the values to use as a filter against the clustered index on the IDENTITY field.

    --

    For instance, you know that you will need X records per page, call it @RecordsPerPage. You also know that the first time the query is run, it will return @RecordsPerPage records that meet the conditions supplied by the searching user. In returning the data, the procedure will return the ID values of all the records in the TOP @RecordsPerPage resultset. The maximum and minimum values of the ID field become your "bookmarks". You should use these bookmarks to define the second call to your query. So, given the search criteria supplied, and an ASC sort order in the searching procedure, say you have a minimum ID value of 34009 and a maximum ID value of 43889. If you have 20 records per page, that means that the query found 20 records matching the given search criteria between IDs 34009 and 43889 inclusively. So, the next call to your procedure would supply the previous maximum ID value as the minimum bookmark:

    
    
    SELECT TOP 20 FieldList
    FROM MyTable
    WHERE ID > 43889
    AND ... rest of search criteria

    In the ASP.NET page, keep track of the maximum ID value returned, and use it in the call to the next page...
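
    To make that concrete, here is a rough sketch of what such a procedure could look like against the tblQry table from the first post. It only illustrates the bookmark idea, not anyone's actual code; the proc name, parameter names, and the LIKE filter are assumptions:

    -- Rough sketch only: keyset ("bookmark") paging on the clustered IDENTITY column.
    -- Proc and parameter names are made up; @LastID is the highest id returned by the
    -- previous page (pass 0 for the first page). TOP can't take a variable on SQL
    -- Server 2000, so SET ROWCOUNT limits the page size instead.
    CREATE PROC GetNextPage
        @LastID numeric(18, 0),
        @RecordsPerPage int,
        @Search varchar(250)
    AS
    SET ROWCOUNT @RecordsPerPage

    SELECT id, Name, IcNumber
    FROM tblQry
    WHERE id > @LastID
      AND (Name LIKE '%' + @Search + '%' OR IcNumber LIKE '%' + @Search + '%')
    ORDER BY id

    SET ROWCOUNT 0   -- reset so later statements are not capped
    GO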

    Hope this is decently expressed. There are certainly other methods of doing this kind of thing, and I'm sure others will respond,

    Jay

  • Hi Jay,

    I welcome your comments,

    thanks for your reply.

    I know it is not an efficient algorithm; I just wanted to see whether there is something better that will help me.

    I have actually been using the logic you describe for the past 2 years, but

    I have the following problem with it...

    For example, if the records per page is 10 and there are 100 records in the table, I will get 10 pages with 10 records each.

    So the problem is: what happens if a user deletes a record or adds records?

    Currently this method only takes care of the previous and the current page. For example, if I am on the third page and a record is deleted from the second page, then when I go back to the first page I will get only 9 records. Won't that be a bit of a mess?

    This is the problem I am having at the moment, so I am looking for some other logic that will give me the up-to-date records with this kind of paging...

    Do you have any ideas?

    Your comments are welcome...

    sathya

  • quote:


    ...For example, if the records per page is 10 and there are 100 records in the table, I will get 10 pages with 10 records each.

    ...


    Sathya,

    Don't supply a top value in the query, and you shouldn't run into this problem. For instance, say you have, as you suggest, 100 records in pages of ten. You want to get the first ten on the first page, and issue a call to get these:

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    ORDER BY IDField

    This will return the first ten records, no matter what. If it returns IDs 1-10 at first and you then delete record 9, the next call to it will return IDs 1-8 and 10-11. You should keep track of the minimum and maximum ID values returned.

    --

    So, to get from this first page to the next page, (after deleting record #9), you would issue a call similar to:

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    WHERE ID > 11
    ORDER BY IDField

    This would presumably return IDs 12-21. Say you go to the next page of records (22-31) and then decide to go back a page; you would issue a call similar to:

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    WHERE IDField < 22
    ORDER BY IDField DESC

    This will return the ten IDs immediately preceding the lower bookmark you passed (22), which presumably would be 12-21. If someone had deleted record #17 in the past few seconds, the query would instead show IDs 11-16 and 18-21.

    --

    Now, assume someone deletes the first two ID values (1 and 2), and let's say you wanted to go back another page. Since you have been taking care to track which page of data you are requesting, you would know that you are requesting the first page again. The first page is the exception to the rule, and you would issue another call to our first procedure:

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    ORDER BY IDField ASC

    This would return us 10 IDs: 3-8 and 10-13. If you wanted to go forward a page, you would pass the number 13 to the next query, like so:

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    WHERE IDField > 13
    ORDER BY IDField ASC

    And you get the next 10 IDs, and so on, and so on.

    --

    Hope this gives you an idea of what I was talking about before. The trick is to keep track of the low and high bookmarks, order by ASC or DESC as appropriate, and treat the first page differently. You will run into occasional situations where so many records have been deleted that a page does not return enough records, but you can remedy this easily by checking the number of records returned in the ASP.NET page and re-running the query with different criteria if it is not what you expect.
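
    As a sketch of that last point, the "short page" check could be done in T-SQL as well as in the ASP.NET page; something like this against tblQry (the variable names and the sample bookmark value are assumptions, not code from this thread):

    DECLARE @HighBookmark numeric(18, 0), @Returned int
    SET @HighBookmark = 21   -- high bookmark remembered from the page just shown (value assumed)

    SELECT TOP 10 id, Name, IcNumber
    FROM tblQry
    WHERE id > @HighBookmark
    ORDER BY id ASC

    SET @Returned = @@ROWCOUNT

    IF @Returned < 10
    BEGIN
        -- Too many rows past the bookmark have been deleted; fall back to a plain
        -- "last page" query (rows come back descending and get re-sorted for display).
        SELECT TOP 10 id, Name, IcNumber
        FROM tblQry
        ORDER BY id DESC
    END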

    --

    HTH,

    Jay

  • One more thing to remember... when you use TOP # ... ORDER BY, the system must produce the entire result set anyway. Performance improves significantly when you can use an index hint rather than the ORDER BY:

    SELECT TOP 10 * FROM <table> (INDEX (sortingIndex))
    WHERE <criteria>

    OR just continue with the idea Jay is proposing. If there aren't many holes in the sequence, you could

    SELECT * FROM <table>
    WHERE ID BETWEEN first AND first + count

    Obviously - this is inconsistent if there are gaps in the ID being used.
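
    For example, against the tblQry table from the first post, those two ideas might look roughly like this (the bookmark value 43889 is just reused from Jay's earlier example, and PK_tblQry is the clustered primary key index from the DDL above):

    -- Hinted version of the "next page" query; relies on the hinted index returning
    -- rows in id order instead of using ORDER BY.
    SELECT TOP 10 id, Name, IcNumber
    FROM tblQry (INDEX (PK_tblQry))
    WHERE id > 43889

    -- BETWEEN version; note BETWEEN is inclusive at both ends, so "first + count"
    -- would return count + 1 rows. Only works while there are no gaps in id.
    SELECT id, Name, IcNumber
    FROM tblQry
    WHERE id BETWEEN 43890 AND 43890 + 9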

    Guarddata-

  • quote:


    ...

    SELECT TOP 10 * FROM <table> (INDEX (sortingIndex))
    WHERE <criteria>

    ...


    Wonderful suggestion, guarddata! The only problem is, you'd have to define two indexes on the same column, one in ASC and one in DESC order, to fill the requirements. Still, it could be useful.
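
    For what it's worth, that pair of indexes might look something like this on the original table, using Name as the sort column from the first post's query (the index names are made up, and it is worth testing whether the optimizer really needs both):

    -- One ascending and one descending index on the same sort column (names assumed)
    CREATE INDEX IX_tblQry_Name_ASC ON dbo.tblQry (Name ASC)
    CREATE INDEX IX_tblQry_Name_DESC ON dbo.tblQry (Name DESC)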

    quote:


    OR just continue with the idea Jay is proposing. If there aren't many holes in the sequence, you could

    
    
    SELECT * FROM <table>
    WHERE ID BETWEEN first AND first + count

    Obviously - this is inconsistent if there are gaps in the ID being used.


    Right. My whole point was to avoid using BETWEEN because of the problems that sathya was talking about with deleted values.

  • Hi there,

    Ever wondered how this task is accomplished in this forum software?

    Obviously it has some kind of paging for long threads or member lists.

    The source code can be downloaded and evaluated. If I read it right, all rows are retrieved at startup and pushed into an array, and the page is fed with data from the array.

    Hm, seems to work

    Cheers,

    Frank

    Time to go home now!!

    I forgot to mention, the whole thing isn't done at the server but at the client. Might be a problem with large result sets.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Mind still not in gear this morning... JPipes - why do you say two indexes would be needed?

    We have even had good success by using ROWCOUNT:

    SET ROWCOUNT 10
    SELECT * FROM <table> (INDEX(MyIndex))
    WHERE ID > @StartID

    But I haven't tested this one as much. I don't know if it continues in the background or not.
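
    One thing worth remembering with this approach: ROWCOUNT stays in effect for the connection until it is reset, so the pattern would normally be wrapped something like this (just a sketch against tblQry; the bookmark value and the hinted index are assumptions):

    DECLARE @StartID numeric(18, 0)
    SET @StartID = 43889   -- bookmark from the previous page (value assumed)

    SET ROWCOUNT 10

    SELECT id, Name, IcNumber
    FROM tblQry (INDEX(PK_tblQry))
    WHERE id > @StartID

    SET ROWCOUNT 0   -- reset, or every later statement on this connection is capped at 10 rows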

    Guarddata-

  • Because, going to a previous page, you would take the low bookmark and search backwards:

    (Assuming 22 is the low bookmark)

    
    
    SELECT TOP 10 IDField, OtherField
    FROM Table
    WHERE IDField < 22
    ORDER BY IDField DESC

    This would give you the previous ten records from the low bookmark.
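
    One small follow-up: those rows come back in descending order, so they normally need to be flipped back to ascending before display, either in the page or with an outer sort, something like this (sketch against tblQry, reusing the example bookmark of 22):

    SELECT *
    FROM (
        SELECT TOP 10 id, Name, IcNumber
        FROM tblQry
        WHERE id < 22
        ORDER BY id DESC
    ) AS PrevPage
    ORDER BY id ASC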

  • quote:


    ...

    all rows are retrieved at startup, pushed into an array, the page is fed with data from the array.

    ...


    Research has shown that barely 2% of people receiving paged data actually bother to click on the next page of data in a resultset. I tend to lean on the side of: why bother getting all the data if they're rarely going to look for it. There's nothing wrong with just getting the data you need, and nothing more, right?

  • quote:


    Research has shown that barely 2% of people receiving paged data actually bother to click on the next page of data in a resultset. I tend to lean on the side of: why bother getting all the data if they're rarely going to look for it. There's nothing wrong with just getting the data you need, and nothing more, right?


    Yes, I think I have also read somewhere that only a few people tend to look at Google results beyond page 2 or 3.

    So, assuming page splits of 20 rows, I'll suggest a SELECT TOP 80 ...

    ...but what if they do?

    Msgbox "Sorry, our program does not fit a user like you. Try somewhere else" ???

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • quote:


    Msgbox "Sorry, our program does not fit a user like you. Try somewhere else" ???


    Hmm, I like the sound of that, Frank!

  • Jay,

    quote:


    Hmm, I like the sound of that, Frank!


    9:00 in the morning here???

    Couldn't get to sleep?

    Cheers,

    Frank

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
