Archiving and Performance

  • Hi all, in my current project we are evaluating the implementation of an archiving capability.

    Can anyone shed some light on the extent to which table size affects the performance of queries, where those queries are all sufficiently covered by indexes?

    My assumption has been that if an index is used, then performance (all other things being equal) will degrade negligibly as table size increases, rather than linearly as some of my colleagues suspect.

    If anyone has any comments relating to table size vs performance, or perhaps even some figures that would be great.

    Cheers,

    Brendon.

  • Doesn't seem to be linear. Rather than trust what you hear, why not put together a simple test? Test with 1,000, 10k, 100k, 1m, 10m rows and run a query; there's a sketch of such a test below. Server hardware and load aside, it's typically incredibly fast.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
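
    A minimal sketch of the kind of test Andy describes, in T-SQL. The table name dbo.PerfTest, its columns, and the row count are placeholder assumptions, not anything from the thread:

    -- Hypothetical test table; rerun the whole script with 1k, 10k, 100k, 1m, 10m rows.
    CREATE TABLE dbo.PerfTest (
        id INT NOT NULL PRIMARY KEY,  -- clustered index provides the indexed access path
        payload CHAR(100) NOT NULL    -- filler to give the rows some realistic width
    );

    -- Row-by-row load: slow for the larger counts, but simple and version-safe.
    DECLARE @i INT;
    SET @i = 1;
    WHILE @i <= 1000000
    BEGIN
        INSERT INTO dbo.PerfTest (id, payload) VALUES (@i, 'x');
        SET @i = @i + 1;
    END;

    -- Time an indexed point lookup; compare elapsed time and logical reads across sizes.
    SET STATISTICS TIME ON;
    SET STATISTICS IO ON;
    SELECT id FROM dbo.PerfTest WHERE id = 123456;
    SET STATISTICS TIME OFF;
    SET STATISTICS IO OFF;

    If the point about indexes holds, the logical reads should barely move between the 1,000-row run and the 10-million-row run.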

  • It very much depends on the type of query.

    If you have a simple query such as:

    select column1
    from table
    where ...

    then performance will not suffer much even if the table grows very large. This is because of the way indexes are constructed: the index is a shallow tree whose depth grows logarithmically with row count, so it might take 2 index reads to reach the data page on a modest table, increasing to only 3 reads on a very much larger one (there's a quick way to check this depth below).

    However, if your query contains a join or a subquery, then you have a different situation. If you now bring back 100 times as many rows from one of the tables in the query, SQL Server has to perform 100 times as many lookups, which could cause a very big performance hit.

    I have some very large tables (50 million rows) and see no significant performance difference between queries against these tables and queries against much smaller tables (100,000 rows).

    As Andy said, try it and see.

    Jeremy
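
    To put a number on Jeremy's point about index reads, you can ask SQL Server how deep an index actually is; the depth grows roughly logarithmically with row count, which is why a single-row lookup on 50 million rows costs barely more than one on 100,000. The names dbo.BigTable and ix_BigTable_id below are placeholders for your own table and index:

    -- IndexDepth = number of levels from the index root down to the leaf,
    -- i.e. how many index reads a single-row lookup needs.
    SELECT INDEXPROPERTY(OBJECT_ID('dbo.BigTable'),
                         'ix_BigTable_id',
                         'IndexDepth') AS index_depth;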

  • Don't forget that fragmentation can increase query time significantly if the data is not residing in the memory cache, as disk IO operations would increase if the table(s) are fragmented; there's a sketch of such a check below.

    I usually keep my table fragmentation down to less than 5% with regular (daily) index defrag operations - this is even on 60GB of data.
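
    On SQL Server 2005 or later, a sketch of how such a check and defrag might look (dbo.MyTable and ix_MyTable are placeholder names, and the 5%/30% thresholds are the common rule of thumb):

    -- Average fragmentation for each index on the hypothetical table dbo.MyTable.
    SELECT i.name AS index_name, s.avg_fragmentation_in_percent
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyTable'),
                                        NULL, NULL, 'LIMITED') AS s
    JOIN sys.indexes AS i
      ON i.object_id = s.object_id AND i.index_id = s.index_id;

    -- Reorganize for moderate fragmentation; rebuild for heavy fragmentation.
    ALTER INDEX ix_MyTable ON dbo.MyTable REORGANIZE;   -- roughly 5-30% fragmented
    -- ALTER INDEX ix_MyTable ON dbo.MyTable REBUILD;   -- roughly over 30% fragmented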

  • And in addition, everything depends on your application.

    If you have a lot of range queries where the range is quite large, then you will see some performance degradation as the size increases, because the cost of a range scan grows with the number of matching rows (see the example below).

    But basically, follow the tip Andy gave you, and keep in mind that if your database size is increasing significantly, additional memory and/or processors will be helpful.

    Bye

    Gabor
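
    To illustrate the kind of range query Gabor means (dbo.Orders and its columns are placeholder names): even with an index on order_date, the cost of the query below is driven by how many rows fall inside the range, so a wide range over a huge table reads far more pages than the same range over a small one.

    -- Indexed range scan: cost grows with the number of matching rows,
    -- unlike a point lookup, whose cost follows only the index depth.
    SELECT order_id, order_date, amount
    FROM dbo.Orders
    WHERE order_date BETWEEN '2003-01-01' AND '2003-12-31';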




