How to speed up the search

  • We have a database design with a single database for the insurance data. I have split the data into calendar tables, meaning each month a new policy table is incepted; the number of records in one such table is about 5 lakh records.

    So over the course of six months I will have to search all these tables to check the uniqueness of policies. Q1: How do I tune my query to search quickly in each such table? The search criteria will be 23 characters in two specific columns.

    Q2: Is the month/year table option a bad way of designing? If yes, why?

    What are the other alternatives for such huge growth? I cannot separate the data by product etc.; it is combined policy data...

    Thanks in anticipation,
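    To make Q1 concrete, here is a sketch of the kind of index the question is asking about. The table and column names (`PolicyJan2005`, `PolicyNumber`, `HolderReference`) are assumptions for illustration, since the original post does not give a schema; a composite index over the two 23-character search columns turns the uniqueness check into an index seek instead of a table scan:

    ```sql
    -- Hypothetical monthly table (~5 lakh rows). Assumed column names.
    -- Composite index on the two search columns: the duplicate check
    -- becomes a seek on this index rather than a scan of the table.
    CREATE INDEX IX_Policy_SearchKeys
        ON PolicyJan2005 (PolicyNumber, HolderReference);

    -- Uniqueness probe against one monthly table:
    SELECT 1
    FROM PolicyJan2005
    WHERE PolicyNumber    = @PolicyNumber
      AND HolderReference = @HolderReference;
    ```

    With the table-per-month design, this probe still has to be repeated against each monthly table (e.g. via UNION ALL), which is part of why the replies below argue against the design.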

  • I also work in the insurance business, and the best thing you can do is to properly design your schema right from the start. One single table should work just fine here. Your approach violates several fundamental principles of relational databases.

    Anyway, see if this helps http://www.sommarskog.se/dyn-search.html

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/
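    A minimal sketch of the single-table approach this reply recommends (all names and the extra columns are assumptions, not from the original post). With one table, a UNIQUE constraint lets SQL Server enforce policy uniqueness itself, so no cross-table searching is needed at all:

    ```sql
    -- Assumed schema: one table for all months instead of one per month.
    CREATE TABLE Policy (
        PolicyID        int IDENTITY PRIMARY KEY,
        PolicyNumber    char(23) NOT NULL,   -- the two 23-character
        HolderReference char(23) NOT NULL,   -- search columns
        InceptionDate   datetime NOT NULL,
        -- Uniqueness is enforced by the server on every insert:
        CONSTRAINT UQ_Policy_SearchKeys
            UNIQUE (PolicyNumber, HolderReference)
    );
    ```

    The unique constraint is backed by an index, so it also serves the fast-lookup requirement from Q1.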

  • Perhaps it's just me being silly, but... how many are 5 lakh ...?

    /Kenneth

  • I might be wrong, but I think this is an Indian measure.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • 1 lakh = 100,000

    * Noel

  • Ah, okies - tnx

    So, that means 500k rows for each month, totalling 6 million rows for a full year. That alone is, imo, not any reason at all to split up the tables by month - for any decent (or even half-decent) SQL Server, a table with 6 million rows is a breeze to handle.

    The number of rows in this case is not an issue in itself. If there is some kind of performance trouble, then other factors are involved - indexing, access patterns, bad code doing the access, etc...

    just my .02 though

    /Kenneth
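    A quick way to check which of those factors is biting (the query itself is a placeholder; substitute your real search) is to look at the I/O and the plan SQL Server produces:

    ```sql
    -- Show logical reads and the execution plan for the search.
    -- A table scan over 500k rows here points at a missing index;
    -- an index seek means the row count is not the problem.
    SET STATISTICS IO ON;
    SET SHOWPLAN_TEXT OFF;  -- or ON, to see the plan without running

    SELECT 1
    FROM Policy             -- assumed table name
    WHERE PolicyNumber    = @PolicyNumber
      AND HolderReference = @HolderReference;

    SET STATISTICS IO OFF;
    ```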

  • I'll add my .02 as well (soon we'll have that 1). I fully agree with Frank and Kenneth. With a correctly designed table and a clustered index, the number of rows is more or less a non-issue in this case. In fact, IMHO partitioning is not really worth the pain unless you have really huge tables (making the index traversals take a long time), special blocking issues (for instance, only the latest data is modified and older data is only ever queried), or a need to improve backups. Of course, SQL Server 2005 will add 'semi-automatic' partitioning, which will remove most of the pain, so then it might be a different story.
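    For reference, the SQL Server 2005 'semi-automatic' partitioning mentioned above looks roughly like this - one partition per month inside a single table, instead of one table per month (all object and column names here are assumptions for illustration):

    ```sql
    -- Partition function: boundary per month for the first half of 2005.
    CREATE PARTITION FUNCTION pfPolicyMonth (datetime)
        AS RANGE RIGHT FOR VALUES
        ('2005-01-01', '2005-02-01', '2005-03-01',
         '2005-04-01', '2005-05-01', '2005-06-01');

    -- Partition scheme: map every partition to the PRIMARY filegroup
    -- (production systems would typically spread these over filegroups).
    CREATE PARTITION SCHEME psPolicyMonth
        AS PARTITION pfPolicyMonth ALL TO ([PRIMARY]);

    -- One logical table; the server routes rows by InceptionDate.
    CREATE TABLE Policy (
        PolicyNumber    char(23) NOT NULL,
        HolderReference char(23) NOT NULL,
        InceptionDate   datetime NOT NULL
    ) ON psPolicyMonth (InceptionDate);
    ```

    Queries then hit the single `Policy` table, and the optimizer eliminates partitions that cannot contain matching rows.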
