Database/Table partitioning strategy

  • We have an application that uses an MS SQL database. Most of our customers run MS SQL 2005; some of them still use MS SQL 2000.

    As far as I know, none of them has more than the Standard edition.

    The MDF files range from 1 to 6 GB, depending on the customer.

    Our biggest tables have about 300,000 records.

    Each year the database size increases by no more than 1 GB.

    Our general application response time is, I would say, acceptable. The weakest point is our reporting system: some big reports over several years' worth of data take up to 10 minutes to run.

    Now our company has decided to split the database somehow for better performance.

    What do you think, would that be beneficial for a database like this? I don't think so.

    If yes, which strategy could you suggest?

    The problem is that our records stay active for a long time, so it is not possible to do something like "move everything older than a year to an archive".

    Here is what I was thinking about:

    1) Partitioning - I have no experience with it, but from what I have read I think it is not useful for me (not available in the Standard edition, the performance gain on a database like this is unsure, and we have many joins in our SELECT statements)

    2) Our own application-driven system - each archivable table will have an archive version with the same structure, and the user will select the records to move to the archive. The application will move all related data. The user can also select archived records and move them back to the active tables.

    3) Each table will have a bit column "archive". It is the easiest way, but I am not sure about the performance benefit (the set of records in the table stays the same)

    Could you suggest some solution?

  • There are lots of options and, unfortunately, you are going to have to look carefully at your situation to choose one.

    Here are some things to think about.

    SQL 2000 and SQL 2005 handle partitioning differently (partitioned views versus actual table partitioning), and the edition of SQL also limits the options. So, to support this in your product you would need to handle the versions and editions of SQL differently.
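
    For what it's worth, local partitioned views do work outside the Enterprise edition. As a rough sketch (the table and column names below are invented for the example), a partitioned view is just a UNION ALL view over per-range member tables whose CHECK constraints let the optimizer skip members a query cannot touch:

        -- Sketch only: hypothetical per-year member tables.
        CREATE TABLE dbo.Orders2007 (
            OrderID    int      NOT NULL,
            OrderYear  smallint NOT NULL CHECK (OrderYear = 2007),
            CustomerID int      NOT NULL,
            CONSTRAINT PK_Orders2007 PRIMARY KEY (OrderID, OrderYear)
        );

        CREATE TABLE dbo.Orders2008 (
            OrderID    int      NOT NULL,
            OrderYear  smallint NOT NULL CHECK (OrderYear = 2008),
            CustomerID int      NOT NULL,
            CONSTRAINT PK_Orders2008 PRIMARY KEY (OrderID, OrderYear)
        );
        GO

        -- The view unions the members; thanks to the CHECK constraints the
        -- optimizer only reads the tables a given predicate can match.
        CREATE VIEW dbo.OrdersAll
        AS
            SELECT OrderID, OrderYear, CustomerID FROM dbo.Orders2007
            UNION ALL
            SELECT OrderID, OrderYear, CustomerID FROM dbo.Orders2008;
        GO

        SELECT * FROM dbo.OrdersAll WHERE OrderYear = 2008;  -- touches only Orders2008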

    An application-handled solution is going to be the most database-independent, but it could be a lot of work to implement. This solution can be a real headache if you use identity columns, as you can create key conflicts.
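
    For example (a rough sketch with invented table names), the move to an archive table can keep the original identity values with SET IDENTITY_INSERT, so that rows moved back later do not collide with identities generated in the meantime:

        -- Sketch only: dbo.Invoice has an identity key, dbo.InvoiceArchive is
        -- a structurally identical copy; the user picks the rows to archive.
        DECLARE @SelectedInvoices TABLE (InvoiceID int PRIMARY KEY);
        INSERT INTO @SelectedInvoices (InvoiceID) VALUES (1001);

        BEGIN TRANSACTION;

        -- Preserve the original key values in the archive table.
        SET IDENTITY_INSERT dbo.InvoiceArchive ON;

        INSERT INTO dbo.InvoiceArchive (InvoiceID, CustomerID, InvoiceDate, Amount)
        SELECT InvoiceID, CustomerID, InvoiceDate, Amount
        FROM   dbo.Invoice
        WHERE  InvoiceID IN (SELECT InvoiceID FROM @SelectedInvoices);

        SET IDENTITY_INSERT dbo.InvoiceArchive OFF;

        DELETE FROM dbo.Invoice
        WHERE  InvoiceID IN (SELECT InvoiceID FROM @SelectedInvoices);

        COMMIT TRANSACTION;

    Moving records back is the same pattern in the other direction, with IDENTITY_INSERT switched on for the active table instead.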

    I am not sure what you are trying to achieve with a bit field - other than possibly adding it to all of your indexes and queries. That will probably be a lot of work for very little benefit.
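
    To make that concrete (again only a sketch, with an invented table name), the flag ends up in every index you want to stay useful and in every query you already have:

        -- Sketch only: add the flag with a default of "active".
        ALTER TABLE dbo.Contract
            ADD Archive bit NOT NULL CONSTRAINT DF_Contract_Archive DEFAULT (0);

        -- Indexes that should serve "active only" queries have to carry the flag...
        CREATE INDEX IX_Contract_Archive_CustomerID
            ON dbo.Contract (Archive, CustomerID);

        -- ...and every existing query has to be rewritten to filter on it.
        DECLARE @CustomerID int;
        SET @CustomerID = 42;

        SELECT ContractID, CustomerID
        FROM   dbo.Contract
        WHERE  Archive = 0
          AND  CustomerID = @CustomerID;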

    On an additional note, a 6 GB SQL database is pretty small and I would expect performance to be excellent in most cases. I would look toward application optimization - run a trace against the database and use the index tuning wizard and you will probably find some gems pretty quickly. If you have reports taking 10 minutes, have someone performance tune the reports and you may find you have no problems.

    Your current growth rate is way below the growth rate of hardware - your customers should be replacing hardware twice before their database size has doubled (new hardware every three years is pretty reasonable), so I would expect the application never to get slow simply because the hardware will outpace it.
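
    As one quick place to start on SQL 2005 (a sketch, and only an alternative to the trace plus tuning-wizard route), the missing-index DMVs will often surface the same gems without any setup:

        -- Rough ranking of index suggestions collected by SQL 2005 since the
        -- last service restart; treat the numbers as hints, not as gospel.
        SELECT TOP 20
               d.statement          AS table_name,
               d.equality_columns,
               d.inequality_columns,
               d.included_columns,
               s.user_seeks,
               s.avg_total_user_cost * s.avg_user_impact * s.user_seeks AS rough_benefit
        FROM   sys.dm_db_missing_index_group_stats AS s
        JOIN   sys.dm_db_missing_index_groups      AS g ON g.index_group_handle = s.group_handle
        JOIN   sys.dm_db_missing_index_details     AS d ON d.index_handle = g.index_handle
        ORDER BY rough_benefit DESC;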

  • Thank you very much for your opinion.

    The reporting system is our weakest point, but we are working on improving it. It was much worse several months ago, and there is still more to do.

    I would say partitioning is not relevant for us at the moment.
