Banking database design / Help Required

  • The design of the database is as below.

    Usually a Delete SP does a delete, an Update SP does an update, and an Insert SP does an insert. But since this is a banking environment, there is no scope for deletes and all data has to be kept. So in our environment, when the Delete SP or the Update SP is called, it internally does an insert into the same table, so that the history of the transactions is maintained. What is happening is that over time the number of records in the table keeps growing, whether the user chooses to delete or update, and the resulting Delete or Update is taking longer and longer. So now the users are complaining that the system is slow.
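
    To make it concrete, the current Delete SP looks roughly like the sketch below. The table, column and procedure names are just placeholders, not our real schema.

    -- Simplified picture of the table and of what the Delete SP does today:
    -- nothing is ever removed; a new "deleted" version of the row is inserted
    -- into the same table. All names are made up for illustration.
    CREATE TABLE dbo.AccountTransaction
    (
        RowID         INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        TransactionID INT           NOT NULL,
        AccountID     INT           NOT NULL,
        Amount        DECIMAL(18,2) NOT NULL,
        RowStatus     CHAR(1)       NOT NULL DEFAULT 'A',  -- 'A' = active, 'D' = deleted
        ModifiedDate  DATETIME      NOT NULL DEFAULT GETDATE()
    );
    GO

    CREATE PROCEDURE dbo.usp_DeleteTransaction_Current
        @TransactionID INT
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Instead of deleting, insert another copy of the row flagged 'D'
        INSERT INTO dbo.AccountTransaction
                (TransactionID, AccountID, Amount, RowStatus, ModifiedDate)
        SELECT   TransactionID, AccountID, Amount, 'D', GETDATE()
        FROM     dbo.AccountTransaction
        WHERE    TransactionID = @TransactionID
          AND    RowStatus = 'A';   -- the current version of the row
    END;
    GO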

    What is the best way to tackle this scenario?

    When the Delete SP is called, would it be better to run an UPDATE statement internally, setting a flag on that particular row to mark it as deleted,

    and

    again running an UPDATE statement internally in the SP, setting the flag to 'updated' for that particular row?

    Or is there a better way? Is it better to have a history table separate from the main table, where all the updates are maintained? Even then, two SQL statements have to run when I do a Delete: one row inserted into the history table recording the change, and one statement against the main table marking that row as deleted (not deleting it, just flagging it as deleted).
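
    Something like the sketch below is what I have in mind for the history-table option, using the same made-up AccountTransaction table as in the sketch above. Again, all names are placeholders.

    -- A separate history table keeps every change; the Delete SP then only
    -- flags the main-table row instead of inserting another copy of it.
    CREATE TABLE dbo.AccountTransactionHistory
    (
        HistoryID     INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        TransactionID INT           NOT NULL,
        AccountID     INT           NOT NULL,
        Amount        DECIMAL(18,2) NOT NULL,
        RowStatus     CHAR(1)       NOT NULL,
        ModifiedDate  DATETIME      NOT NULL
    );
    GO

    CREATE PROCEDURE dbo.usp_DeleteTransaction_Proposed
        @TransactionID INT
    AS
    BEGIN
        SET NOCOUNT ON;

        BEGIN TRANSACTION;

        -- 1) record the change in the history table
        INSERT INTO dbo.AccountTransactionHistory
                (TransactionID, AccountID, Amount, RowStatus, ModifiedDate)
        SELECT   TransactionID, AccountID, Amount, 'D', GETDATE()
        FROM     dbo.AccountTransaction
        WHERE    TransactionID = @TransactionID
          AND    RowStatus = 'A';

        -- 2) mark the main-table row as deleted (no insert into the main table)
        UPDATE  dbo.AccountTransaction
        SET     RowStatus    = 'D',
                ModifiedDate = GETDATE()
        WHERE   TransactionID = @TransactionID
          AND   RowStatus = 'A';

        COMMIT TRANSACTION;
    END;
    GO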

    Any good ideas? Please help.

  • I'm running a system that has millions of rows from about eight years of data. It's based on inserts only. It runs fine. The issue is not the amount of data you're storing, it's the way the data is stored and retrieved. What do your execution plans look like? Are you getting scans or other operations that degrade as more data gets added to the system? If you're not seeing seeks, you probably need to reexamine the clustered index on your tables and the queries running against those tables.
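
    For example, something along these lines will show you whether the existing indexes are being seeked or scanned (the table and procedure names are placeholders for yours):

    -- Seek vs. scan counts per index since the last instance restart
    SELECT  OBJECT_NAME(ius.object_id) AS table_name,
            i.name                     AS index_name,
            ius.user_seeks,
            ius.user_scans,
            ius.user_lookups,
            ius.user_updates
    FROM    sys.dm_db_index_usage_stats AS ius
    JOIN    sys.indexes                 AS i
            ON  i.object_id = ius.object_id
            AND i.index_id  = ius.index_id
    WHERE   ius.database_id = DB_ID()
      AND   ius.object_id   = OBJECT_ID('dbo.AccountTransaction')
    ORDER BY ius.user_scans DESC;

    -- Or capture the plan and I/O for one troubled call and look for
    -- Table Scan / Clustered Index Scan operators
    SET STATISTICS IO ON;
    EXEC dbo.usp_DeleteTransaction @TransactionID = 12345;   -- your SP and parameter here
    SET STATISTICS IO OFF;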

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • msforumpost (5/31/2010): What is happening is that over time the number of records in the table keeps growing, whether the user chooses to delete or update, and the resulting Delete or Update is taking longer and longer.

    This looks like a typical scalability issue.

    A limiting case would be a target table with no indexes at all: with only a few rows everything looks fantastic, but as volume increases the performance issues get incrementally worse.

    Start by checking your indexing strategy... is there any index that would allow for a unique read?

    Once you have identified the index you expect to be used... trace the troubled queries and check what is actually going on.
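
    For example (the table and column names are placeholders for whatever your delete/update SPs actually filter on):

    -- See what indexes already exist on the table
    EXEC sp_helpindex 'dbo.AccountTransaction';

    -- If the SPs locate rows by TransactionID and status and nothing covers
    -- that, an index like this would allow the narrow read described above
    -- (make it UNIQUE only if the key really is unique in your data)
    CREATE NONCLUSTERED INDEX IX_AccountTransaction_TransactionID_RowStatus
        ON dbo.AccountTransaction (TransactionID, RowStatus);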

    _____________________________________
    Pablo (Paul) Berzukov

    Author of Understanding Database Administration available at Amazon and other bookstores.

    Disclaimer: Advice is provided to the best of my knowledge but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions on a test, non-production environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
