Help me avoid IO delays!!!

  • Hi,

    We developed an application that stores points for members. Our parent application calls our application, which must deliver results within 2 seconds. Our application mainly uses 2 stored procedures; when I run those stored procedures on their own, they take about 10ms. However, when I put load on the application (>100 transactions per second), it responds in under 2 seconds for about 5 minutes, and then suddenly there is a spike that pushes response time to 30 seconds. We found that the spike occurs while database write activity is going on. Here are our table sizes:

    Member table: 100,000 records

    Transaction table: 8,000,000 records (growing by about 500,000 transactions per day)

    Can you tell me, is there any way to use memory to buffer the insertions and write them to the database later? What is the best strategy for us to follow?

    PS: I have already split the member and transaction tables into multiple tables based on member name.
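
    For illustration, this is roughly the kind of deferred-write pattern I have in mind: fast inserts go into a small staging table, and a background job later flushes them into the main transaction table in a batch. The table names and values below are made up; it is only a sketch of the idea, not something we have implemented.

      -- Sketch only -- the table names and values are made up.
      -- 1) During the day, each call writes only to a small, lightly indexed staging table.
      --    Inside the real insert procedure these values would come from the caller's parameters:
      DECLARE @MemberID int, @Points int
      SELECT  @MemberID = 12345, @Points = 10

      INSERT INTO dbo.TransactionStaging (MemberID, Points, CreatedAt)
      VALUES (@MemberID, @Points, GETDATE())

      -- 2) A scheduled job later moves the accumulated rows into the big table in one batch:
      BEGIN TRANSACTION

          INSERT INTO dbo.Transactions (MemberID, Points, CreatedAt)
          SELECT MemberID, Points, CreatedAt
          FROM   dbo.TransactionStaging WITH (TABLOCKX)

          DELETE FROM dbo.TransactionStaging

      COMMIT TRANSACTION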

  • Can you tell me, is there any way to use memory to buffer the insertions and write them to the database later? What is the best strategy for us to follow?

    If you are using .NET, you can take advantage of the disconnected recordset architecture of the DataAdapter, DataSets, etc.

    Do you have any indexes set on your tables with the correct fill factors? You can also use the query plan to see which part of your query is causing the problem.
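
    For example, something along these lines (the table, column, and procedure names are made up just to show the idea, and 80 is only an example fill factor):

      -- An index with free space left in the pages (80 is only an example value),
      -- so the constant inserts cause fewer page splits:
      CREATE NONCLUSTERED INDEX IX_Transactions_MemberID
          ON dbo.Transactions (MemberID, TransactionDate)
          WITH FILLFACTOR = 80
      GO

      -- To see the plan for one of your stored procedures (the proc name is made up):
      SET SHOWPLAN_TEXT ON
      GO
      EXEC dbo.usp_AddMemberPoints @MemberID = 12345, @Points = 10   -- plan is shown, proc is not executed
      GO
      SET SHOWPLAN_TEXT OFF
      GO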

    But since you said your app is being called by another application, have you investigated whether the problem is actually in your app and not in the other application, or in the coupling method between them? (You might have to give us more info.)


    Everything you can imagine is real.

  • I think we need more info here.  How do you know it is even I/O delays?  It could be due to prolonged blocking from long transactions or poorly written queries.  I would wait until the next time you are having a 'busy period', when your responses are taking 30 seconds, and run SQLDiag at that point.  This will give you the process/locking/waittype info you need to figure out why it is taking so long.  Check your indexes/query plan as bledu recommended, but this looks like a blocking issue since it only shows up under load.
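
    For example, a rough sketch of the kind of check I mean, using the sysprocesses view available on SQL 2000 and later, run while the slow responses are happening:

      -- Run this while the 30-second responses are happening:
      SELECT spid, blocked, lastwaittype, waittime, cmd, program_name
      FROM   master..sysprocesses
      WHERE  blocked <> 0
         OR  spid IN (SELECT blocked FROM master..sysprocesses WHERE blocked <> 0)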

    John Rowan

    ======================================================
    ======================================================
    Forum Etiquette: How to post data/code on a forum to get the best help - by Jeff Moden

  • Maybe it is too much to ask for an easy fix, but have you checked to make sure your database has enough room to accept all these inserts?

    Make sure the database has ample free space in both the data and log files.  Turn off auto-growth, particularly on SQL 2000: pre-allocate enough space to allow the data within the database to grow without needing an auto-grow during the day, and schedule any growth during maintenance windows/low-volume times.  Even better, allocate enough space in the database to handle x days/weeks/months' worth of anticipated growth.
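
    As a sketch, pre-allocating space looks something like this (the database and logical file names are made up; check your own with sp_helpfile, and pick sizes that fit your growth):

      -- Grow the files once, up front, instead of letting them auto-grow during the day.
      -- (MODIFY FILE changes one property at a time; the sizes must be larger than the current sizes.)
      ALTER DATABASE MemberPoints
          MODIFY FILE (NAME = MemberPoints_Data, SIZE = 20480MB)

      ALTER DATABASE MemberPoints
          MODIFY FILE (NAME = MemberPoints_Log, SIZE = 4096MB)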

    Also check whether transaction log or other backups are taking place in the background.
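
    A quick way to check for a backup running at the time of a spike (again just a sketch):

      SELECT spid, cmd, waittime, lastwaittype
      FROM   master..sysprocesses
      WHERE  cmd LIKE 'BACKUP%'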

    Joe

     

  • I would also look at your logical/physical I/O channels.  This may be a case where you need to have your OS on one physical channel and RAID drive, a separate channel for the data/mdf files, and a separate channel for the transaction log/ldf files.  A few I/O channels with properly striped drives for each should provide a performance boost by reducing disk contention (i.e. I/O waits for a write to the transaction log while another user needs to read data that is already committed).
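
    As a sketch only (the drive letters, sizes, and names here are hypothetical), separating the data and log files onto different channels at create time looks like this:

      -- Data file on one channel/drive set, log file on another:
      CREATE DATABASE MemberPoints
      ON PRIMARY
          ( NAME = MemberPoints_Data,
            FILENAME = 'E:\SQLData\MemberPoints_Data.mdf',
            SIZE = 20480MB )
      LOG ON
          ( NAME = MemberPoints_Log,
            FILENAME = 'F:\SQLLog\MemberPoints_Log.ldf',
            SIZE = 4096MB )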

    If a fast, cached SAN connected through dual fibre HBAs is already in place, then this may already be handled.  In that case the query execution plan needs to be evaluated.

     

    Clifton G. Collins III
