Data Contention - How do I solve it ?

  • Folks,

    Our application is experiencing a lot of data contention. The app is basically a financial system with a Visual FoxPro desktop app, an ASP.NET 2.0 web portal and a SQL Server 2000 backend.

    My main problem is that when we enter an order via the VFP app while my "Web Orders Update" job is running, it causes serious slowness for the user in the VFP app. I can't figure out which setting to use so that the VFP app can quickly insert an order without waiting for the job to finish its update.

    I am trying to add lock hints in various places too (e.g. UPDATE Orders WITH (ROWLOCK)) as well as READ UNCOMMITTED, but so far to no avail.

    Any advice would be greatly appreciated.

    Sincerely,

    Bob

    New York

  • To put it another way, if my "web order update" job is executing Updates, how can I prevent the front end app from hanging up when the user inserts new orders ?

  • When you say "VFP app can quickly Insert an order", where is the order inserted into?

    If it's SQL Server, you could look into inserting the order asynchronously - see "How to: Use SQL Pass-Through Asynchronously" in the VFP help.

    If you want a fire and forget approach, creating a vfp exe just to Insert the order and running it with the RUN /N command may be useful.

    The universal thread at  http://www.universalthread.com is among the best VFP forums.  You might try posting this question there.


    You need to understand transactions and isolation levels and how they interact; then you'll be able to make moves to improve performance. I suspect you have blocking processes.
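    As a minimal illustration of blocking under the default READ COMMITTED isolation level (the Orders table and its columns here are hypothetical):

    ```sql
    -- Session 1: an open transaction holds exclusive row locks until it
    -- commits or rolls back
    BEGIN TRANSACTION;
    UPDATE Orders SET Status = 'P' WHERE OrderID = 42;
    -- ...no COMMIT yet...

    -- Session 2: under READ COMMITTED this SELECT needs a shared lock on
    -- the same row, so it blocks until session 1 finishes
    SELECT Status FROM Orders WHERE OrderID = 42;
    ```

    The longer session 1's transaction stays open, the longer every reader of that row waits - which is why short transactions matter.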

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

  • "You need to understand transactions and isolation levels and how this all works"

    Yes, that's true. In fact I do have a basic understanding of transactions and isolation levels (e.g. dirty reads), which is why I'm posting here. I was hoping for more specific advice in this area.

    First, the order is inserted into a SQL Server 2000 database. We are using SQLCONNECT() and SQLEXEC() in most cases; in many cases we use the VFP 8.0 CursorAdapter class for queries.

    We have a SQL stored procedure that uses dynamic SQL to insert a record. I basically pass the table name, column list and values clause (from VFP) to this procedure.

    One of the problems is how we generate new order numbers. We use the SQL Server "identity" type for generating primary keys, but unfortunately we carried some legacy code over to our new version.

    When I get a ticket number, I was attempting to lock my custom "Order Number" table in order to get the next unique order number. I was selecting from the table with a lock hint just after executing SET TRANSACTION ISOLATION LEVEL SERIALIZABLE, so that no one else could read the same order number. This was part of my locking problem, since my other update process was also running as a SQL job.
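    One alternative that avoids SERIALIZABLE entirely is to combine the increment and the read into a single UPDATE statement, which is atomic on its own (a sketch only; the OrderNumbers table and NextNumber column are hypothetical stand-ins for the custom table):

    ```sql
    DECLARE @NextOrder int;

    -- SQL 2000 syntax: assign the incremented value to the variable in the
    -- same statement. The UPDATE takes its own exclusive lock, so no
    -- explicit transaction, isolation-level change, or table lock is needed.
    UPDATE OrderNumbers
    SET    @NextOrder = NextNumber = NextNumber + 1;

    SELECT @NextOrder AS NewOrderNumber;
    ```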

    I will go ahead and try SQLSETPROP(gnHandle, 'Asynchronous', .T.) and see how that helps me.

    Thank you,

    Bob

    I'd advise you to remove all hints, then test and monitor with the "out of the box" setup before analysing how things may be improved. The biggest issue for locking is table scans (and index scans): these generally take shared table locks (page locks for indexes, generally), which block writers.

    Serialisable is an excellent mode for one user!

    [font="Comic Sans MS"]The GrumpyOldDBA[/font]
    www.grumpyolddba.co.uk
    http://sqlblogcasts.com/blogs/grumpyolddba/

    I have started to redesign my logic for getting a new order number so as not to set the mode to Serializable.

    In terms of table/index scans, how can I tell whether scans are causing table locks? Do I run SQL Profiler for that?

    thanks,

    bob

    You'll have to keep a watch on sp_lock and see if you get table locks. In general, though, if you have a lot of inserts and updates to a table, then any table scan of that table is very bad.
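    On SQL 2000 you can also spot blocking directly from sysprocesses, for example:

    ```sql
    -- Sessions that are currently blocked, and the spid blocking them
    SELECT spid, blocked, waitresource, cmd
    FROM   master..sysprocesses
    WHERE  blocked <> 0;

    -- Full lock list; look for TAB (table-level) locks on your orders table
    EXEC sp_lock;
    ```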

    It sounds like your database is mainly for OLTP. If so consider the following

    Check your indexes, make sure they're optimal and used (execution plan)

    Check your fragmentation (DBCC SHOWCONTIG).

    Check the execution plan of all your important queries and tune as necessary.

    Ensure that there are no long-running reports running during the times you're trying to do lots of updates

    Keep your transactions as short as possible.
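    A quick sketch of the fragmentation check and the usual follow-up, using the ORDER_FX table mentioned elsewhere in the thread:

    ```sql
    -- Fragmentation report for one table
    DBCC SHOWCONTIG ('ORDER_FX');

    -- Rebuild the table's indexes (SQL 2000 command). Note: on a heap with
    -- no clustered index this only rebuilds nonclustered indexes, so it
    -- won't remove fragmentation in the table data itself.
    DBCC DBREINDEX ('ORDER_FX');
    ```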

    HTH

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Thank you Gail.

    DBCC SHOWCONTIG on our main orders table shows:

    DBCC SHOWCONTIG scanning 'ORDER_FX' table...
    Table: 'ORDER_FX' (1504177300); index ID: 0, database ID: 7
    TABLE level scan performed.
    - Pages Scanned................................: 22
    - Extents Scanned..............................: 6
    - Extent Switches..............................: 5
    - Avg. Pages per Extent........................: 3.7
    - Scan Density [Best Count:Actual Count].......: 50.00% [3:6]
    - Extent Scan Fragmentation ...................: 66.67%
    - Avg. Bytes Free per Page.....................: 1926.2
    - Avg. Page Density (full).....................: 76.20%

    I viewed the execution plan of my "watch dog" process, which updates customer orders. A couple of the table scans are showing Cost: 33%.

    We know that our watch dog process is causing some locks (possibly table locks), because while it runs in the background, our inserts/updates from the front end seriously hang up until the watch dog is done updating. This is the crux of our problem.

    Thank you,

    Bob

  • Sounds like your watchdog process is the problem then.

    I notice that your orders table doesn't have a clustered index (index ID: 0 in the SHOWCONTIG output means it's a heap). Adding one may help out, especially with the reads, if it's on an appropriate column.
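    For example (a sketch only - the OrderID column is a hypothetical choice; pick a column that suits your access pattern):

    ```sql
    -- Clustering on an ever-increasing key means new orders are appended
    -- at the end of the table, and lookups can seek instead of scanning
    CREATE CLUSTERED INDEX IX_ORDER_FX_OrderID
        ON ORDER_FX (OrderID);
    ```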

    A 'couple of table scans' is a good indication of either poor query design or poor indexing.

    Can you post the table design (of the tables that are getting scanned), including the indexes, and the watchdog query? (And the execution plan if possible.)

    I can probably help you with the blocking, but without the query and indexes it's difficult at best.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
