Performance degrades after significant insert/delete workload

  • Hello,

    I have several queues in my database.

    A queue is a table into which some actors write rows while others read and delete them.

    The data flow (both deletes and inserts) is about 1K-10K rows per minute, in batches of 100-1000 rows each.

    The row count in each queue is about 100K-1M.

    My issue is that performance degrades very quickly, depending on queue dynamics. Roughly, it degrades 10x after 10M rows have passed through the queue.

    If I save the queue to a temp table, truncate the queue table, and restore the rows, performance even returns to its initial level.

    How can I "fix" the queue table without truncating it?

    The server version is MS SQL 2008.

    One of the queue tables is:

    CREATE TABLE [dbo].[FY_ObjectScore](
        [ObjectID] [bigint] NULL,
        [SalesVolume] [bigint] NULL,
        [Score] [bigint] NULL,
        [RedPrice] [real] NULL,
        [Created] [datetime] NULL
    ) ON [PRIMARY]

    Anton Burtsev

  • Probably stale statistics. Try an UPDATE STATISTICS <table name> WITH FULLSCAN. If it works, schedule a job that runs the stats update against that table on a regular basis.
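
    For example, a minimal sketch against the table from the first post (checking STATS_DATE first shows how stale the statistics actually are):

    -- When were the statistics on this table last updated?
    SELECT s.name, STATS_DATE(s.object_id, s.stats_id) AS last_updated
    FROM sys.stats AS s
    WHERE s.object_id = OBJECT_ID('dbo.FY_ObjectScore');

    -- One-off rebuild of all statistics on the queue table from a full scan
    UPDATE STATISTICS dbo.FY_ObjectScore WITH FULLSCAN;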

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Do any of your tables have indexes?
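
    A quick way to check (a sketch; a heap shows up as a single row with type_desc = 'HEAP' and a NULL name):

    SELECT i.name, i.type_desc
    FROM sys.indexes AS i
    WHERE i.object_id = OBJECT_ID('dbo.FY_ObjectScore');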

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • I'm with Gail. It sounds like statistics to me.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks for your replies.

    But UPDATE STATISTICS did not help.

    The table has no indexes; I'll try a clustered index and report the results here.

    I use the following scripts to model the issue:

    1. Create the table:

    create table TheQueue(
        ID uniqueidentifier,
        data nvarchar(100),
        Created datetime default getdate());

    2. Run simultaneously for 6 hours:

    while 1=1 delete TheQueue where data like LEFT(NEWID(), 1) + '%'

    and

    while 1=1
        insert TheQueue(ID, data)
        select top 2000 NEWID(), NEWID() from sys.columns c1, sys.columns c2;

    3. Create a copy of the data and rebuild stats as follows:

    select * into TheQueueNew from TheQueue;
    update statistics TheQueue with fullscan;

    4. Measure the performance of:

    a) select COUNT(*) from TheQueue with(nolock) option(maxdop 1);
    b) select COUNT(*) from TheQueueNew with(nolock) option(maxdop 1);

    Results: a) ~4900 ms, b) ~700 ms.
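
    One way to see where the time goes is to compare the pages allocated to the two tables, since deletes from a heap do not necessarily deallocate the emptied pages and a full scan still reads them. A sketch:

    -- Compare allocated pages in the churned heap vs. the fresh copy
    select OBJECT_NAME(ps.object_id) as table_name, ps.index_type_desc,
           ps.page_count, ps.record_count, ps.avg_page_space_used_in_percent
    from sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TheQueue'), NULL, NULL, 'SAMPLED') as ps
    union all
    select OBJECT_NAME(ps.object_id), ps.index_type_desc,
           ps.page_count, ps.record_count, ps.avg_page_space_used_in_percent
    from sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.TheQueueNew'), NULL, NULL, 'SAMPLED') as ps;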

    Anton Burtsev

  • burtsev (12/18/2013)


    The table has no indexes;

    There's your problem right there. Identify the most common queries, index to support them, and choose a good clustered index.
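
    For a queue pattern like this, something along these lines is a reasonable starting point (a sketch; the key choice is an assumption, base it on what your reads and deletes actually filter on):

    -- An ever-increasing key suits insert-at-the-end, delete-from-the-front queues;
    -- building the clustered index also reclaims the empty pages the heap accumulated
    CREATE CLUSTERED INDEX CIX_FY_ObjectScore_Created
        ON dbo.FY_ObjectScore ([Created], [ObjectID]);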

    http://www.sqlservercentral.com/articles/Indexing/68439/

    http://www.sqlservercentral.com/articles/Indexing/68563/

    http://www.sqlservercentral.com/articles/Indexing/68636/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Yes, if I add a clustered index, the performance degradation disappears.
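
    The index was along these lines (the exact key choice here is illustrative):

    create clustered index CIX_TheQueue_Created on TheQueue (Created, ID);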

    Thanks.

    Anton Burtsev
