Query never finishing

  • So I have an audit table we load about 250K records into each night. This morning I went to look at the Reporting Server (2008 SP1, 64-bit) report for the records from over the weekend. By default this report pulls up yesterday's records, and of course, yesterday being Sunday, there were none. The default report came up empty just fine, indicating to me that the stored procedure finished without a problem.

    Now, I changed the date range to Friday through today. The report never came back; it just sat there spinning its wheel. OK, maybe it's a lot of data. I went to run it in ApexSQL Edit: same thing, it just sits there. Running it in Apex with Sunday through today came right back. Hmmm, OK, interesting, but it could still be too much data, so just for fun I decided to try it in SQL Server Management Studio. BANG, back in under 1 second. I even ran it in SSMS while Reporting Server was still trying to build the report, and it still worked just fine.

    Now, the only thing I can think of at this point is that Apex and Reporting Server are not using the right index. How that can be is beyond me, since the command I ran in SSMS was copied and pasted from Apex. But I decided to rebuild the indexes on the underlying table, and now it works fine in all 3 places. Heck, I just ran the report in RS for the last 2 weeks and it produced a 57-page report in under 10 seconds.

    I'll grant that the index was severely fragmented, but how the procedure can run fine in one tool and not in the other 2 is beyond me. Anyone have any good ideas? It's relatively easy to fix (rebuild the index after the data load), but I would like to know why this happens.
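
    For reference, a quick way to see how badly fragmented the index actually gets after the load (the table name below is just a stand-in) is sys.dm_db_index_physical_stats:

    -- Check fragmentation on the audit table's indexes after the nightly load.
    -- 'dbo.AuditTable' is a placeholder name, not the real table.
    SELECT i.name AS index_name,
           ps.avg_fragmentation_in_percent,
           ps.page_count
    FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.AuditTable'), NULL, NULL, 'LIMITED') AS ps
    JOIN sys.indexes AS i
        ON i.object_id = ps.object_id
       AND i.index_id = ps.index_id
    ORDER BY ps.avg_fragmentation_in_percent DESC;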

  • My first inclination would be to make this query a stored procedure so that all three tools have the best chance of using the same query plan. This should improve the use of the index.

    As for fragmentation, you have that one taken care of with the rebuild.

    😀

  • It is, see end of first paragraph.

  • Each client tool has its own connection and maybe even different connection settings. For example, different values for ANSI_WARNINGS, ANSI_NULLS, QUOTED_IDENTIFIER, etc. can result in different execution plans. Some settings, like COLLATION, may even prevent the use of an index.

    The following article goes into detail:

    Batch Compilation, Recompilation, and Plan Caching Issues in SQL Server 2005

    http://technet.microsoft.com/en-us/library/cc966425.aspx
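
    As a quick check (just a sketch), you can compare the SET options each connected client is actually running with by querying sys.dm_exec_sessions:

    -- List the session-level SET options in effect for each client connection.
    -- Differences in these (ARITHABORT in particular) between SSMS, Apex, and
    -- Reporting Services can lead to different cached plans for the same query.
    SELECT session_id,
           program_name,
           ansi_warnings,
           ansi_nulls,
           ansi_padding,
           arithabort,
           quoted_identifier,
           concat_null_yields_null
    FROM sys.dm_exec_sessions
    WHERE is_user_process = 1;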

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Without knowing your topology, I would now be curious what the difference is between the ApexSQL and RS servers and the machine you run SSMS on. Is your desktop local to the SQL Server? Are the ApexSQL and RS servers working through a WAN connection? This info could be helpful...

    A table of 250k rows or just a little over that should not be too bad, unless it's loaded with columns.

    😀

  • I agree about the ANSI settings; they can result in different execution plans for the same procedure. But you might also be seeing parameter sniffing. Are all the apps running the proc with the same parameters?
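
    If sniffing does turn out to be the culprit, one common workaround (purely a sketch, with made-up object and column names) is to recompile on each call so the plan fits the actual date range:

    -- Hypothetical reporting procedure: OPTION (RECOMPILE) forces a fresh plan
    -- for the actual @StartDate/@EndDate values, trading a compile on each call
    -- for a plan that matches the current date range instead of a sniffed one.
    CREATE PROCEDURE dbo.AuditChangeReport
        @StartDate datetime,
        @EndDate   datetime
    AS
    BEGIN
        SELECT RecordID, SaveDate, ChangedColumn, OldValue, NewValue
        FROM dbo.AuditTable
        WHERE SaveDate >= @StartDate
          AND SaveDate < @EndDate
        OPTION (RECOMPILE);
    END;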

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant and Eric,

    I would bet you are right about ANSI settings. Didn't even think of that!

    Not sure how much control I would have over that with Reporting Server, though. Because of the data load I had planned on rebuilding that index every night anyhow, since it becomes more or less perfectly fragmented with each load 😉 It's just that the system is not 100% in production yet.
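
    Something along these lines as the last step of the nightly load job should do it (the table name is a placeholder):

    -- Rebuild the audit table's indexes right after the nightly import so the
    -- report runs against a defragmented index with freshly sampled statistics.
    ALTER INDEX ALL ON dbo.AuditTable REBUILD;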

    The table is a bit of a pain. It's a load of every save in a Cobol system, which can't track changes itself, so we intercepted the save to the ISAM file and wrote it to a sequential file that is then read into SQL. That file has about 300K records every 24 hours, but when I save to SQL I filter them to only those saves that had a change in any of the 180 non-key fields, so as not to store more than needed. The reporting procedure then filters further to the 8 columns they are interested in seeing changes for; I keep the other columns in case they change their mind.

  • A possibly useful query to further investigate the query plan use counts, if this is suspect:

    SELECT usecounts,
           cacheobjtype,
           objtype,
           [text]
    FROM sys.dm_exec_cached_plans P
    CROSS APPLY sys.dm_exec_sql_text(plan_handle)
    WHERE cacheobjtype = 'Compiled Plan'
      AND [text] NOT LIKE '%dm_exec_cached_plans%'
      AND [text] LIKE '%YourTableNameHere%'

    😀

  • coolwebs (9/13/2010)


    A possibly useful query to further investigate the query plan use counts, if this is suspect:

    SELECT usecounts,
           cacheobjtype,
           objtype,
           [text]
    FROM sys.dm_exec_cached_plans P
    CROSS APPLY sys.dm_exec_sql_text(plan_handle)
    WHERE cacheobjtype = 'Compiled Plan'
      AND [text] NOT LIKE '%dm_exec_cached_plans%'
      AND [text] LIKE '%YourTableNameHere%'

    Since the goal is to see if there are differences in the plans, I'd also hit sys.dm_exec_query_plan(plan_handle). But before you do that you might want to qualify the column names.
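
    Something like this, a rough sketch with the columns qualified and the plan XML pulled in:

    -- Same idea as above, with the DMV/DMF columns qualified and the cached
    -- plan XML added via sys.dm_exec_query_plan for side-by-side comparison.
    SELECT p.usecounts,
           p.cacheobjtype,
           p.objtype,
           t.[text],
           qp.query_plan
    FROM sys.dm_exec_cached_plans AS p
    CROSS APPLY sys.dm_exec_sql_text(p.plan_handle) AS t
    CROSS APPLY sys.dm_exec_query_plan(p.plan_handle) AS qp
    WHERE p.cacheobjtype = 'Compiled Plan'
      AND t.[text] NOT LIKE '%dm_exec_cached_plans%'
      AND t.[text] LIKE '%YourTableNameHere%';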

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Anders Pedersen (9/13/2010)


    Grant and Eric,

    I would bet you are right about ANSI settings. Didn't even think of that!

    ...

    ...

    A few years back I was trying to debug a situation similar to what you have here: a procedure call would do a table scan and chug away for a couple of minutes when called from the application, but the same procedure call with identical parameters would return in under 10 seconds from Query Analyzer (this was back in the SQL Server 2000 days).

    I wrote the query below against [syscacheobjects]; it decodes the [setopts] column to determine which SET options were in effect for each cached SQL execution. You can filter on the [SQL] column to narrow it down to whatever procedure or select statement you're interested in. The [syscacheobjects] view is obsolete in SQL Server 2005/2008, and maybe someone can supply a more updated version of this query, but it still works on the 2005 dev instance I'm sitting in front of at the moment.

    Also, the @@OPTIONS function returns the SET option bitmap for the current session.

    select distinct
        BucketID,
        SQL,
        UseCounts,
        ObjType,
        case when setopts & 1 > 0 then 1 else 0 end as 'ANSI_PADDING',
        case when setopts & 2 > 0 then 1 else 0 end as 'max degree of parallelism',
        case when setopts & 4 > 0 then 1 else 0 end as 'FORCEPLAN',
        case when setopts & 8 > 0 then 1 else 0 end as 'CONCAT_NULL_YIELDS_NULL',
        case when setopts & 16 > 0 then 1 else 0 end as 'ANSI_WARNINGS',
        case when setopts & 32 > 0 then 1 else 0 end as 'ANSI_NULLS',
        case when setopts & 64 > 0 then 1 else 0 end as 'QUOTED_IDENTIFIER',
        case when setopts & 128 > 0 then 1 else 0 end as 'ANSI_NULL_DFLT_ON',
        case when setopts & 256 > 0 then 1 else 0 end as 'ANSI_NULL_DFLT_OFF',
        case when setopts & 512 > 0 then 1 else 0 end as 'NO_BROWSETABLE',
        case when setopts & 1024 > 0 then 1 else 0 end as 'TriggerOneRow',
        case when setopts & 2048 > 0 then 1 else 0 end as 'ResyncQuery',
        case when setopts & 4096 > 0 then 1 else 0 end as 'ARITHABORT',
        case when setopts & 8192 > 0 then 1 else 0 end as 'NUMERIC_ROUNDABORT'
    from
        master.dbo.syscacheobjects
    where
        SQL like '%from dbo.mytable%';
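
    For 2005/2008, a rough equivalent (just a sketch) reads the set_options bitmap each plan was compiled under from sys.dm_exec_plan_attributes:

    -- Updated sketch: sys.dm_exec_plan_attributes exposes a 'set_options'
    -- bitmap for each cached plan, so two plans for the same statement text
    -- with different values point straight at differing SET options.
    SELECT st.[text],
           cp.usecounts,
           cp.objtype,
           CAST(pa.value AS int) AS set_options
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
    WHERE pa.attribute = 'set_options'
      AND st.[text] LIKE '%from dbo.mytable%';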

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • It sounds to me like the reporting server tried to do it all in memory and simply went into swap-file mode. I could be wrong, but I'd look into that possibility as well. I've seen it happen a lot.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
