Stored Procedure performance using work tables

  • We have fairly large stored procedures that use what we call "work" tables. A "work" table is basically a permanent table whose rows get deleted prior to execution of the stored procedure. We do this mainly for troubleshooting, so that intermediate data is available for research.

    We have recently seen a couple of these stored procedures take a very long time to run (1 hour+). The places where they get stuck are UPDATE statements that join a couple of these work tables. For example, we have two tables with 400k rows and 16k rows that we join to do an update (sketched below). This occurs at several points in the stored procedure.
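    Just to illustrate the shape of the statement that stalls, a minimal hypothetical sketch (work_detail, work_summary, and the column names are made up, not our actual schema):

        -- ~400k-row work table updated from a join to a ~16k-row work table
        UPDATE w1
        SET    w1.status_code = w2.status_code
        FROM   work_detail  w1
        JOIN   work_summary w2
          ON   w1.order_id = w2.order_id;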

    I told my client to just run UPDATE STATISTICS and then sp_recompile on these work tables before starting the process (see the sketch below). The tables start out empty, but I have noticed that the statistics on those tables are out of date when I run DBCC SHOW_STATISTICS. That procedure fixes the performance issue.
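    For reference, this is roughly what I have them run before the procedure starts (the table and statistics names are placeholders):

        -- Refresh all statistics on the work tables...
        UPDATE STATISTICS work_detail;
        UPDATE STATISTICS work_summary;

        -- ...and mark any cached plans that touch them for recompilation.
        EXEC sp_recompile 'work_detail';
        EXEC sp_recompile 'work_summary';

        -- Check that the statistics are now current (the second
        -- argument is an index or statistics name on the table).
        DBCC SHOW_STATISTICS ('work_detail', ix_work_detail);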

    My client's DBA ran DBCC DBREINDEX instead of the procedure I provided, but that did not fix the problem. Can anyone provide some background as to why UPDATE STATISTICS would resolve the problem but DBCC DBREINDEX would not? Thanks.


  • DBCC DBREINDEX rebuilds the user-created indexes.

    If you ever look in the sysindexes table, you'll see a bunch of other entries with names beginning with _WA_Sys_.

    These are not true indexes but the statistics that SQL Server automatically creates and maintains. UPDATE STATISTICS refreshes these, whereas DBCC DBREINDEX doesn't, hence the difference. You can list them with the query below.
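    A minimal sketch of how to see them, assuming a work table named work_detail (this queries the SQL 2000-era sysindexes table mentioned above; the [_] brackets stop the underscores from acting as LIKE wildcards):

        -- Auto-created column statistics appear in sysindexes
        -- alongside the real indexes on the table.
        SELECT name, indid
        FROM   sysindexes
        WHERE  id = OBJECT_ID('work_detail')
          AND  name LIKE '[_]WA[_]Sys[_]%';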

  • So basically, when they run DBCC DBREINDEX, they should also run UPDATE STATISTICS (and sp_recompile), right?

    Since the data is cleared out each time, DBCC DBREINDEX is probably not necessary for these tables. It should be sufficient to run UPDATE STATISTICS and sp_recompile. The two commands touch different structures, as sketched below.
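    A sketch of the difference, using a placeholder table name:

        -- Rebuilds only the user-created indexes (and, as a side
        -- effect, the statistics attached to those indexes):
        DBCC DBREINDEX ('work_detail');

        -- Refreshes every statistic on the table, including the
        -- auto-created _WA_Sys_ column statistics:
        UPDATE STATISTICS work_detail;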

  • Or, the simple answer is not to DELETE your work tables, but to use TRUNCATE instead.
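    For example, with a placeholder table name:

        -- TRUNCATE deallocates the data pages and logs only the
        -- deallocations, so it is much cheaper than a fully
        -- logged, row-by-row DELETE:
        TRUNCATE TABLE work_detail;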

    cl

    Signature is NULL

