Does it help if I make the tables smaller?

  • Recently, we've been experiencing system slowness. We've already done query optimization, statistics updates, defragmentation, and run the tuning advisor. I'm thinking of cleaning up the tables by deleting the unnecessary records (about 50% of them are logically deleted). I have dozens of tables holding more than 10 million rows each. For various reasons, we're not deleting rows physically; instead, we delete them logically by flagging them with a deleted status. Is it worth doing?
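    One pattern that sometimes helps with this kind of soft-delete scheme, short of physically deleting anything, is a filtered index covering only the live rows. This is a sketch with hypothetical table and column names (`dbo.Orders`, `IsDeleted`), and it requires SQL Server 2008 or later:

```sql
-- Hypothetical table and column names; adjust to your schema.
-- A filtered index contains only the rows matching its WHERE clause,
-- so queries that filter on IsDeleted = 0 read a much smaller index
-- even though the logically deleted rows are still physically present.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId_Live
ON dbo.Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount)
WHERE IsDeleted = 0;
```

    For the index to be used, queries must include the matching predicate (`WHERE IsDeleted = 0`) so the optimizer can prove the filtered index covers the result.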

  • Maybe, but if your indexes are well-designed that shouldn't make that much of a difference.

    Can you grab one or two of the worst performing queries and post them here? Please post table definitions, index definitions and execution plan, as per http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

    http://www.simple-talk.com/sql/performance/finding-the-causes-of-poor-performance-in-sql-server,-part-1/

    http://www.simple-talk.com/sql/performance/finding-the-causes-of-poor-performance-in-sql-server,-part-2/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Tend to agree with Gail: good indexes mean that 10 million rows shouldn't matter.

  • Table fragmentation can occur when modifying data with INSERT, UPDATE, or DELETE statements, which over time cause gaps in each page. If a query search is based on a table scan or partial table scan, then it will create overhead for the SQL Server process with additional page reads, leading to high CPU activity and unresponsiveness. It will degrade the performance; normal queries will take longer to display results.

    Using the dynamic management function sys.dm_db_index_physical_stats will help you detect fragmentation in a specific index or all indexes on a table or indexed view. In the case of partitioned indexes, this DMV will also provide fragmentation information for each partition.
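    A typical fragmentation check with that DMV looks something like the following (the `page_count` threshold is an arbitrary cut-off to skip tiny indexes, where fragmentation percentages are meaningless):

```sql
-- Check fragmentation for all indexes in the current database.
-- 'LIMITED' mode scans only the non-leaf pages and is the cheapest option.
SELECT OBJECT_NAME(ips.object_id)        AS table_name,
       i.name                            AS index_name,
       ips.index_type_desc,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id  = ips.index_id
WHERE ips.page_count > 100               -- ignore very small indexes
ORDER BY ips.avg_fragmentation_in_percent DESC;
```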

    A Few Tips on Avoiding Fragmentation:

    * When a database is created, make sure the data files are created with, or grown to, the largest size feasible. Plan for a value that can hold the expected amount of data over a certain period (say, at least three years).

    * Sometimes it is feasible to permit the data files to grow automatically while keeping a limit on the growth by specifying a maximum data file growth size that leaves some available space on the hard disk.

    * After a period of time, ascertain and re-evaluate the expected maximum database size by adding more files or filegroups to the database, if required.

    * Do not let the data files grow automatically if many data files share the same disk partition. If the data files are heavily used, locate them in a different filegroup or on a different partition.

    * Perform regular database maintenance tasks, such as DBCC DBREINDEX, and recompiling stored procedures and triggers.

    * If the table's rows are modified or deleted frequently, run UPDATE STATISTICS on the table periodically; this helps the optimizer avoid slow execution plans based on stale statistics.
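    Note that DBCC DBREINDEX is deprecated; ALTER INDEX is its replacement. A commonly cited guideline (only a rule of thumb) is to reorganize at roughly 5-30% fragmentation and rebuild above 30%. A sketch with hypothetical index and table names:

```sql
-- REORGANIZE is always an online operation and can be interrupted safely.
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;  -- light fragmentation

-- REBUILD recreates the index from scratch (ONLINE = ON needs Enterprise Edition).
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;     -- heavy fragmentation

-- Refresh statistics after large data changes; REBUILD already updates
-- the rebuilt index's statistics, but not column statistics.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN;
```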

  • Our system is somewhat over-developed, and we're looking for a quick and easy fix. I can't say with confidence that my indexes are well designed. I have another question on the tuning advisor. I know that I can't simply create all the recommended indexes and have to be careful. What about the multi-column statistics it recommends? Are there any side effects to having too many statistics?

  • chulheekim (10/15/2010)


    We're looking for a quick and easy fix.

    Sorry, no such thing in performance tuning. Seriously, find the worst offenders (5 at max), fix them. Often that will result in an improvement far beyond expected.

    Or get a specialist in for a week to do their best. If you don't have time/skills in-house that may be a very good investment.

    I have another question on the tuning advisor. I know that I can't simply create all the recommended indexes and have to be careful. What about the multi-column statistics it recommends? Are there any side effects to having too many statistics?

    Not likely to be of much, if any, use. Personally I wouldn't bother.

  • upstart (10/15/2010)


    Table fragmentation can occur when modifying data ...

    If you're going to copy someone else's work into a forum post, at least have the courtesy to credit them. That chunk was lifted entirely from http://www.sql-server-performance.com/articles/per/detect_fragmentation_sql2000_sql2005_p1.aspx or one of the sites that has copied from there.

  • Looking at this from a different viewpoint, there are other advantages to removing that redundant data.

    All maintenance tasks will run quicker: CHECKDB, reindexing, statistics updates. Backups will also run quicker and, perhaps more importantly, use less space. There could be cost savings there.

    You need to analyse why the data is only marked as deleted. Perhaps it is still queried but is known to be static, or is required for legal reasons. You could move all this data into another, archive, database and mark it read-only. If data is being marked deletable on an ongoing basis, those rows could be moved to the archive database by a batch process so as not to interrupt the current application logic.

    Archiving is one of those things that is always left to last and in most cases never gets incorporated into the final design.

    To stick my neck out and disagree slightly on the performance ramifications: if your current indexing strategy is not optimal and you get a lot of scans rather than seeks, then I would expect improvements from archiving. It certainly would not hurt. This does not excuse you from getting the indexing right, though, and looking at your current worst queries as advised.
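    The batch-move approach described above could be sketched roughly as follows. Table names, the flag column, and the batch size are all hypothetical; the archive table is assumed to exist in the same database with a matching column list (a cross-database move would need an extra step, since OUTPUT INTO targets a local table):

```sql
-- Move logically deleted rows to an archive table in small batches,
-- keeping each transaction short so the live table stays responsive.
DECLARE @batch INT = 5000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@batch) src
    OUTPUT deleted.*             -- deleted rows are captured and inserted
    INTO dbo.Orders_Archive      -- into the archive in the same statement
    FROM dbo.Orders AS src
    WHERE src.IsDeleted = 1;

    IF @@ROWCOUNT = 0 BREAK;     -- nothing left to move
END;
```

    Running this during a quiet period, with a short delay between batches if needed, avoids long blocking and excessive transaction log growth.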


  • Thank you, guys. Your comments gave me some new perspectives. I guess I have to fix it the hard way: identify the troublesome queries and optimize them. Thank you again.

  • Will you be doing any physical archiving as well?


  • No, I don't think I'll go with archiving. Instead, I have good news: after I fixed a couple of SQL jobs that were running slowly, performance is a lot better now. It turned out to be a lack of indexes. I will definitely look into the other SQL jobs. Thank you, guys.
