Turning off auto update of statistics

  • I still disagree with this; that's not what I got from TheSQLGuru's comments:

    2. Turn auto update off on certain indexes with ALTER INDEX ... SET (STATISTICS_NORECOMPUTE = ON) and update manually as needed

    As I read his comments below, unless you KNOW that the auto-stats update fired, and thus flushed some pages, and that this is causing an issue, the rule is to leave auto update stats on always and create a job that updates specific tables more frequently, right? Turning off the stats is for specific, high-volume situations that are suffering from an identified problem, right? Not a general rule, I think (a T-SQL sketch of both approaches follows this post).

    Here's his comment from page 1 of the thread:

    TheSQLGuru (11/30/2010)


    1) Have you looked into asynchronous stats updates?

    2) 60GB is WAY too high for max memory on a 64GB box. Have you monitored for paging? I would try maybe 55GB for max.

    3) Why set min so high too? That won't make SQL Server use more memory or use memory more efficiently.

    4) auto stats firing (or updating them manually) WILL flush pages from the buffer pool, which can lead to IO contention and stalls. This can be undesirable in a high-volume, low-latency type environment. This can be a reason to disable autoupdate on some tables in some systems. But you best find an appropriate window for manual updates or you are gonna be in trouble. And for many systems once a week simply will not cut it.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and an INSERT INTO... statement for that table to give the volunteers here representative data. With your description of the problem, we can provide a tested, verifiable solution to your question. Asking the question the right way gets you a tested answer the fastest way possible!
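    TheSQLGuru's point 1 (asynchronous stats updates) and the earlier suggestion to use NORECOMPUTE on specific indexes both map to short commands. A minimal T-SQL sketch, assuming hypothetical names YourDb, dbo.BigTable, and IX_BigTable_Status (none of these are from the thread):

        -- Asynchronous auto-update (point 1): queries keep using the existing
        -- statistics while the refresh runs in the background, instead of
        -- waiting on a synchronous update. This is a database-wide setting.
        ALTER DATABASE YourDb SET AUTO_UPDATE_STATISTICS_ASYNC ON;

        -- Per-index opt-out (the NORECOMPUTE suggestion): stop auto update on
        -- this index's statistics only, leaving the database setting alone.
        ALTER INDEX IX_BigTable_Status ON dbo.BigTable
            SET (STATISTICS_NORECOMPUTE = ON);

        -- Then refresh manually on your own schedule. NORECOMPUTE here keeps
        -- auto update off; a plain UPDATE STATISTICS would switch it back on.
        UPDATE STATISTICS dbo.BigTable IX_BigTable_Status
            WITH FULLSCAN, NORECOMPUTE;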

  • I need to do some testing on one of our dev boxes. But the question is: for a large, very frequently updated table, are you better off with statistics created with FULLSCAN on the weekend, or with statistics updated more recently at a low sampling rate?

    We started down this path quite a while ago, when we had auto update on, async on, and just ran sp_updatestats on the weekend (a quick staleness check is sketched after this post).

    The execution plans were missing the number of rows involved by a mile.
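    When plans misestimate row counts that badly, the first thing to confirm is how old the statistics are and how much of the table was actually sampled. A sketch using standard commands, again with dbo.BigTable and IX_BigTable_Status as placeholder names:

        -- When was each statistic on the table last updated?
        SELECT s.name,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('dbo.BigTable');

        -- The header output shows Rows vs. Rows Sampled for one statistic,
        -- i.e. the effective sampling rate of the last update.
        DBCC SHOW_STATISTICS ('dbo.BigTable', 'IX_BigTable_Status')
            WITH STAT_HEADER;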

  • Indianrock (12/2/2010)


    The execution plans were missing the number of rows involved by a mile.

    Exactly.

    I've got one table, for example, that I update every two hours during business hours, and a couple of others that get updated once per day. Once a week, or once when changed rows hit 20%, is probably not often enough (see the sketch after this post).

    Lowell


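    Lowell's schedule above maps naturally onto SQL Agent job steps: a cheap sampled update every two hours for the hot table, heavier full scans daily or weekly for the slower-moving ones. A sketch with placeholder table names:

        -- Every-two-hours job step for the hot table: sampled, so it's cheap
        -- enough to run during business hours.
        UPDATE STATISTICS dbo.HotTable WITH SAMPLE 25 PERCENT;

        -- Daily job step for the slower-moving tables: full scan, off-peak.
        UPDATE STATISTICS dbo.DailyTable1 WITH FULLSCAN;
        UPDATE STATISTICS dbo.DailyTable2 WITH FULLSCAN;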

  • Lowell (12/2/2010)


    Indianrock (12/2/2010)


    The execution plans were missing the number of rows involved by a mile.

    Exactly.

    I've got one table, for example, that I update every two hours during business hours, and a couple of others that get updated once per day. Once a week, or once when changed rows hit 20%, is probably not often enough.

    DEFINITELY an important issue there - TWO of them actually!

    On frequently updated tables the scenario is often that the newly inserted data is data that will be queried soon, yet the statistics know nothing about it because nowhere near 20% of the table's rows have been modified yet. In this case you could get better plans with a manually fired, non-full-scan stats update that runs more frequently yet hits the server less hard than a full scan would.

    The flip side of this is if you DO have queries that hit throughout the range of data AND that data is not evenly distributed. In that case a low-percentage-scan stats update could well lead to bad plans for many queries. Obviously if you have this scenario you might also want to look into ways of avoiding parameter sniffing, such as statement-level OPTION (RECOMPILE) or other means (sketched after this post).

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service
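    A minimal sketch of the statement-level OPTION (RECOMPILE) idea, inside a hypothetical procedure (the table, column, and parameter names are illustrative only):

        CREATE PROCEDURE dbo.GetOrdersForCustomer
            @CustomerId int
        AS
        BEGIN
            -- RECOMPILE applies to this statement only: the optimizer builds
            -- a fresh plan per execution using the actual @CustomerId value,
            -- instead of reusing a plan sniffed for an unrepresentative one.
            SELECT OrderId, OrderDate, Amount
            FROM dbo.Orders
            WHERE CustomerId = @CustomerId
            OPTION (RECOMPILE);
        END;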

  • I need to do more research on the whole "parameter-sniffing" subject. For the most part our application generates its SQL through an object layer. I am able to view most of it with either Quest Performance Analysis or Profiler.
