November 29, 2010 at 6:44 pm
On large, frequently updated tables, evidence shows that auto update of stats uses a low sampling rate and often creates horrible execution plans. The choice is whether to turn auto update of stats off database-wide, or to turn it off on most indexes with a script using alter index ... set (statistics_norecompute = on). Some of our developers believe there is one large, frequently updated table where we want auto update to work, and I believe you can turn it on for individual indexes even if the database-level option is off.
The question is whether any index or statistics maintenance could possibly flip auto update back on. I think this would only be the alter index rebuild statement, and if you don't mention norecompute I don't see why it would change the current setting.
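For context, the per-index / per-table option I'm referring to looks something like this (the table and index names here are just placeholders, not our actual objects):

-- Stop auto update of stats for a single index (placeholder names)
ALTER INDEX IX_BigTable_Col1 ON dbo.BigTable
    SET (STATISTICS_NORECOMPUTE = ON);

-- Or flip auto stats for a whole table (or one statistic) via sp_autostats
EXEC sp_autostats 'dbo.BigTable', 'OFF';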
November 29, 2010 at 10:33 pm
Honestly, my suggestion would be to leave it on and then have jobs (daily, hourly, etc) that do an update stats with full scan.
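For illustration only, a job step along these lines would do it (the table name is a placeholder; adjust the schedule and scope to the system):

-- Scheduled job step: full-scan stats update on a volatile table (placeholder name)
UPDATE STATISTICS dbo.BigVolatileTable WITH FULLSCAN;

-- Lighter-weight alternative: sp_updatestats touches only stats that have changed,
-- but at the default sample rate rather than a full scan
EXEC sp_updatestats;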
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 30, 2010 at 5:47 am
GilaMonster (11/29/2010)
Honestly, my suggestion would be to leave it on and then have jobs (daily, hourly, etc) that do an update stats with full scan.
I tried to get approval for weeknight updates to stats and was turned down. I'll need more ammunition indicating it won't block normal processes. We're a mid-sized company and I've argued "we aren't Amazon" and our customers would understand a one-hour maintenance window on some or all weeknights.
But the question remains: can you turn off auto update of stats database-wide and activate it on specific tables/indexes? Or do you have to leave it on database-wide and set norecompute=on for the tables you don't want auto updated?
November 30, 2010 at 5:59 am
Well, if you have auto update stats = ON currently, the update of statistics is running during the day (or whenever the threshold is reached). I'm not sure you need a full window of downtime, just perhaps a low point in activity.
Personally, I would leave the auto update ON and target VERY specific indexes that require a different SAMPLE ratio.
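Something along these lines for the handful of indexes that need it (names and sample rate are placeholders):

-- Update one statistic on one table with a higher sample rate (placeholder names/values)
UPDATE STATISTICS dbo.OrderDetail IX_OrderDetail_CustomerId WITH SAMPLE 50 PERCENT;

-- Or go to a full scan for the worst offenders
UPDATE STATISTICS dbo.OrderDetail IX_OrderDetail_CustomerId WITH FULLSCAN;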
November 30, 2010 at 7:03 am
Indianrock (11/30/2010)
But the question remains: can you turn off auto update of stats database-wide and activate it on specific tables/indexes? Or do you have to leave it on database-wide and set norecompute=on for the tables you don't want auto updated?
Offhand I'm not sure, but if you do disable autostats, you MUST have manual stats updates running for the tables that you disabled auto stats on.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 30, 2010 at 7:14 am
I worked in a shop where someone turned off auto update statistics, and with only 20 users hitting it through an application, the database ground to a halt in less than a day due to out-of-date statistics.
I would go with Gail's suggestion: leave auto updates on, and create some jobs to update statistics on critical tables more frequently.
Turning them off is a very bad idea, IMHO. There's no significant performance impact from leaving them on.
Lowell
November 30, 2010 at 7:40 am
This is piling on, but I agree with Gail & Lowell. I've had the situation where certain tables (badly designed tables, I might add) needed stats updated hourly, with a full scan no less, or performance went into the toilet. Do that in addition to the auto stats, not instead of them.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 30, 2010 at 6:44 pm
I'm convinced on the stats. I'm pretty sure the weekend maintenance is keeping the stats and indexes in reasonably good shape. Today I noticed that the plan cache, which was running at a 99%-plus hit rate and about 6GB yesterday, fell off a cliff around 11:15pm Monday evening. We suspect some large DML jobs that run about then caused memory pressure and a plan flush. The cache really didn't recover much, and early this morning it had a fairly low hit rate and low contents. At 8am the CPU pretty much pegged, so the decision was made to reset the memory settings back to max=55GB and min=50GB. This improved things almost immediately, but the question is whether the improvement came from the cache flush caused by the reconfig rather than from eased memory pressure on the OS.
We're running Windows Server 2008 R2 64-bit with SQL 2005 Enterprise 64-bit. We bumped up from 32GB of memory to 64GB on October 31st. At that time the max memory was set at 60GB and the minimum at 50GB, and things seemed stable. In a (possibly foolish) attempt to get SQL to use more memory, we bumped the minimum up to about 58GB last week. Looking back with our analysis tools, we've seen a few plan cache flushes not caused by config changes like memory settings.
At the moment we only have about an 85% hit rate and 70MB in the plan cache, so it really hasn't grown as I would have liked and expected.
November 30, 2010 at 10:39 pm
1) Have you looked into asynchronous stats updates? (There's a sketch of the database option after this list.)
2) 60GB is WAY too high for max memory on a 64GB box. Have you monitored for paging? I would try maybe 55GB for max.
3) Why set min so high too? That won't make SQL Server use more memory or use memory more efficiently.
4) Auto stats firing (or updating them manually) WILL flush pages from the buffer pool, which can lead to IO contention and stalls. That can be undesirable in a high-volume, low-latency type of environment, and it can be a reason to disable auto update on some tables in some systems. But you'd best find an appropriate window for manual updates or you are gonna be in trouble. And for many systems, once a week simply will not cut it.
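On point 1, asynchronous stats updates are a database-level option; a minimal sketch, assuming a database called YourDB:

-- Keep auto update stats ON, but let queries keep using the old statistics
-- while the update runs in the background (requires AUTO_UPDATE_STATISTICS ON)
ALTER DATABASE YourDB SET AUTO_UPDATE_STATISTICS_ASYNC ON;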
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 1, 2010 at 5:30 am
Thanks Kevin. I believe we have the auto update of stats happening asynchronously (that feature is turned on). When I say cache flush, I'm referring to those times when it pretty much empties itself, not just a plan here or there being flushed.
How exactly would you look at paging to determine if our memory settings are too high? We have Quest Spotlight and Quest Performance Analysis for SQL Server, which help a great deal, but I've noticed that DMV queries show quite a few more items in the plan cache than Spotlight does. Spotlight's "procedure cache" view seems to show only stored procedures.
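For reference, the sort of DMV query I mean is a generic sketch like this one, which breaks the plan cache down by object type:

-- Rough breakdown of the plan cache by object type (ad hoc, prepared, procs, etc.)
SELECT objtype,
       COUNT(*) AS plan_count,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;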
December 1, 2010 at 7:45 am
Indianrock (12/1/2010)
Thanks Kevin. I believe we have the auto update of stats happening asynchronously (that feature is turned on). When I say cache flush, I'm referring to those times when it pretty much empties itself, not just a plan here or there being flushed. How exactly would you look at paging to determine if our memory settings are too high? We have Quest Spotlight and Quest Performance Analysis for SQL Server, which help a great deal, but I've noticed that DMV queries show quite a few more items in the plan cache than Spotlight does. Spotlight's "procedure cache" view seems to show only stored procedures.
1) Page Faults and PF Delta in Task Manager are real-timeish counters for paging. There are similar ones in perfmon too, but I can't recall exactly what they are.
2) I was speaking of the data buffer pool being flushed as an issue. Procedure cache - you say "pretty much empties itself". That is a peculiar statement. That could be an indication that your memory settings are indeed too high and memory is having to be forcibly flushed due to OS signals into SQL Server or from within the SQLOS itself.
3) Have you reviewed the output from DBCC MEMORYSTATUS? It's undocumented in BOL, but you can find information online (including from Microsoft). There's a quick example after this list.
4) Online you can find many queries to review the various caches in the engine. Our own Glenn Berry has some very good scripts for the different flavors of SQL Server.
5) Another question that arises: do you have Lock Pages in Memory enabled? Read and really understand this if you do: http://support.microsoft.com/kb/918483
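On points 3 and 4, something as simple as the following shows where the memory is going; the memory clerks query is a generic sketch (not one of Glenn's scripts) using the SQL 2005/2008 column names:

-- Raw dump of SQL Server memory usage (undocumented, but widely written about)
DBCC MEMORYSTATUS;

-- Top memory consumers, e.g. buffer pool vs. plan cache (column names are pre-2012)
SELECT TOP (10) type,
       SUM(single_pages_kb + multi_pages_kb) / 1024 AS memory_mb
FROM sys.dm_os_memory_clerks
GROUP BY type
ORDER BY memory_mb DESC;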
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 1, 2010 at 7:48 am
I see this morning that the plan cache, after being nearly empty for many hours, grew fairly rapidly up to a few GB starting at 12:45am. At the same time, page file read transfers dropped from 400-500/second to around 100/sec. What does this tell me, if anything?
December 1, 2010 at 7:51 am
My systems team assures me that, via group policy, the SQL service has the Lock Pages in Memory permission. Yes, it's only the procedure/plan cache that is an issue. The data buffer stays around 45-50GB all the time.
December 1, 2010 at 9:39 am
Indianrock (12/1/2010)
I see this morning that the plan cache, after being nearly empty for many hours, grew fairly rapidly up to a few GB starting at 12:45am. At the same time, page file read transfers dropped from 400-500/second to around 100/sec. What does this tell me, if anything?
Those aren't enough data points to say what might be going on. 400-500 pages/sec steady state is quite active. I'm uncertain why it dropped as the plan cache went up - it could well be unrelated. I also have no idea why the plan cache was empty for hours. Quite peculiar. I still think the most likely scenario is memory pressure causing various buffers to be flushed. With lock pages in memory, only a "give me memory NOW" signal from the OS will flush the buffer pool, so non-buffer-pool items like the proc cache are what get released to free up memory. Reread that article. Also set your min memory back to zero, and I would drop max memory too.
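For the memory settings, something along these lines (the 55GB figure is just the number already mentioned in this thread; test the right value on your own box):

-- Put min server memory back to the default and cap max below total RAM,
-- leaving headroom for the OS and non-buffer-pool allocations
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 0;
EXEC sp_configure 'max server memory (MB)', 56320;  -- 55 GB
RECONFIGURE;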
Time yet again for me to pull out the "what you REALLY need is to get a performance tuning professional in for a system review" line. That is better advice than everything you have seen on this thread thus far put together.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
December 2, 2010 at 5:55 am
I agree about the need for an experienced tuning consultant. Now we're about to add another 64GB of memory, bringing this system to 128GB.
So the takeaways from this thread are:
1. Normally leave auto update of stats on
2. Turn auto update off on certain indexes with alter index ... set (statistics_norecompute = on) and update them manually as needed
3. Don't set server max memory too high, and remember that in SQL 2005 the setting governs only the buffer pool; memory allocated outside the buffer pool (thread stacks, multi-page allocations, CLR, and other overhead) comes on top of that setting.
4. A nearly empty plan cache that stays nearly empty, especially with elevated CPU, may indicate memory pressure rather than bad plans "stuck" in the cache.
I read that in SQL 2008 the DBCC FREEPROCCACHE command allows you to specify a specific plan handle rather than flushing the entire plan cache.
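As I understand it, the pattern is something like this (the filter text and the plan handle shown are placeholders; you paste in the real handle returned by the first query):

-- Find the plan handle for the plan you want to evict (illustrative filter)
SELECT cp.plan_handle, cp.usecounts, st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%SomeProblemQuery%';

-- Then clear just that plan instead of the whole cache (SQL 2008 and later)
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);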
Next I want to look more closely at all of the things that prevent an execution plan from being cached. For example, whether operators like TOP and NOT IN prevent caching only if not parameterized.