May 11, 2009 at 7:58 am
We support a third party application with a big database (MSSQL 2k5 9.0.3282 / ~1300 tables / ~800Gb) and two weeks ago we went live with a new module. The vendor runs a job to update statistics every night at 2am and the DB is set with Auto Create Statistics ON, Auto Update Statistics ON and Auto Update Statistics Asynchronously ON.
After the update statistics job ran on the first Saturday after go-live, our performance degraded a lot! After investigating we discovered that some big tables' indexes were heavily fragmented, so we defragmented them and performance came back to normal...
Now, this past Saturday (the second after go-live) it happened again! But this time the indexes are not fragmented... No other extra job runs on Saturday, so in theory the update stats job runs under the same circumstances every night...
Vendor support is kind of slow and we are kind of clueless, so: any ideas or suggestions?
May 11, 2009 at 8:05 am
- Are you sure the slow response is not caused by another issue (network, other resources on the server)? Can you reproduce a "slow" query?
- Are you sure the indexes are not fragmented?
- Strictly speaking, an execution plan can change over time as the number of rows changes
- Is the statistics update method the same every time?
Wilfred
The best things in life are the simple things
May 11, 2009 at 8:16 am
- Are you sure the slow response is not caused by another issue (network, other resources on the server)? Can you reproduce a "slow" query?
Network & server are fine and we can reproduce the slow query... It went from 2 seconds avg to 80 seconds avg...
- Are you sure the indexes are not fragmented?
Yes, we just checked that...
- Strictly speaking, an execution plan can change over time as the number of rows changes
True, but it didn't change much from 1:50am to 2:10am on Saturday
- Is the statistics update method the same every time?
Yes...
Keep the question and suggestions coming guys!
May 11, 2009 at 8:33 am
From BOL (the 2008 version; I couldn't find it in the 2005 docs, but I'm sure it's there!):
"Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too frequently because there is a performance tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend on your application."
Are you sure it's not just that every query has to recompile?
May 11, 2009 at 10:18 am
BU69 (5/11/2009)
From BOL (the 2008 version; I couldn't find it in the 2005 docs, but I'm sure it's there!): "Updating statistics ensures that queries compile with up-to-date statistics. However, updating statistics causes queries to recompile. We recommend not updating statistics too frequently because there is a performance tradeoff between improving query plans and the time it takes to recompile queries. The specific tradeoffs depend on your application."
Are you sure it's not just that every query has to recompile?
The test for this is just to run the query twice. It should only have to recompile the first time.
[font="Times New Roman"]-- RBarryYoung[/font], [font="Times New Roman"] (302)375-0451[/font] blog: MovingSQL.com, Twitter: @RBarryYoung[font="Arial Black"]
Proactive Performance Solutions, Inc. [/font][font="Verdana"] "Performance is our middle name."[/font]
May 11, 2009 at 11:04 am
That's correct. Queries are not being recompiled.
May 11, 2009 at 1:16 pm
I'm assuming that the update statistics job is not based on a full scan. It's possible that the statistics from a sampled scan aren't as good as the ones you get when you rebuild the index during the defrag (which effectively does a full scan). You might want to customize the statistics update so that it either skips those tables or does a full scan on them.
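If you want to see how heavily a given statistic was sampled, something along these lines should show it (the table and index names here are just placeholders):

-- Compare "Rows" vs "Rows Sampled" in the header to see the sampling rate
-- dbo.BigTable / IX_BigTable_Col1 are placeholder names
DBCC SHOW_STATISTICS ('dbo.BigTable', 'IX_BigTable_Col1') WITH STAT_HEADER;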
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 11, 2009 at 1:22 pm
Ivan Ojeda (5/11/2009)
We support a third party application with a big database (MSSQL 2k5 9.0.3282 / ~1300 tables / ~800Gb) and two weeks ago we went live with a new module. The vendor runs a job to update statistics every night at 2am and the DB is set with Auto Create Statistics ON, Auto Update Statistics ON and Auto Update Statistics Asynchronously ON.
But why a job every night if you have Auto Update Statistics ON? You generally either leave Auto Update Statistics ON, or turn it OFF and run a job to update statistics manually, but not both. Ideally it's best practice to update the statistics manually and turn auto update statistics off.
After the update statistics job ran on the first Saturday after go-live, our performance degraded a lot! After investigating we discovered that some big tables' indexes were heavily fragmented, so we defragmented them and performance came back to normal...
How do you know that your server performance degraded? And why do you think it's only after updating the statistics? Have you considered any other issues on your server?
Now, this past Saturday (the second after go-live) it happened again! But this time the indexes are not fragmented... No other extra job runs on Saturday, so in theory the update stats job runs under the same circumstances every night...
Vendor support is kind of slow and we are kind of clueless, so: any ideas or suggestions?
What is the configuration of your server? And what type of queries do you run against it? Do you have proper indexes in place before you even defragment them or consider updating their statistics?
Thanks,
May 11, 2009 at 1:34 pm
Krishna Potlakayala (5/11/2009)
But why a job every night if you have Auto Update Statistics ON? You generally either leave Auto Update Statistics ON, or turn it OFF and run a job to update statistics manually, but not both. Ideally it's best practice to update the statistics manually and turn auto update statistics off.
Actually, I think most people use the automatic statistics updates to maintain the system, which works fine in most situations most of the time, and then run a manual update on a regular basis just to clean up the statistics on those tables where the automatic update didn't fire as data changed over time. Some tables will also need a full scan rather than the sampled scan.
I wouldn't recommend an either/or approach on this unless there was an overriding reason to disable the automatic statistics maintenance.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 11, 2009 at 2:10 pm
Ivan Ojeda (5/11/2009)
... and Auto Update Statistics Asynchronously ON.
Make sure you take good note of this, because it actually runs on a background connection to the database, and that will cause you some serious grief if you forget about it when trying to put the database in single_user mode for maintenance, or to detach it for movement to a different server.
I've been bitten by this in the past when trying to commit changes that require exclusive access to the database, like enabling/disabling Service Broker or changing parameterization options. It is always the last thing you think about when doing something like this.
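Something like this (the database name is a placeholder) before you try to take exclusive access can save you that grief:

-- Turn off the async stats background task first, then take exclusive access
-- [YourDb] is a placeholder database name
ALTER DATABASE [YourDb] SET AUTO_UPDATE_STATISTICS_ASYNC OFF;
ALTER DATABASE [YourDb] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;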
Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
May 11, 2009 at 2:12 pm
Grant Fritchey (5/11/2009)
I wouldn't recommend an either/or approach on this unless there was an overriding reason to disable the automatic statistics maintenance.
I agree with Grant on this point. You should leave your auto-update stats enabled, and just add a job to perform an UPDATE STATISTICS ... WITH FULLSCAN on the tables that require it due to odd distribution of data. You may find that only one or two tables actually need to be updated with a FULLSCAN.
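For example, the job could just run something along these lines for the handful of tables that need it (table names are placeholders):

-- Full-scan statistics update for specific tables only (placeholder names)
UPDATE STATISTICS dbo.BigOrderTable WITH FULLSCAN;
UPDATE STATISTICS dbo.BigOrderLineTable WITH FULLSCAN;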
Jonathan Kehayias | Principal Consultant | MCM: SQL Server 2008
My Blog | Twitter | MVP Profile
Training | Consulting | Become a SQLskills Insider
Troubleshooting SQL Server: A Guide for Accidental DBAs
May 11, 2009 at 2:23 pm
With a system like yours that is apparently vulnerable to fast degradation of statistics, yet doesn't perform well after the daily update stats, I would try disabling the daily update stats job and seeing how outdated the statistics actually get.
-- When are autostats refreshed
SELECT DB_NAME() AS dbname
     , RTRIM(OBJECT_NAME(I.[id])) AS tablename
     , RTRIM(I.[name]) AS ixname
     , DATALENGTH(statblob) AS stats_size
     , STATS_DATE(I.[id], I.indid) AS last_updated
FROM sysindexes AS I
WHERE OBJECTPROPERTY(I.[id], N'IsUserTable') = 1
  AND INDEXPROPERTY(I.[id], I.[name], 'IsAutoStatistics') = 1
ORDER BY tablename, last_updated ASC, stats_size DESC;
Or run sp_updatestats plus DBCC FREEPROCCACHE (at least for the db in this case) to be sure all plans are recompiled.
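Roughly like this (keep in mind that on SQL 2005, DBCC FREEPROCCACHE clears the plan cache for the whole instance, not just one database):

USE YourDb;           -- placeholder database name
EXEC sp_updatestats;  -- updates stats that have seen modifications, using the default sample
DBCC FREEPROCCACHE;   -- flushes the plan cache so everything recompiles against the new stats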
Auto update stats needs a certain proportion of the data to be modified before it fires, so with larger volumes that may be an issue if you mostly query the most recent data, which differs statistically from the older data.
A clustered index usage or partitioning study might also give you more accurate statistics behaviour, as well as more optimal IO.
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me
May 11, 2009 at 2:24 pm
Jonathan Kehayias (5/11/2009)
Grant Fritchey (5/11/2009)
I wouldn't recommend an either/or approach on this unless there was an overriding reason to disable the automatic statistics maintenance.
I agree with Grant on this point. You should leave your auto-update stats enabled, and just add a job to perform an UPDATE STATISTICS ... WITH FULLSCAN on the tables that require it due to odd distribution of data. You may find that only one or two tables actually need to be updated with a FULLSCAN.
I agree with using WITH FULLSCAN only on the tables that need it rather than updating everything, but doesn't it still degrade performance slightly?
To OP:
How do you update the statistics and what are the steps involved?
May 13, 2009 at 6:26 am
Thanks for all the replies!
We are bound by SLAs on what we can and cannot do to this server... Adding, changing or removing indexes is a big no-no... Maintenance tasks are kind of in a gray area...
We actually involved the vendor on this issue and they are still 'investigating' 😉
To monitor performance we use a few queries that are very consistent in the number of records returned and in execution time, and we noticed their performance degrade after the Saturday 2am run of the update stats job two Saturdays in a row.
The good news is that we found the issue:
Since we were clueless about what it was, we went back to basics. A colleague of mine started playing with a sproc affected by the issue and, for the heck of it, replaced a temp table with a table variable, and performance improved from 2 minutes to 2 seconds! That gave us the break we needed.
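To illustrate the kind of change (the table and column names here are completely made up), it was essentially:

-- Before: temp table, created in tempdb, gets its own statistics
CREATE TABLE #Work (ID int PRIMARY KEY, Amount money);
INSERT INTO #Work (ID, Amount)
SELECT OrderID, Amount FROM dbo.Orders WHERE OrderDate >= '20090501';

-- After: table variable, also lives in tempdb but has no statistics
-- and triggers fewer recompilations
DECLARE @Work TABLE (ID int PRIMARY KEY, Amount money);
INSERT INTO @Work (ID, Amount)
SELECT OrderID, Amount FROM dbo.Orders WHERE OrderDate >= '20090501';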
After investigation, we found out 2 things:
- The new app module uses a lot of temp tables
- one of the data files for tempdb (usually the biggest one with the most free space, and therefore the one most used by SQL) was on the same drive where the full backup of the db is stored (~800Gb) and gets picked up by IBM Tivoli (which takes about 10 hours and starts at 2am...) -- see the query below for a quick way to check this
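For anyone wanting to check the same thing, a query along these lines shows where the tempdb files live and how big they are:

-- Where do the tempdb files live and how big are they?
SELECT name, physical_name, size * 8 / 1024 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');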
We still do not understand why it happened only on Saturdays but it is fixed now.
Thanks a lot!
May 13, 2009 at 7:13 am
What's your backup frequency ?
You may be experiencing the result of an IO shortage...
If you have online index rebuilds, tempdb will be used!
This may cause read/write contention between the tempdb file(s) and your bak files...
Perfmon will show disk queuing...
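If you want numbers straight from SQL Server rather than Perfmon, something along these lines (cumulative since the last restart) shows the IO stalls per file:

-- Cumulative IO stalls per database file since the last restart
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms,
       vfs.io_stall_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY vfs.io_stall_read_ms + vfs.io_stall_write_ms DESC;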
As you have noticed, just because you bought a (software) solution doesn't mean it is written optimally from every angle!
In many bigger projects, we keep traces running when they go live and monitor at least a full month's workload cycle (keep in mind this needs space!).
This way, if we get a phone call with complaints about a certain time window, we can dig into the traces and figure out what was going on.
In many cases you will detect less-than-optimal or plain bad SQL. What I do is collect those statements/procs in an inventory and hand them over to the software vendor. (If the collaboration is good, I even provide the optimisations 😉)
Johan
Learn to play, play to learn !
Dont drive faster than your guardian angel can fly ...
but keeping both feet on the ground wont get you anywhere :w00t:
- How to post Performance Problems
- How to post data/code to get the best help
- How to prevent a sore throat after hours of presenting ppt
press F1 for solution, press shift+F1 for urgent solution 😀
Need a bit of Powershell? How about this
Who am I ? Sometimes this is me but most of the time this is me