September 5, 2008 at 6:52 am
@Moderator : I hope I'm posting in the right spot, if not, feel free to move it to a more appropriate place.
Hi,
For a couple of months now I have been running into severe performance problems from time to time.
The first time I noticed it, a query that used to take <3 secs had been running for more than 10 hours and still had not produced results.
I looked for (b)locking, server load ... but did not find anything wrong. I eventually aborted the query and ... about 5 hours later performance was back at <3 secs.
A month later the query was executed again and I had to kill it after 15 hours; again, a couple of hours later performance was back at <3 secs (again without me making any changes).
At that time I could not think of any logical explanation, so I was rather puzzled. Searching the internet, it suddenly hit me: could this be a statistics problem? Would that be the logical explanation?
A couple of weeks ago we had similar trouble with another query, which was immediately solved when we issued an 'UPDATE STATISTICS tablename' for the implicated table.
In the meantime I found yet another problem query and solved it the same way, but this time I had SQL Profiler running:
Before the update statistics the query performed > 80,000,000 logical reads and took > 90 minutes.
After the update statistics it only needed about 240,000 logical reads and finished in 28 seconds.
So apparently I suddenly have to issue UPDATE STATISTICS statements!
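For reference, the fix is just the plain statement, run right after the load (the table name below is only an illustration, not our real one):

-- refresh all statistics on the implicated table
-- (add WITH FULLSCAN to sample every row instead of a subset)
UPDATE STATISTICS dbo.AccountDetail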
Both "Auto create Statistics" and "Auto update statistics" are enabled.
The query runs in a DTS package right after the load of the table; this has been the case for years and I have never had problems.
To my knowledge the only thing we changed on the SQL Server was the installation of SQL Server 2000 SP4. Did the "UPDATE STATISTICS" strategy change? Has the query optimizer changed?
Has anyone got any idea how I can prevent this behavior?
I've got serious performance issues in another application; I'm not sure (yet) if they have the same cause, but I do suspect so.
Thanks for any hints, thoughts or solutions
Marc
September 5, 2008 at 8:43 am
1st of all, auto update is only triggered after a large percentage of the data has changed (for SQL Server 2000 the threshold is roughly 20% of the rows, plus 500), not before that. So if you have 1M rows, you need over 200,000 row changes before the auto update kicks in, which is way too much. You might consider updating statistics yourself daily, or a couple of times a week.
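A minimal sketch of such a scheduled job, assuming you just want to refresh the whole database every night (the database name is a placeholder):

-- refresh statistics on all user tables in the database
USE YourDatabase
EXEC sp_updatestats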
2nd, this seems to be a case of parameter sniffing, where 1 plan works for 90% of the cases but brings the server to its knees the other 10%. One way around that is to recompile the plan on every execution. Search this site for other options; this has been discussed 100s of times here.
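A sketch of the recompile approach; all object and column names here are illustrative, not from the original post:

-- WITH RECOMPILE stops the plan from being cached, so every call
-- is optimized for the parameter values it actually receives
CREATE PROCEDURE dbo.usp_MonthlyTotals
    @AcctYear  int,
    @AcctMonth int
WITH RECOMPILE
AS
SELECT SUM(Amount)
FROM dbo.AccountDetail
WHERE AcctYear = @AcctYear
  AND AcctMonth = @AcctMonth
GO

-- or flag an existing object so its plan is rebuilt on the next run
EXEC sp_recompile 'dbo.usp_MonthlyTotals'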
September 5, 2008 at 5:41 pm
Thank you for your reply
Ninja's_RGR'us (9/5/2008)
1st of all, auto update is only triggered after a large percentage of the data has changed (for SQL Server 2000 the threshold is roughly 20% of the rows, plus 500), not before that. So if you have 1M rows, you need over 200,000 row changes before the auto update kicks in, which is way too much. You might consider updating statistics yourself daily, or a couple of times a week.
I already inserted update statistics commands right after the load of the data tables.
I don't think, however, that a sudden lack of changed data triggers these problems; all of a sudden it starts to happen, on different queries, on different tables, even in different databases. It seems to me that that's too much of a coincidence.
Furthermore, on one of the queries the problem "resolved itself" after a couple of hours, without any update statistics or additional load of data.
Ninja's_RGR'us (9/5/2008)
2nd, this seems to be a case of parameter sniffing, where 1 plan works for 90% of the cases but brings the server to its knees the other 10%. One way around that is to recompile the plan on every execution. Search this site for other options; this has been discussed 100s of times here.
Prior to making the post I did a lot of searching and reading, and had indeed read different posts on the problem of cached execution plans and parameter sniffing, but thought I could rule that out for two reasons:
1. The only parameter that changes in the whole query is the account year and month; as we have roughly the same volume of data every month, this could/should, imho, not lead to a different execution plan.
2. Again, it seems to be too much of a coincidence that all of a sudden different queries on different tables ... act up.
Marc
PS Earlier this evening another topic was posted describing a problem which seems very similar to mine.
http://www.sqlservercentral.com/Forums/Topic564844-65-1.aspx
September 5, 2008 at 8:03 pm
That's a long shot, but any chance that you have both execution plans?
Do you have blocking issues?
Do you have a big job running on the server that could eat up a lot of resources (even outside SQL Server)?
September 8, 2008 at 12:21 pm
Hi Marc/Ninja,
I have a similar post below:
http://www.sqlservercentral.com/Forums/Topic564844-65-1.aspx
Thanks
September 8, 2008 at 12:51 pm
Just curious: do you have "auto shrink" enabled for this database?
(Bad practice! If you actually need a shrink operation, you'd be better off doing it in a planned and controlled way.)
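You can check the setting and, if needed, turn it off like this (the database name is a placeholder):

-- returns 1 if auto shrink is enabled
SELECT DATABASEPROPERTYEX('YourDatabase', 'IsAutoShrink')

-- disable it; if you must shrink, do it manually in a maintenance window
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF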
Johan
September 8, 2008 at 1:54 pm
Thanks ALZDBA for your response. The "auto shrink" option is turned off. Finally, it worked. When I checked the event log, there was a "tempdb full" error at the exact same time the package threw the error "Invalid destination table". However, everything is running fine now and the package succeeded.
My sincere thanks to Marc.
September 8, 2008 at 11:30 pm
Ninja's_RGR'us (9/5/2008)
That's a long shot, but any chance that you have both execution plans?
Sorry, no execution plans. They would probably be (very) different, but even if they were the same I am not sure SQL Server would effectively execute the query the same way (see topic http://www.sqlservercentral.com/Forums/Topic564620-9-1.aspx).
Ninja's_RGR'us (9/5/2008)
Do you have blocking issues?
Definitely not the case here; that was the first thing I checked.
Ninja's_RGR'us (9/5/2008)
Do you have a big job running on the server that could eat up a lot of resources (even outside SQL Server)?
No big job running concurrently, and CPU usage is < 50%.
September 8, 2008 at 11:52 pm
ALZDBA (9/8/2008)
Just curious: do you have "auto shrink" enabled for this database? (Bad practice! If you actually need a shrink operation, you'd be better off doing it in a planned and controlled way.)
Auto shrink is enabled, but imho this cannot be the cause, as:
- the DBs only grow, especially during the monthly closing procedures
- performance was back after updating statistics
September 9, 2008 at 1:29 am
Considerations for the "autogrow" and "autoshrink" settings in SQL Server
http://support.microsoft.com/kb/315512
Johan
September 9, 2008 at 7:50 am
1) Turn off autoshrink. No back talk - just do it. There are a ton of bad things that happen when this fires (which it does - SURPRISE - automatically).
2) Double the size of your database file(s). This will provide contiguous disk space for the next step.
3) Rebuild all indexes (with appropriate fill factors). Actually, given how bad your fragmentation probably is, you should drop all NC indexes (script them first!), rebuild the clustered indexes, then recreate all NC indexes.
4) Set up a job to maintain your indexes on an appropriate schedule based on table/db activity. BOL has a script to get you started (do NOT use the maintenance plan tool for this!).
5) Consider targeted update statistics jobs for high-activity tables to occur outside of the index maint schedule. I have some clients that update stats on a few key tables multiple times a day. (See the sketch below for what these steps look like.)
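A rough sketch of steps 1 through 3 and 5 in SQL 2000 syntax; all names and sizes are placeholders to adapt to your own database:

-- 1) turn off autoshrink
ALTER DATABASE YourDatabase SET AUTO_SHRINK OFF

-- 2) pre-grow the data file so the rebuilds get contiguous space
ALTER DATABASE YourDatabase
MODIFY FILE (NAME = YourDatabase_Data, SIZE = 20000MB)

-- 3) rebuild every index on a table with a 90% fill factor
DBCC DBREINDEX ('dbo.AccountDetail', '', 90)

-- 5) targeted statistics refresh on a high-activity table
UPDATE STATISTICS dbo.AccountDetail WITH FULLSCAN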
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service