February 20, 2010 at 5:02 am
Hi,
I'm running SQL Server 2000 Enterprise Edition on Win2k3 Enterprise Edition and I've been having quite a few performance problems. I've resolved most of them, but I'm hoping someone might be able to help with the remaining two:
1. All our code is executed via stored procedures, and when I analyse the code and run it through the query execution plan I'm seeing a large number of table scans. In the code I'm seeing the creation of a lot of table variables which are populated with data from real tables; the table variables are then joined to other real tables to produce the output. I've inherited this code and I'm told that a re-write just isn't possible at the present time.
My question is, could all the table scans be caused by the fact that table variables can't be indexed and therefore when columns in the table variables are joined to the real tables this is causing table scans?
Table variables are being used instead of temp tables because SQL Server would re-compile the procedure each time it was executed if temp tables were used. Having said that, if temp tables were used instead of table variables I would be able to index the columns, so is this a better option? The data volumes are in the millions, so I'm assuming the creation of indexes will help reduce the number of table scans.
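To illustrate, the pattern in the procs looks roughly like this (table and column names are invented for the example):

```sql
-- Hypothetical sketch of the existing pattern: populate a table
-- variable from a real table, then join it back to other real tables.
DECLARE @Orders TABLE (OrderID int, CustomerID int);

INSERT INTO @Orders (OrderID, CustomerID)
SELECT OrderID, CustomerID
FROM dbo.Orders
WHERE OrderDate >= '20100101';

-- The table variable is then joined to permanent tables for the output:
SELECT o.OrderID, c.CompanyName
FROM @Orders o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;
```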
2. I'm also seeing a lot of deadlocks, could this be due to the lack of indexing as well? Would it help to set an Isolation Level prior to executing the procedures to help reduce this?
Thanks in advance for any help, advice, or suggestions.
www.sqlAssociates.co.uk
February 20, 2010 at 10:35 am
yes
and
yes, maybe.
All of this requires testing. Moving to temp tables could help, but you would have to run a lot of tests, probably some on the live system, to see if there is a better execution plan being produced.
In terms of isolation levels, that might help, but again, be sure this won't cause issues. For deadlocks, be sure that the code you check accesses tables in the same order. I've seen order reversed in code and that can easily cause deadlocks.
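A minimal sketch of the access-order point (table and column names invented):

```sql
-- Deadlock-prone: two procs lock the same tables in opposite order.
--   Proc A: UPDATE dbo.Accounts ...  then  UPDATE dbo.Orders ...
--   Proc B: UPDATE dbo.Orders ...    then  UPDATE dbo.Accounts ...
-- Fix: make every proc touch the tables in the same order, e.g.
BEGIN TRAN;
UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountID = 1;
UPDATE dbo.Orders   SET Status  = 'PAID'       WHERE OrderID   = 42;
COMMIT TRAN;
```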
February 20, 2010 at 11:49 am
I don't think it's just the lack of indexes that's hurting you (fwiw, you can create some indexes on table vars), it's lack of statistics. The optimiser's going to be assuming only 1 row in those table variables, hence it will generate a plan optimal for that. With no recompile and no stats, there's no way for it to know any better.
A plan optimal for 1 row is seldom optimal when there are millions of rows.
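To illustrate the difference (invented names): on SQL 2000 the only indexes a table variable can have are the ones created implicitly by constraints in its declaration, and it never gets column statistics, whereas a temp table gets both.

```sql
-- Table variable: indexes only via PRIMARY KEY / UNIQUE constraints
-- declared inline, and no statistics are ever created.
DECLARE @t TABLE (ID int PRIMARY KEY, Val varchar(50));

-- Temp table: explicit indexes plus column statistics, so after
-- population the optimiser can see how many rows are really there.
CREATE TABLE #t (ID int, Val varchar(50));
CREATE CLUSTERED INDEX IX_t_ID ON #t (ID);
```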
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 20, 2010 at 11:59 am
Chris
If this seems way too obvious then I apologise beforehand - table variables can very often be directly swapped out for derived tables which the optimiser knows and understands.
Are the table variables updated before they are JOINed?
Can you please confirm which version of SQL Server you are using?
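To illustrate the swap (invented names): where a table variable is populated once and then only joined, the same logic can often be inlined as a derived table, which the optimiser can estimate properly against the base table's indexes.

```sql
-- Instead of: DECLARE @recent TABLE (...); INSERT ...; SELECT ... JOIN @recent
-- the populate-then-join step can be written as a derived table:
SELECT o.OrderID, c.CompanyName
FROM (SELECT OrderID, CustomerID
      FROM dbo.Orders
      WHERE OrderDate >= '20100101') AS o
JOIN dbo.Customers c ON c.CustomerID = o.CustomerID;
```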
Cheers
ChrisM
For better assistance in answering your questions, please read this.
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden
February 20, 2010 at 12:13 pm
ChrisM@home (2/20/2010)
Can you please confirm which version of SQL Server you are using?
Chris Kitchen (2/20/2010)
I'm running SQL Server 2000 Enterprise Edition on Win2k3 Enterprise Edition
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 20, 2010 at 12:28 pm
GilaMonster (2/20/2010)
ChrisM@home (2/20/2010)
Can you please confirm which version of SQL Server you are using?
Chris Kitchen (2/20/2010)
I'm running SQL Server 2000 Enterprise Edition on Win2k3 Enterprise Edition
D'oh, thanks Gail.
For better assistance in answering your questions, please read this.
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop - Jeff Moden
February 20, 2010 at 5:49 pm
Thanks guys for your comments, they are really useful and greatly appreciated.
So from the information you've provided, I'm going to look to do the following:
1. Look at the ordering of the syntax.
2. Test temp tables rather than table variables.
3. Test isolation level setting.
A couple of quick questions:
1. Would you recommend looking into anything else to try and pin point where the table scans and deadlocks are coming from?
2. Using temp tables rather than table variables means SQL Server will re-compile the procedures each time they are executed, could you please confirm this is correct?
3. I'm certain the client will want to "read committed", therefore I might just use "NOLOCK" in the statements.
Many thanks again guys.
Chris
www.sqlAssociates.co.uk
February 20, 2010 at 11:46 pm
Chris Kitchen (2/20/2010)
1. Would you recommend looking into anything else to try and pin point where the table scans and deadlocks are coming from?
Yes. Look at the exec plans, look at the procs to see if they are written optimally, look at the indexes on the permanent tables, and see if they support the workload. If you need help, post here.
2. Using temp tables rather than table variables means SQL Server will re-compile the procedures each time they are executed, could you please confirm this is correct?
On SQL 2000, yes, possibly more than once. It's those recompiles that allow the query optimiser to produce accurate plans, ones that account for the number of rows that really are in the table.
The guideline here is to create all temp tables upfront (at the beginning of the proc) to minimise the number of times the proc will compile due to DDL changes.
Since it will recompile on every execution, you may want to add WITH RECOMPILE to the proc definition. That'll remove the possibility of contention on the plan cache, which is something I've seen on SQL 2000 with lots of fast-running procs with temp tables.
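The two points above can be sketched like this (proc and table names invented):

```sql
-- Sketch: all temp-table DDL up front, before any DML, to minimise
-- mid-proc recompiles; WITH RECOMPILE accepts the per-execution
-- compile explicitly and keeps the plan out of the cache.
CREATE PROCEDURE dbo.usp_Example
WITH RECOMPILE
AS
BEGIN
    -- DDL first:
    CREATE TABLE #work (ID int, Amount money);
    CREATE CLUSTERED INDEX IX_work ON #work (ID);

    -- then DML:
    INSERT INTO #work (ID, Amount)
    SELECT CustomerID, SUM(TotalDue)
    FROM dbo.Orders
    GROUP BY CustomerID;

    SELECT w.ID, w.Amount FROM #work w WHERE w.Amount > 1000;
END
```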
3. I'm certain the client will want to "read committed", therefore I might just use "NOLOCK" in the statements.
Will the client mind if the data is sometimes inaccurate (duplicate rows, perhaps missing rows, in a way that cannot be reproduced)? If so, stay away from NOLOCK. In fact, I'd say stay away from NOLOCK altogether, unless you have exhausted every other possibility and still can't get this fixed.
See - http://sqlblog.com/blogs/andrew_kelly/archive/2009/04/10/how-dirty-are-your-reads.aspx
This is especially pertinent (and likely to occur) if you have lots of table scans happening.
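For reference, this is what the hint looks like as a per-statement table hint (invented names) — shown only so it's recognisable; all the caveats above about dirty reads apply:

```sql
-- Reads uncommitted data: rows may be dirty, duplicated, or missed
-- entirely while a scan walks pages that are being split.
SELECT o.OrderID, o.Status
FROM dbo.Orders o WITH (NOLOCK)
WHERE o.CustomerID = 42;
```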
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 21, 2010 at 8:09 pm
You mentioned that you had a fair number of deadlocks and the code causing those deadlocks is also responsible for a fair bit of your performance problem. First, when a deadlock occurs, there's always a "victim" and the victim gets rolled back... which takes time.
Second, deadlocks are most likely to occur (if they occur) when there are explicit transactions encapsulating more than one query at a time. I'd check to make sure that those transactions have error handling code associated with them or there's simply no need for the explicit transaction to begin with. Look into SET XACT_ABORT ON as well. My recommendation is that there's no such thing as a good or even benign deadlock. I would concentrate my efforts there first. Long winded, poorly written, and unnecessary explicit transactions will be a large part of the performance problem you're having. Most of the rest will just be poor code and incorrect indexing in general.
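On SQL 2000 there's no TRY/CATCH, so error handling around an explicit transaction typically means checking @@ERROR after each statement — a sketch with invented names:

```sql
CREATE PROCEDURE dbo.usp_TransferExample
AS
BEGIN
    SET XACT_ABORT ON;  -- abort and roll back on most run-time errors

    BEGIN TRAN;

    UPDATE dbo.Accounts SET Balance = Balance - 10 WHERE AccountID = 1;
    IF @@ERROR <> 0 BEGIN ROLLBACK TRAN; RETURN 1; END

    UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE AccountID = 2;
    IF @@ERROR <> 0 BEGIN ROLLBACK TRAN; RETURN 1; END

    COMMIT TRAN;
    RETURN 0;
END
```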
Another less obvious source of deadlocks and performance problems are in triggers. See if any of the tables have triggers.
As a side bar, recompiles are not necessarily bad... like everything else, "It Depends".
--Jeff Moden
Change is inevitable... Change for the better is not.
February 21, 2010 at 8:21 pm
Chris Kitchen (2/20/2010)
I've inherited this code and I'm told that a re-write just isn't possible at the present time.
p.s. I honestly feel for you. You're told to fix inherited technical debt that was caused by someone in a hurry, and you're also told that no one else is going to care enough "at the present time". I went through a similar thing at one of my jobs... inherited an average of 640 deadlocks a day (with spikes to 4000 a day) and most nightly runs were appropriately named because they did, in fact, take the whole bloody night and part of the next day to run. I believe you'll find that it's all caused by just one thing... crap code. You'll need to find the source(s) of the deadlocks and a couple of the big problems and fix them yourself. Make sure you keep very well written "Before'n'After" stats to show the big dogs. Start training your current developers to write better code and, maybe, make fixes when they run across bad code. Gain their trust and have them bring you anything that they think might be a problem. Shoot... get management to buy into it and find some way to reward the developers for turning in/fixing problem code they find while doing mods to existing code. Remember... reward the messengers.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 22, 2010 at 12:58 am
Hi guys
Thanks for all your comments, they've given me a great starting point. With regards to the following comment could someone please explain what it means or point me in the direction of an article (I've had a look around but can't find anything specific enough)
"Second, deadlocks are most likely to occur (if they occur) when there are explicit transactions encapsulating more than one query at a time. I'd check to make sure that those transactions have error handling"
None of the procedures contain any roll back syntax or commit transaction syntax, they basically contain a number of SQL statements written one after another and most procedures are a few hundred lines long.
Thanks again,
Chris
www.sqlAssociates.co.uk
February 22, 2010 at 9:10 am
I use this article (and the two follow-on ones) as my Bible for troubleshooting/resolving deadlocks:
http://blogs.msdn.com/bartd/archive/2006/09/09/Deadlock-Troubleshooting_2C00_-Part-1.aspx
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
February 22, 2010 at 9:18 am
Hi,
I couldn't see a link to the articles in your reply, would you mind posting again?
Many thanks,
Chris
www.sqlAssociates.co.uk
February 22, 2010 at 9:26 am
Chris Kitchen (2/22/2010)
Hi, I couldn't see a link to the articles in your reply, would you mind posting again?
Many thanks,
Chris
There's a very good reason why you didn't see the link, but I won't bore you with the details . . . :hehe:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service