March 12, 2008 at 7:49 am
Matt Miller (3/12/2008)
I know I'm showing up late to the party - but is there any chance to make that a covering index? Assuming those extra fields aren't ludicrously long - you'd be able to get rid of the bookmark lookups....
Matt,
Mind reader!!
Thanks,
Phillip Cox
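For anyone following along, a "covering index" in SQL Server 2000 (which has no INCLUDE clause) is simply a composite index whose key contains every column the query touches, so the engine never has to do a bookmark lookup back to the base table. A minimal sketch, with made-up table and column names:

```sql
-- Hypothetical example only: table and column names are illustrative.
-- In SQL Server 2000 every column the query needs must be a key column
-- of the index for it to "cover" the query.
CREATE INDEX IX_Orders_Covering
    ON dbo.Orders (CustomerID, OrderDate, OrderStatus, TotalAmount);

-- This query can now be answered entirely from the index pages,
-- with no bookmark lookup into the clustered index / heap:
SELECT OrderDate, OrderStatus, TotalAmount
FROM dbo.Orders
WHERE CustomerID = 42;
```

The trade-off, as Matt hints, is index width: every extra key column makes the index larger and more expensive to maintain on inserts and updates.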
March 12, 2008 at 7:56 am
Matt Miller (3/12/2008)
I know I'm showing up late to the party - but is there any chance to make that a covering index? Assuming those extra fields aren't ludicrously long - you'd be able to get rid of the bookmark lookups....
Matt,
From now on I will be referring to you as the LATE Matt Miller.
Ronnie,
Here is a list of questions that have popped into my head as I have thought about the issue (questions on the same line are related):
Is the query always run from the same place in the application, or is it run from several places? If you run the "same" query once as schema.table and once as just table, you will get two plans and waste space in the cache on "duplicate" queries. You really should use the schema.table syntax.
Have you checked CacheHits, CacheMisses?
What is the order of the columns in the composite index? Do they match the order of the Where clause?
Have you tried reversing the order of the SARG to see if that causes the optimizer to always choose the composite index?
How often are CRUD operations run on this table? It could be a longer running transaction causing contention.
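Jack's first two questions can be checked directly on SQL Server 2000. A sketch, with a hypothetical table name, showing why schema-qualifying matters and how to inspect the procedure cache:

```sql
-- Illustrative only: in SQL Server 2000, these two statements can each
-- get their own cached plan, because the unqualified name has to be
-- resolved per-user while the qualified name is unambiguous.
SELECT OrderID FROM Orders WHERE CustomerID = 1;       -- ad hoc, per-user resolution
SELECT OrderID FROM dbo.Orders WHERE CustomerID = 1;   -- schema-qualified, shareable plan

-- To see what is actually sitting in the cache (SQL Server 2000):
SELECT cacheobjtype, objtype, usecounts, sql
FROM master.dbo.syscacheobjects
WHERE sql LIKE '%Orders%';
```

Cache hit/miss ratios themselves are easiest to watch in Performance Monitor under the SQLServer:Cache Manager counters.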
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
March 12, 2008 at 8:00 am
Jack Corbett (3/12/2008)
Matt,
From now on I will be referring to you as the LATE Matt Miller.
Hey hey hey....
In the words of that famous line from Monty Python and the Holy Grail...
"I'm not dead yet...."
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
March 12, 2008 at 12:26 pm
Ronnie:
You know, I do believe that I had this 4 days ago:
rbarryyoung (3/8/2008)
Although there are a huge number of possible reasons for this behavior, in my experience it is almost always due to one of two probable influences: 1) Irregularities in the optimizer/statistics facilities. SQL Server 2000 had a number of issues in this area which you can research at Microsoft; however, it is usually easy enough to tell if this may be the cause. Compare the query plan of the 20-second responses to that of the 2-second instances. If they are significantly different, then that is your likely cause. The easiest quick-and-dirty short-term fix is to add query hints to force the plan you know they should use.
-- RBarryYoung, (302)375-0451
blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
March 13, 2008 at 1:34 am
All,
Thanks for the help on this.
Ronnie
March 13, 2008 at 5:04 am
Hi,
Sorry if I'm locking the stable door after the horse has bolted, but have you checked out this article on "Parameter Sniffing"?
http://omnibuzz-sql.blogspot.com/2006/11/parameter-sniffing-stored-procedures.html
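The classic workaround that article describes is to copy the parameter into a local variable, so the optimizer compiles against the column's average statistics rather than the specific sniffed value. A minimal sketch with made-up procedure and column names:

```sql
-- Hypothetical example of the local-variable anti-sniffing pattern.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    DECLARE @LocalCustomerID int;
    SET @LocalCustomerID = @CustomerID;  -- optimizer can't sniff this value

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @LocalCustomerID;
END
```

The resulting plan is built for the "typical" parameter value, which helps when the sniffed value was atypical but can hurt when it was representative.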
March 13, 2008 at 8:19 pm
Never too late for good info... thanks for the link.
One of the things I've found is that even when you use that method to "disable" parameter sniffing, SQL Server can still make some pretty bad choices... I've found that WITH RECOMPILE usually works. I've also found that, for non-GUI-facing code, dynamic SQL works awesome as well.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 31, 2008 at 3:24 am
Hi,
We've had the same problem before, where our application was the subject of the customer's complaints. Have you checked your server RAM?
Initially we had only 1 GB of RAM with 5 client users accessing one server. Checking in Task Manager, memory usage peaked at 0.91 GB, so we just upgraded the RAM to 2 GB and everything worked like a miracle. It so happens that SQL Server 2000 peaks at around 1.7 GB of RAM usage.
So you might want to try this option.
March 31, 2008 at 3:59 am
As Phillip suggested, you may want to check the index stats. Use DBCC SHOWCONTIG to see if indexes are fragmented and consider rebuilding/defragging them. Another thing to look at is external fragmentation of the disk. Check it with a tool like DiskKeeper. That also affects DB performance.
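On SQL Server 2000 the fragmentation check and fix look roughly like this (table and index names are placeholders):

```sql
-- Inspect fragmentation for all indexes on a table:
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES;

-- If Logical Scan Fragmentation is high, defragment online...
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_Composite');  -- 0 = current database

-- ...or rebuild all the table's indexes outright (takes locks):
DBCC DBREINDEX ('dbo.Orders');
```

INDEXDEFRAG is online but only compacts and reorders leaf pages; DBREINDEX rebuilds the index completely but blocks access while it runs.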
April 3, 2008 at 11:56 pm
I wonder if your application is using cursors.