February 5, 2009 at 2:56 pm
I have an app that hits only one table. The app uses dynamic SQL to build an ad-hoc query every time. I attached a pic of the app.
This one table gets 500 inserts per second. The problem is that queries take too long to return results; one out of ten times they get a time-out.
I noticed that if I don't run DBCC FREEPROCCACHE and DBCC DROPCLEANBUFFERS, it returns results very quickly! Even if I change the columns in the WHERE clause, change the ORDER BY columns, drop a column, etc., it still returns records really quickly. This is great, and I suspect it might point to the solution to my dilemma... but...
If you look at the pic of the app you'll see there's a 'where' checkbox, and users can type in anything they want. So every query sent from this app is ad hoc. According to the developers this is unavoidable; they say there's no way to use a stored proc with the app(?)
How can I make it so that any field (or fields) they may choose in the ad-hoc query will already be in the cache and stay in the cache? If it's something that's already in the cache, it seems to really fly, even with the heavy inserts going on.
Would creating a proc that takes all the params and calls sp_executesql accomplish this? I've done a little research on sp_executesql, but am unsure whether it would work in my situation.
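To illustrate what I mean, here's a rough sketch of the sort of thing I was imagining (the column choices are just an example from my Logger2 table; I'm not sure this is the right shape):

```sql
-- Rough sketch only: parameterise the common filters so the same
-- cached plan is reused regardless of the parameter values.
DECLARE @sql nvarchar(max);
SET @sql = N'SELECT * FROM dbo.Logger2
             WHERE Severity = @Severity AND EventId = @EventId
             ORDER BY id';

EXEC sp_executesql
     @sql,
     N'@Severity int, @EventId int',   -- parameter definition string
     @Severity = 5,
     @EventId  = 9200;
```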
February 5, 2009 at 3:24 pm
The only time an adhoc plan will be reused is if the SQL matches exactly the one that created the plan. Exactly, meaning down to the white space and the data types of the parameters.
If there's any difference in the where clause, it's a different query and will get a different plan.
Have you tried querying the plan cache to see what's in it?
I suspect it's not compilation that's the problem. Unless you've got hundreds of compiles/sec and there's contention on the plan cache. Is that the case?
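A query along these lines will show what's in the plan cache and how often each adhoc plan has been reused (sys.dm_exec_cached_plans and sys.dm_exec_sql_text are the relevant DMVs):

```sql
-- Cached adhoc plans with their reuse counts, most-reused first
SELECT cp.usecounts,
       cp.objtype,
       cp.size_in_bytes,
       st.text AS query_text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE cp.objtype = 'Adhoc'
ORDER BY cp.usecounts DESC;
```

A usecounts of 1 on most entries would mean the plans are being compiled but never reused.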
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 5, 2009 at 3:54 pm
GilaMonster (2/5/2009)
The only time an adhoc plan will be reused is if the SQL matches exactly the one that created the plan. Exactly, meaning down to the white space and the data types of the parameters.
If there's any difference in the where clause, it's a different query and will get a different plan.
Have you tried querying the plan cache to see what's in it?
I suspect it's not compilation that's the problem. Unless you've got hundreds of compiles/sec and there's contention on the plan cache. Is that the case?
I queried my plan cache and there were over 500 entries in it. I don't think there's any real contention on the plan cache.
I think what you are alluding to, though, is that I must have a different plan for each of these examples:
select * from logger2 where severity = 5 AND eventid = 9200 order by id
select * from logger2 where severity = 3 AND eventid = 9303 and category = 5 order by id
It doesn't seem to matter that I add or remove columns, or change the values, it's still very fast - as long as I don't free the cache. But I must have a plan in cache for both of these examples.
And if you look at the app, there's a text box beside the 'where' checkbox into which they can type anything. So there would be almost no way for me to come up with a huge set of 'all-inclusive' statements covering every possible combination, correct? I was hoping I could write a big inclusive set of statements that would be put into cache, where I keep adding columns, like:
select * from logger2 where id = '1'
select * from logger2 where id = '1' and severity = 2
select * from logger2 where id = '1' and severity = 2 and username = 'jim'
select * from logger2 where id = '1' and severity = 2 and username = 'jim' and timestamp = '02/03/2008'
I hope the pic of the app helps illustrate my dilemma.
They are saying indexes should be kept at a minimum because of the inserts.
Is there some other way altogether that could solve this problem? Forced parameterization?
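From what I've read, forced parameterization is a database-level setting; something like this (the database name here is just a placeholder for mine):

```sql
-- Make SQL Server parameterise literals in adhoc queries automatically
-- (database name is hypothetical)
ALTER DATABASE LoggerDB SET PARAMETERIZATION FORCED;
```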
February 5, 2009 at 5:05 pm
DBCC DROPCLEANBUFFERS should only get run as a test. That removes all the data from memory. Based on the queries you've shown, compiles aren't your issue. You're looking at memory problems and probably contention. What you need to do is monitor wait states on the machine and get a server-side trace set up. Then, when you see the waits going up, you can correlate it to the procs that are running at the moment to determine the cause of the timeout.
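As a starting point, a query along these lines against sys.dm_os_wait_stats will show the cumulative top waits; the excluded wait types are just a few common benign ones, not an exhaustive list:

```sql
-- Top cumulative waits since the last restart (or since the stats were cleared)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP',
                        'SQLTRACE_BUFFER_FLUSH', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```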
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 5, 2009 at 5:20 pm
Grant Fritchey (2/5/2009)
DBCC DROPCLEANBUFFERS should only get run as a test. That removes all the data from memory. Based on the queries you've shown, compiles aren't your issue. You're looking at memory problems and probably contention. What you need to do is monitor wait states on the machine and get a server-side trace set up. Then, when you see the waits going up, you can correlate it to the procs that are running at the moment to determine the cause of the timeout.
There are no procs; this is a dedicated, one-table DB. The only thing it's used for is the ad-hoc queries from the app in the pic.
So there's nothing for me to track down. I know it's the one insert statement that is causing the waits.
February 5, 2009 at 5:25 pm
Where you see proc, substitute query. Sorry I mistyped.
And yes, you do need to determine where the slowdown is occurring, because it's not from compile time on such simple execution plans.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 5, 2009 at 5:28 pm
Do you have WITH RECOMPILE at the top of the proc?
Edit: Sorry, didn't register that you're using adhoc SQL generated in the app. If that is the case, perhaps consider using sp_create_plan_guide, which should work quite well with your unknown WHERE clause. BTW, why aren't you using a proc?
Max
February 5, 2009 at 5:46 pm
Max (2/5/2009)
Edit: Sorry, didn't register that you're using adhoc SQL generated in the app. If that is the case, perhaps consider using sp_create_plan_guide, which should work quite well with your unknown WHERE clause. BTW, why aren't you using a proc?
They say I cannot use a proc because, if you look at the app, there's a checkbox for the 'where' and they can type in anything. So the SQL statement has to be generated on the fly.
Looking into sp_create_plan_guide now.
~Thanks
February 5, 2009 at 5:52 pm
Grant Fritchey (2/5/2009)
And yes, you do need to determine where the slowdown is occurring, because it's not from compile time on such simple execution plans.
Grant, did you see the app? Are the plans really that simple, even with the ad-hoc part? If they are in the cache they run quickly, even when the inserts are running.
There's only this one insert statement, repeated 1000 times a sec:
ALTER PROCEDURE [dbo].[sp_InsertLogMessage2]
(
    @Severity        int,
    @Category        varchar(24),
    @EventId         int,
    @UserName        varchar(24),
    @KeyId           varchar(36),
    @Param1          int,
    @Param2          int,
    @Param3          varchar(64),
    @Message         varchar(256),
    @Location        varchar(256),
    @MachineName     varchar(48),
    @ApplicationName varchar(24),
    @AppDomainName   varchar(64),
    @Identity        varchar(48),
    @DetailMessage   varbinary(MAX)
)
AS
INSERT INTO Logger2
    ([TimeStamp], Severity, Category, EventId, UserName, KeyId, Param1, Param2, Param3,
     [Message], Location, MachineName, ApplicationName, AppDomainName, [Identity],
     DetailMessage, DetailMessageSize)
VALUES
    (GETDATE(), @Severity, @Category, @EventId, @UserName, @KeyId, @Param1, @Param2, @Param3,
     @Message, @Location, @MachineName, @ApplicationName, @AppDomainName, @Identity,
     @DetailMessage, DATALENGTH(@DetailMessage))
Isn't that what's causing the slowdowns?
All the selects are generated ad-hoc dynamically from the app.
February 5, 2009 at 6:46 pm
Max (2/5/2009)
Sorry, didn't register that you're using adhoc SQL generated in the app. If that is the case, perhaps consider using sp_create_plan_guide, which should work quite well with your unknown WHERE clause. BTW, why aren't you using a proc?
Thanks Max,
I looked into using plan guides. I have a question..
Looking at the app & thinking out loud, how would I make a plan guide for all those different options?
Can you give me a quick example?
February 5, 2009 at 11:55 pm
krypto69 (2/5/2009)
I looked into using plan guides.
Don't. Plan guides are a last resort fix for a query that's always generating a bad plan and that cannot be changed. They're a form of query hint and as such should only be used when you know completely what they're going to do and you are 100% sure that you know better than the optimiser.
What you need to do is work out why the queries are sometimes slow.
Is there blocking?
Is the query waiting and, if so, for what?
Are the IO or CPU statistics different between the fast executions and the slow ones? (Use SQL Profiler)
Are the plans different between the fast executions and the slow ones? (Use SQL Profiler)
How many compiles or recompiles are you getting per sec? (Use perfmon)
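For the blocking and waiting questions, a query along these lines against sys.dm_exec_requests shows what each active request is waiting on right now and whether another session is blocking it (the session_id filter is just a rough way to skip most system sessions):

```sql
-- What each active user request is currently waiting on, and who is blocking it
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       st.text AS query_text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS st
WHERE r.session_id > 50;   -- rough filter for user sessions
```

A non-zero blocking_session_id would point straight at blocking as the cause of the timeouts.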
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 6, 2009 at 4:08 am
GilaMonster is absolutely correct; performance needs to be tweaked, and the proper analysis of the query execution must be done first. Hints are always a last resort. Non-SQL considerations such as load, network, memory, disk IO and fragmentation should also be ruled out.
However, given that you are unable to create indexes and, according to your previous comments, the query isn't being cached, you may have to delve into the more obscure forms of query optimization. From what I can gather there aren't any joins, which is a good thing and would suggest that hints may not be required, but you are still receiving ad-hoc statements (albeit simple ones) which the optimizer may be struggling to deal with.
My first question was: why aren't the parameters being passed to a proc, even if the where clause is ad hoc? The input variables that the app generates (including the on-the-fly where input) could be passed to a proc, where you would do your string manipulation instead of in the application. This approach would let you execute and analyse the proc itself and would make tweaking easier. It's difficult to tune a database where you can't alter the indexes or stats and where you're given a relatively unknown query to optimize; however, that situation can arise depending on whether the IT shop is dominated by application or server logic.
I would expect that a proc would be required to:
a. Receive the variables
b. Do any required string manipulation
c. Build the final SQL statement
d. Execute it
Once that is done, it would be a matter of optimizing the proc to include any necessary hints.
This would be my approach but it may not apply to your circumstances.
Max
February 6, 2009 at 5:34 am
Yeah, I saw the app. I do understand what you're describing, but I'm 100% with Gail on this. You need to gather statistics to understand exactly where the slowdowns are occurring, first. Once you know (not suspect, not guess, not assume, but absolutely know) where and how the slowdowns are occurring, then you can address the issue directly.
As an aside, too many indexes will slow down inserts, but the right indexes, especially the right clustered index, will speed up insertions. Can you post the table structure as well as an execution plan from the insert? An execution plan or two from the queries might help too.
On a different point, you can use stored procedures with ad hoc queries. Looking at the app, you could create three procs: the first a wrapper proc that accepts all the input from the app and determines which of the other two procs to call. The first option would be a proc that does not include the dynamic WHERE clause but instead is a traditional structured procedure. The second would be some flavor of sp_executesql that allows you to get some degree of plan reuse. That's the approach I'd take.
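A rough sketch of that wrapper pattern; all proc, table and parameter names here are made up for illustration, and the free-text WHERE branch would still need to be guarded against SQL injection:

```sql
-- Hypothetical wrapper: route to a fixed proc when there is no free-text
-- WHERE clause, otherwise fall back to parameterised dynamic SQL.
CREATE PROCEDURE dbo.SearchLogger2
    @Severity   int           = NULL,
    @EventId    int           = NULL,
    @ExtraWhere nvarchar(500) = NULL   -- free-text filter typed into the app
AS
BEGIN
    IF @ExtraWhere IS NULL
    BEGIN
        -- Structured path: plain proc, full plan reuse
        EXEC dbo.SearchLogger2_Fixed @Severity, @EventId;
        RETURN;
    END;

    -- Dynamic path: the fixed filters stay parameterised, so executions
    -- that share the same @ExtraWhere text can still reuse a plan.
    -- NOTE: concatenating @ExtraWhere is an injection risk; validate it first.
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT * FROM dbo.Logger2
                 WHERE (@Severity IS NULL OR Severity = @Severity)
                   AND (@EventId  IS NULL OR EventId  = @EventId)
                   AND (' + @ExtraWhere + N')
                 ORDER BY id';

    EXEC sp_executesql @sql,
         N'@Severity int, @EventId int',
         @Severity, @EventId;
END;
```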
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
February 6, 2009 at 7:38 am
Grant Fritchey (2/6/2009)
As an aside, too many indexes will slow down inserts, but the right indexes, especially the right clustered index, will speed up insertions.
Indeed, and by too many indexes, we're not talking 2 or 3 here. It'll take more than that to cause a serious slowdown of the inserts. If one index slows down your inserts badly, then you have bigger problems than a badly-performing select.
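For example, a single narrow nonclustered index on the most commonly filtered columns is usually cheap to maintain relative to this kind of insert rate; the column choice here is purely illustrative:

```sql
-- Illustrative only: a narrow index supporting the common Severity/EventId filters
CREATE NONCLUSTERED INDEX IX_Logger2_Severity_EventId
    ON dbo.Logger2 (Severity, EventId);
```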
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
February 6, 2009 at 7:51 am
GilaMonster (2/6/2009)
Indeed, and by too many indexes, we're not talking 2 or 3 here. It'll take more than that to cause a serious slowdown of the inserts. If one index slows down your inserts badly, then you have bigger problems than a badly-performing select.
Absolutely, too true. I should have defined that term more appropriately.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning