September 26, 2011 at 9:49 am
If, for example, there is a large batch of statements like
select * from table1 where cola = @variable
delete from table1 where colb = @variable
It seems that DTA cannot tune a workload like this captured from Profiler... am I missing something, or does this seem like a huge functionality problem?
September 26, 2011 at 9:52 am
NJ-DBA (9/26/2011)
If, for example, there is a large batch of statements like
select * from table1 where cola = @variable
delete from table1 where colb = @variable
It seems that DTA cannot tune a workload like this captured from Profiler... am I missing something, or does this seem like a huge functionality problem?
Just for example, I'm trying to tune a large batch of deletes provided from a vendor. Profiler, using the "tuning" template, produces a representative workload, but 90% of it "contains errors" since it uses variables as shown above... user error?
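For what it's worth, DTA reports "contains errors" on those statements because a batch that references @variable without declaring it won't compile on its own. A minimal sketch of the same pattern in a form DTA can parse; the data type and value here are made-up placeholders, not from the actual workload:
-- Declared form of the captured pattern (type and value are placeholders)
DECLARE @variable int;
SET @variable = 1;
SELECT * FROM table1 WHERE cola = @variable;
DELETE FROM table1 WHERE colb = @variable;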
September 26, 2011 at 9:52 am
This seems very oversimplified, hence we can't help ;-).
There's very little to do here to solve the problem. Add the index on that field.
Assuming there are enough different values, the index will help. If not, a scan is the only good plan.
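For the pattern posted above, that would be something along these lines. This is only a sketch reusing the example names, and it assumes the dbo schema and that cola/colb aren't already the leading column of an existing index:
-- Nonclustered indexes on the predicate columns from the posted example
CREATE NONCLUSTERED INDEX IX_table1_cola ON dbo.table1 (cola);
CREATE NONCLUSTERED INDEX IX_table1_colb ON dbo.table1 (colb);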
September 26, 2011 at 9:53 am
NJ-DBA (9/26/2011)
NJ-DBA (9/26/2011)
If, for example, there is a large batch of statements like
select * from table1 where cola = @variable
delete from table1 where colb = @variable
It seems that DTA cannot tune a workload like this captured from Profiler... am I missing something, or does this seem like a huge functionality problem?
Just for example, I'm trying to tune a large batch of deletes provided from a vendor. Profiler, using the "tuning" template, produces a representative workload, but 90% of it "contains errors" since it uses variables as shown above... user error?
Do you see anything in the missing index suggestions when looking at the plan in SSMS?
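The same suggestions also surface in the missing-index DMVs if you'd rather check server-wide than plan by plan. A rough sketch (the TOP count and the ordering expression are arbitrary choices):
-- Missing-index suggestions accumulated since the last restart; treat as hints, not commands
SELECT TOP (20)
       d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
     ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
     ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;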
September 26, 2011 at 10:00 am
Eh, I did oversimplify. The problem is that there is a large variety of statements... various deletes and selects which are formatted as above. I wanted to use Profiler and DTA to look at all of them and say "hey, here's one which could be improved with an index"... so that I don't have to look at all of those statements manually and figure out which have appropriate indexes and which do not.
Basically I want to tune an entire process instead of statement by statement... which I thought a workload captured in Profiler and submitted to DTA was good for... maybe better to look at the DMVs?
September 26, 2011 at 10:04 am
NJ-DBA (9/26/2011)
Eh, I did oversimplify. The problem is that there is a large variety of statements... various deletes and selects which are formatted as above. I wanted to use Profiler and DTA to look at all of them and say "hey, here's one which could be improved with an index"... so that I don't have to look at all of those statements manually and figure out which have appropriate indexes and which do not.
DTA likely will not see these "missing" indexes until the statements have been run enough times without that index. It also could very well be that there are already indexes in place that satisfy these statements. You may want to check the DMVs and query the exec plans though rather than relying on DTA.
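A rough sketch of that kind of check against the plan cache; the TOP count and the sort column are arbitrary, and the offset arithmetic is just the usual pattern for pulling the individual statement text out of the batch:
-- Most expensive cached statements by total elapsed time, with the cached
-- plan attached so any missing-index hints can be inspected directly
SELECT TOP (25)
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text,
       qs.execution_count,
       qs.total_elapsed_time,
       qs.total_logical_reads,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_elapsed_time DESC;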
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
September 26, 2011 at 10:24 am
There's no secret here to perf tuning.
Find the slowest thing <in a group> and make it run faster. As I said earlier, in Profiler you can include the statement completed events so that you get both the whole batch and also each of the statements, to find the real slow ones.
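In Profiler that means tracing SQL:StmtCompleted alongside SQL:BatchCompleted. If Extended Events are an option instead (SQL Server 2008 or later), a minimal sketch of the equivalent capture looks like this; the session name is made up, and the ring_buffer target is just the simplest one to start with:
-- Capture both batch- and statement-level completions so the slow
-- statements inside a large batch stand out; stop and drop when done
CREATE EVENT SESSION TuneBatch ON SERVER
ADD EVENT sqlserver.sql_batch_completed,
ADD EVENT sqlserver.sql_statement_completed
ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION TuneBatch ON SERVER STATE = START;
-- Read the captured events back out of the ring buffer via
-- sys.dm_xe_session_targets (the payload is XML)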