May 25, 2009 at 2:55 am
Hi,
I have a stored procedure that processes records in the tables. When the SP runs from a job it processes around 10 records/sec, but when I run the same SP in Query Analyzer it processes around 100 records/sec. Can anyone tell me why there is such a big difference in performance for the same SP between the two methods of execution? The job's owner is sa.
I am on SQL Server 2000.
Thanks in advance.
vivek
May 25, 2009 at 5:56 am
I have had a problem similar to this previously. An SP took x amount longer when called through the application than when executed in Management Studio. The only noticeable difference I could see in Profiler was that the connection parameters were not identical for the two connections... when I made them the same, the query performance was equally poor in both... 🙂
In my case it seems that when the connection properties were different in Management Studio, the SP execution used a different execution plan. The underlying cause was out-of-date statistics: once I updated the statistics, both connections used the same optimal execution plan, and the procedure went from taking over two minutes to executing in under a second.
I'm not saying that you are having the same issue, but it may provide a starting point.
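If it is useful, this is roughly what I ran; treat it as a sketch rather than a fix, and note that dbo.MyTable is just a placeholder for whatever table your procedure actually hits:

    -- Compare the SET options of the two connections (run this in each one)
    DBCC USEROPTIONS

    -- Refresh the statistics on the table the procedure works against
    -- (dbo.MyTable is a placeholder)
    UPDATE STATISTICS dbo.MyTable WITH FULLSCAN

    -- Or refresh statistics across the whole database
    EXEC sp_updatestats

If the plans still differ after the statistics are fresh, the DBCC USEROPTIONS output from the two connections is the next place I would look.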
Gethyn Ellis
www.gethynellis.com
May 26, 2009 at 2:07 am
Hi,
When an SP that queries a particular table takes a long time to execute, there can be several reasons:
blocking issues on that query or table,
locking,
out-of-date statistics on the database or table,
a poorly written query with unnecessary joins,
and so on.
Good luck
KingManjunath
May 26, 2009 at 4:04 am
Hi Ellis,
I have checked that the owner of the job is the same as the login user in Query Analyzer, but it made no difference; there is still the same gap in processing time.
The statistics are also up to date.
Any other suggestions? Thanks for your reply.
Manjunath,
I am running the SP in QA and in the job one after the other, so I don't think there is any locking issue on the tables. There are no joins in the queries.
Thanks,
Vivek
May 30, 2009 at 5:12 am
Still waiting for any suggestions.
vivek
May 30, 2009 at 5:49 am
Hi
Can you please check the following:
1. Check the TIME at which the job executes (a quick way to pull this from the job history is sketched below).
2. At that SAME TIME, is any other activity running on the database (i.e. backup / transaction log clearing / defragmentation of tables / statistics updates, etc.)?
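For example, a query along these lines against the job history shows when the job ran and how long each run took, so you can line it up with any maintenance running at the same time (the job name is a placeholder for yours):

    -- Recent runs and durations for the job (job name is a placeholder)
    SELECT j.name, h.run_date, h.run_time, h.run_duration, h.run_status
    FROM msdb.dbo.sysjobhistory h
    JOIN msdb.dbo.sysjobs j ON j.job_id = h.job_id
    WHERE j.name = 'MyProcessingJob'   -- replace with your job's name
      AND h.step_id = 0                -- step_id 0 = overall job outcome
    ORDER BY h.run_date DESC, h.run_time DESC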
Regards
May 30, 2009 at 5:17 pm
vivek (5/30/2009)
Still waiting for any suggestions.
Trace the query.
Compare execution plans, wait events.
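For the Query Analyzer side, something like this gives you the timings, I/O and the actual plan to compare against what the trace shows for the job run (the procedure name and parameter are placeholders):

    -- Capture timing, I/O and the actual execution plan for the QA run
    SET STATISTICS TIME ON
    SET STATISTICS IO ON
    SET STATISTICS PROFILE ON   -- returns the actual plan as additional result rows

    EXEC dbo.MyProcedure @SomeParam = 1   -- placeholder procedure and parameter

    SET STATISTICS PROFILE OFF
    SET STATISTICS IO OFF
    SET STATISTICS TIME OFF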
_____________________________________
Pablo (Paul) Berzukov
Author of Understanding Database Administration available at Amazon and other bookstores.
Disclaimer: Advice is provided to the best of my knowledge, but no implicit or explicit warranties are provided. Since the advisor explicitly encourages testing any and all suggestions in a non-production test environment, the advisor should not be held liable or responsible for any actions taken based on the given advice.
May 31, 2009 at 10:51 pm
Hi,
Good one. I have some questions on this.
Problem Description
================
--Procedure when called from Query Analyzer processes 100 records/sec.
--Procedure when called from a Job processes 10 records/sec.
Questions
==========
--Are we doing this test on the same machine, i.e. the machine from which the job runs and from which we connect through the QA window?
--Are there any other steps in the job (before/after)?
--Did we try to capture the execution plan for the procedure, and if yes, did we see any difference? Can you share it with us?
--Did we notice any wait type when it is called from QA and from the job? If yes, what is it? (A way to check this is sketched below.)
--Can you run Profiler at the back end (during both runs, i.e. QA and the job) and share the output with us?
I wish there were a straightforward answer for this, but there is nothing like that, so we have to go through all these steps.
Let me know in case you have specific questions.
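For the wait type question, a rough way to check is to sample sysprocesses while the procedure is running from QA and again while the job is running, and compare what you see (filter on the relevant SPID if you know it):

    -- Snapshot of wait information for active sessions
    SELECT spid, status, blocked, waittype, lastwaittype, waittime, cmd
    FROM master.dbo.sysprocesses
    WHERE spid > 50              -- skip system processes
      AND status <> 'sleeping'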
Regards
GURSETHI
May 31, 2009 at 11:40 pm
Hi,
This job runs every minute with the same processing time; even while a full backup is running, it takes the same time to execute.
vivek
June 1, 2009 at 5:46 am
Hi GURSETHI (5/31/2009),
Here are the answers -
Questions
==========
--Are we doing this test on the same machine, i.e. the machine from which the job runs and from which we connect through the QA window?
** Yes, both are the same machine.
--Are there any other steps in the Job (before/after)?
** No
--Did we try to capture the execution plan for the procedure, and if yes, did we see any difference? Can you share it with us?
** I captured the execution plan of the SP:
-> 33% Remote Query
-> 11% Table Spool/Eager Spool
-> 3% Table Insert
-> 24% Clustered Index Scan
-> 1% Hash Match
-> 21% Remote Query (update query)
I am actually using a linked server. **
--Did we notice any wait type when it is called from QA and from the job? If yes, what is it?
** I haven't run Profiler. **
--Can you run Profiler at the back end (during both runs, i.e. QA and the job) and share the output with us?
** The server is under heavy load, so I can't use Profiler. **
Regards
vivek
June 7, 2009 at 10:56 pm
Still waiting.....
Regards,
Vivek
June 8, 2009 at 12:18 am
Which SQL Server version and service pack are you on? If you are not on the latest service pack and update, please go ahead and apply them.
>> You can compare the two execution plans and find out why the query is faster from Management Studio.
>> Find out if there is any network-related or connection-related issue.
>> Are you passing the same parameters while testing this? If not, please do that and capture time statistics.
Look into this KB: http://support.microsoft.com/kb/940945
There might be a number of reasons why an SP is slower than the equivalent query:
1. Parameter sniffing; see BOL for more info.
2. The procedure's naming convention.
3. Don't use the prefix "sp_" in its name.
4. The SP was created with the RECOMPILE option.
5. By default, every time a stored procedure is executed, a message is sent from the server to the client indicating the number of rows that were affected by the stored procedure. Set NOCOUNT ON (see the sketch after this list).
6. A change in an input parameter value can cause the SP to recompile.
7. When calling a stored procedure from your application, it is important to call it by its qualified name, such as:
exec dbo.myProcedure
8. Update the statistics.
9. If the SP is called from a different connection each time, that will impact the SP's performance.
10. It may be a network issue.
11. The stored procedure includes both DDL (Data Definition Language) and DML (Data Manipulation Language) statements, and they are interleaved with each other.
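To illustrate points 3, 5 and 7 only, here is a rough sketch with made-up procedure, table and column names; adapt it to your own objects:

    -- Illustration only: procedure, table and column names are made up
    CREATE PROCEDURE dbo.usp_ProcessRecords      -- no "sp_" prefix (point 3)
        @BatchSize int = 100
    AS
    BEGIN
        SET NOCOUNT ON               -- suppress "rows affected" messages (point 5)
        SET ROWCOUNT @BatchSize      -- limit the batch size (works on SQL Server 2000)

        UPDATE dbo.MyTable           -- placeholder table and column
        SET Processed = 1
        WHERE Processed = 0

        SET ROWCOUNT 0               -- reset
    END
    GO

    -- Call it by its qualified name (point 7)
    EXEC dbo.usp_ProcessRecords @BatchSize = 100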
"More Green More Oxygen !! Plant a tree today"
June 8, 2009 at 12:59 am
Still Waiting
Regards,
Vivek
June 8, 2009 at 1:12 am
You can use Profiler to generate the SQL commands for a server-side trace, which will not impact performance as much as a Profiler trace. Since the job runs every minute, the trace should only have to run for 5 to 10 minutes to capture enough information about the processing. I would also capture the execution plan in the trace to see what is going on.
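Roughly, the scripted trace looks like the sketch below; the output file path is a placeholder, and scripting the trace definition out of Profiler is the easiest way to get the exact event and column list you want (including the plan events):

    -- Minimal server-side trace sketch; the output file path is a placeholder
    DECLARE @TraceID int, @maxfilesize bigint, @on bit
    SET @maxfilesize = 50
    SET @on = 1

    EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\sp_perf', @maxfilesize, NULL

    -- Capture text, SPID and duration for RPC and batch completions
    EXEC sp_trace_setevent @TraceID, 10, 1,  @on   -- RPC:Completed, TextData
    EXEC sp_trace_setevent @TraceID, 10, 12, @on   -- RPC:Completed, SPID
    EXEC sp_trace_setevent @TraceID, 10, 13, @on   -- RPC:Completed, Duration
    EXEC sp_trace_setevent @TraceID, 12, 1,  @on   -- SQL:BatchCompleted, TextData
    EXEC sp_trace_setevent @TraceID, 12, 12, @on   -- SQL:BatchCompleted, SPID
    EXEC sp_trace_setevent @TraceID, 12, 13, @on   -- SQL:BatchCompleted, Duration

    EXEC sp_trace_setstatus @TraceID, 1            -- start the trace

    -- After 5 to 10 minutes, stop and close it:
    -- EXEC sp_trace_setstatus @TraceID, 0
    -- EXEC sp_trace_setstatus @TraceID, 2

    -- Then read the captured file
    -- SELECT * FROM ::fn_trace_gettable(N'C:\Traces\sp_perf.trc', DEFAULT)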
June 8, 2009 at 2:05 am
Hello,
Sorry if this doesn't help, but..
IF:
"The server is under heavy load, so I can't use Profiler"
AND:
"24% Clustered Index Scan"
THEN:
You should create an index to remove this scan (a clustered index scan is effectively a table scan) for better performance.
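For example, something along these lines, with entirely made-up table and column names since we have not seen the actual query:

    -- Hypothetical: index the column(s) the query filters on
    -- (dbo.MyTable and Processed are placeholders for the real table and column)
    CREATE NONCLUSTERED INDEX IX_MyTable_Processed
    ON dbo.MyTable (Processed)

Whether this actually removes the scan depends on the query's predicates and how selective the column is, so check the plan again after creating it.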
Regards