April 7, 2011 at 9:09 am
Hi everyone,
I have a SQL Agent job that executes a stored procedure on a recurring 10-second schedule (the stored proc picks up to 1000 records from a table, if any exist, assigns an ID, and inserts them into another table).
The job ran fine until last week, completing in 30 seconds to 1 minute, but now it runs very slowly, taking nearly 1 hour 30 minutes to process 1000 records, although it never fails. When I execute the stored procedure from SSMS, it runs fine in about 1 minute 30 seconds.
SQL Agent is running under the us\sqlexec account and the job runs under the same account; when I looked at the history, the job owner is sa.
I had an issue last weekend on this database: the log file grew until the disk ran out of space, so I truncated the log with TRUNCATE_ONLY followed by a full backup. Other than that, nothing has changed.
A job that reorganizes indexes every weekend also failed that same day due to the log growth, but I don't think that's the problem, since the stored procedure runs fine from SSMS.
I have tried everything I can think of: recreating the job that executes the stored procedure, running sp_recompile 'sp_name', and stopping and restarting the job several times from both SQL Agent and SSMS, but nothing helped.
I also checked for blocked processes on the database, but there are none.
Does anyone have an idea what's going on? Please help.
April 7, 2011 at 11:03 am
Sounds like you have a couple of separate issues.
On the log, is your database in Full recovery? If so, are you running log backups? If your database is in full recovery and you're not running log backups, that would explain it filling up.
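If you're not already doing so, the fix is a regularly scheduled log backup. A minimal sketch, assuming a hypothetical database name and backup path (substitute your own):

```sql
-- Hypothetical names: MyDB and the file path are placeholders.
-- Backing up the log marks its space as reusable, which is what
-- keeps a full-recovery database's log from filling the disk.
-- Schedule this via a SQL Agent job at an interval matching log growth.
BACKUP LOG MyDB
TO DISK = N'D:\Backups\MyDB_log.trn'
WITH CHECKSUM;
```

Note that truncating the log with TRUNCATE_ONLY breaks the log backup chain; a full backup afterward (as the original poster did) restarts it.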
As to the query running slowly, it's really hard to know without seeing the code and the execution plan. What kind of indexes are on the table you're reading from? What kind are on the table you're writing to? Do other processes access the table you're writing to?
There are just too many outstanding variables at this point to even start making good guesses as to what the problem might be.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 7, 2011 at 4:00 pm
Hi,
Thanks for your reply.
Yes, the database is in full recovery, and the log backup job runs successfully every hour.
We have clustered and non-clustered indexes on the tables, and there are no other processes accessing the table.
The stored procedure is too big to post here.
April 7, 2011 at 6:24 pm
CANCER (4/7/2011)
Hi, thanks for your reply.
Yes, the database is in full recovery, and the log backup job runs successfully every hour.
We have clustered and non-clustered indexes on the tables, and there are no other processes accessing the table.
The stored procedure is too big to post here.
Hourly might not be often enough for your log backups.
Knowing there are indexes on the table doesn't tell me whether they are efficient or being used properly. There's no way to know without at least the execution plan.
April 8, 2011 at 1:51 pm
Hi Grant,
The same stored procedure is running on six other servers, and performance on every one of them is very good.
I can't understand why it's acting so strangely on this server. Do you think the index reorg job that failed over the weekend has an effect? But then how is it running fine from Query Analyzer if the indexes are fine?
I am out of options to look into.
April 8, 2011 at 2:12 pm
Hi,
This is what I get when I try to collect the execution plan:
The query has exceeded the maximum number of result sets that can be displayed in the Execution Plan pane. Only the first 250 result sets are displayed in the Execution Plan pane.
April 11, 2011 at 4:56 am
CANCER (4/8/2011)
Hi, this is what I get when I try to collect the execution plan:
The query has exceeded the maximum number of result sets that can be displayed in the Execution Plan pane. Only the first 250 result sets are displayed in the Execution Plan pane.
Sounds like you have a cursor. You won't be able to capture all the execution plans for a cursor when the number exceeds 250, which is what you're seeing. Instead, you'd need to use Profiler or Extended Events to capture them.
If it's only this one server, then I'd check the ANSI settings on the server to see if they are different. Does this server have less memory or fewer CPUs? I'd work on identifying the differences that might lead the query to misbehave only on this server.
Cursors are notorious performance problems.
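As a rough sketch of the Extended Events route, assuming a SQL Server version that exposes the query_post_execution_showplan event (session name, database name, and file path below are all placeholders):

```sql
-- Hypothetical session: captures actual plans for one database only.
-- query_post_execution_showplan is expensive, so filter tightly
-- and drop the session as soon as you have what you need.
CREATE EVENT SESSION CapturePlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
    (WHERE sqlserver.database_name = N'MyDB')   -- placeholder database name
ADD TARGET package0.event_file
    (SET filename = N'D:\XE\CapturePlans.xel'); -- placeholder path
GO
ALTER EVENT SESSION CapturePlans ON SERVER STATE = START;
```

Open the .xel file in SSMS afterward to view the captured plans, then stop and drop the session.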
April 12, 2011 at 7:22 am
Given that your query runs in 90 seconds from SSMS, I am betting you have a suboptimal plan cached that is being used by the Agent job. The sledgehammer approach is to run DBCC FREEPROCCACHE, which will flush ALL cached plans from your server. This could be unfortunate for other activity due to having to recompile lots of plans, but unless you are under significant load you will probably be OK with this action.
Another option is to put OPTION (RECOMPILE) on your statement(s) in the agent job to get a new plan for them each time it is run.
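A sketch of both options; the table and column names in the second statement are placeholders standing in for whatever query the real procedure runs:

```sql
-- Sledgehammer: flush ALL cached plans server-wide.
-- Every query on the instance recompiles on its next run.
DBCC FREEPROCCACHE;

-- Targeted alternative: force a fresh plan for just this statement
-- every time it executes, sidestepping any stale cached plan.
SELECT Id, Payload
FROM dbo.SourceTable            -- placeholder table
WHERE Processed = 0
OPTION (RECOMPILE);
```

OPTION (RECOMPILE) trades a small compile cost per execution for a plan optimized against current values and statistics, which is often a good deal for a batch job like this one.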
Oh, did your SSMS run use hard-coded values by any chance or was it variable driven?
Last thing: if this is a really important process, get a professional on board to get you up and running quickly.
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
April 12, 2011 at 9:43 am
Hello,
I noticed the job is now running perfectly, just as it did two weeks ago.
I haven't changed anything except that the reorganize-indexes job ran over the weekend.
But I am not sure what caused the job, which usually takes 2 minutes to complete, to take 2 hours over the last two weeks.
Anyway, thanks for all your support; I really appreciate your help.
April 12, 2011 at 9:15 pm
If your "reorg" job actually does a rebuild, then that would update stats, which will clear the cache and thus get you new plans that could be optimal for existing data. It could also have just been stale stats leading to a suboptimal plan. Every 2 weeks is often not enough for statistics updates or index defrag operations.
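A sketch of the difference, using a hypothetical table name:

```sql
-- REORGANIZE defragments index pages in place but does NOT
-- update statistics or invalidate cached plans.
ALTER INDEX ALL ON dbo.QueueTable REORGANIZE;  -- placeholder table

-- REBUILD recreates the index, which refreshes its statistics
-- (equivalent to a full scan) and invalidates cached plans
-- that reference the table, forcing fresh compilations.
ALTER INDEX ALL ON dbo.QueueTable REBUILD;

-- Or refresh statistics directly without touching the indexes.
UPDATE STATISTICS dbo.QueueTable WITH FULLSCAN;
```

That last option is the lightest-weight way to get new plans when stale statistics, rather than fragmentation, are the real problem.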