October 11, 2010 at 2:55 pm
Hi,
Lately I've been having to recompile the very same SP because it starts taking a very long time. At first I had to do it about once every six months, then it became once every 15 days, and now it's about twice a week. Is there something I can check or do to make the execution plans more accurate without modifying the SP or adding the WITH RECOMPILE option to it? (There's a technical limitation: this SP can be called 100 times during its process, and adding WITH RECOMPILE added about half an hour to the process... 🙁 )
So far my approach has been a job that runs once a day and recompiles it, but I don't like that approach and I want a long-term solution, if one is available.
Thanks so much for any help you can provide me.
Frank.
October 12, 2010 at 2:21 am
f_ernestog (10/11/2010)
Hi, Lately I've been having to recompile the very same SP because it starts taking a very long time. At first I had to do it about once every six months, then it became once every 15 days, and now it's about twice a week. Is there something I can check or do to make the execution plans more accurate without modifying the SP or adding the WITH RECOMPILE option to it? (There's a technical limitation: this SP can be called 100 times during its process, and adding WITH RECOMPILE added about half an hour to the process... 🙁 )
So far my approach has been a job that runs once a day and recompiles it, but I don't like that approach and I want a long-term solution, if one is available.
Thanks so much for any help you can provide me.
I think you need to study the execution plan, and see how fragmented the related tables are.
-------Bhuvnesh----------
I work only to learn Sql Server...though my company pays me for getting their stuff done;-)
October 12, 2010 at 2:46 am
Frank
If the procedure is being called 100 times but does not use the most appropriate execution plan every time, could this be because you are supplying different parameters each time? Without the procedure definition, table DDL, sample data and execution plans, it's difficult to help you. But it may be worth considering creating different procs for individual executions, or for groups of executions where similar parameters are passed.
John
October 13, 2010 at 1:28 am
Possibly a case of parameter sniffing? http://sqlinthewild.co.za/index.php/2007/11/27/parameter-sniffing
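A common mitigation for parameter sniffing (not from the thread, just a sketch with hypothetical table, column and procedure names) is to copy the parameters into local variables; the optimizer can't sniff local variables, so it compiles the plan from average density statistics instead of the first caller's values:

```sql
-- Hypothetical procedure illustrating the local-variable workaround
-- for parameter sniffing: @LocalDate/@LocalArea can't be sniffed,
-- so the plan is built from average statistics, not the first call's values.
CREATE PROCEDURE dbo.usp_GetAreaWork (@WorkDate DATETIME, @AreaId INT)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @LocalDate DATETIME, @LocalArea INT;
    SELECT @LocalDate = @WorkDate, @LocalArea = @AreaId;

    SELECT *
    FROM dbo.WorkItems              -- hypothetical table
    WHERE WorkDate = @LocalDate
      AND AreaId   = @LocalArea;
END
```

The trade-off is that the "average" plan may be mediocre for every caller rather than great for some and terrible for others; test against representative inputs before relying on it.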
October 13, 2010 at 8:01 am
1) rapidly out-of-date statistics, perhaps?
2) how about using OPTION (RECOMPILE) on just a few problematic statements in the sproc instead of the entire thing? Usually it is just one or two baddies that cause problems.
3) widely disparate inputs, perhaps? Dynamic SQL can be a boon here (beware SQL injection!).
4) overly complex joins/WHEREs? Perhaps break them down with intermediate temp tables (NOT table variables).
5) have you ensured it isn't a blocking or resource contention issue? Do a file IO stall and wait stats analysis during execution.
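Point 2 in the list above can be sketched like this: a fragment (hypothetical table and parameters) from inside a procedure, where only the one troublesome statement pays the recompile cost while the rest of the procedure keeps its cached plan:

```sql
-- Only this statement is recompiled on each execution, using the
-- actual parameter values; everything else keeps its cached plan.
SELECT w.WorkItemId, w.WorkDate
FROM dbo.WorkItems AS w             -- hypothetical table
WHERE w.AreaId   = @AreaId
  AND w.WorkDate = @WorkDate
OPTION (RECOMPILE);
```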
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 13, 2010 at 9:24 am
Hi All,
Thanks for your replies.
I tried checking table fragmentation; although there is some, it doesn't seem to be the issue.
I can't create different procedures, since this is an iterative procedure that has to do the very same process for X locations, and it's already as atomic as possible.
The input is a date, a WorkCenterId, an AreaId and a UserId. The user has no way to call this directly from the app; the app calls it based on the result of another SP. The app saves the areas into a list and then iterates over it, calling this SP once per result, so the number of executions can go from one to 30.
Parameter sniffing is not a bad idea; I will give it a shot. I will also check the statistics to make sure they are as accurate as possible (we recreate all the statistics on a weekly basis, during maintenance).
I doubt it's overly complex joins, because it works some of the time...
I will try the hint to recompile just the problematic part of it; that might work fine.
Kevin - How do I check the file IO?
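For checking how fresh the statistics are, a quick sketch against the standard catalog views (the table name here is hypothetical) uses STATS_DATE:

```sql
-- Show when each statistics object on a table was last updated.
SELECT s.name AS StatName,
       STATS_DATE(s.[object_id], s.stats_id) AS LastUpdated
FROM sys.stats AS s
WHERE s.[object_id] = OBJECT_ID(N'dbo.WorkItems');  -- hypothetical table

-- Refresh them immediately if they look stale:
UPDATE STATISTICS dbo.WorkItems WITH FULLSCAN;
```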
Another piece of info: this all worked fine before we set up merge replication.
Thanks for your help. 🙂
Frank.
October 13, 2010 at 12:32 pm
I'm kind of with John on this one. If recompiling takes that long for just 100 or even 1,000 iterations, then this must be a monster of a procedure. Just breaking it down into several sub-procedures based on intermediate results could give you much better optimization of each individual procedure based on its input parameters, possibly bypassing the tendency towards parameter sniffing. It would probably also be easier to maintain.
Todd Fifield
October 13, 2010 at 3:52 pm
Since you mentioned something changed (merge replication), it is quite possible that blocking or resource contention is at fault (although I still bet some plan caching/parameter sniffing is to blame as well). Wait stats and blocking analysis are important to perform in this case.
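For the wait stats side of that analysis, a minimal differential snapshot (a sketch, not part of the original post) captures the waits before and after an interval and reports the difference:

```sql
-- Snapshot wait stats, wait, then diff against a second read to see
-- which wait types accumulated during the interval.
SELECT wait_type, wait_time_ms, waiting_tasks_count
INTO #waits_before
FROM sys.dm_os_wait_stats;

WAITFOR DELAY '00:05:00';

SELECT cur.wait_type,
       cur.wait_time_ms - b.wait_time_ms               AS wait_ms_delta,
       cur.waiting_tasks_count - b.waiting_tasks_count AS tasks_delta
FROM sys.dm_os_wait_stats AS cur
JOIN #waits_before AS b
  ON b.wait_type = cur.wait_type
WHERE cur.wait_time_ms - b.wait_time_ms > 0
ORDER BY wait_ms_delta DESC;

DROP TABLE #waits_before;
```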
Here is what I use for fileIO stall analysis:
exec [gather_file_stats_2005] 1
go
waitfor delay '00:05:00'
go
exec [gather_file_stats_2005] 0
go
exec [report_file_stats_2005]
go
IF EXISTS (SELECT *
FROM sys.objects
WHERE [object_id] = OBJECT_ID(N'[dbo].[gather_file_stats_2005]')
AND OBJECTPROPERTY([object_id], N'IsProcedure') = 1)
BEGIN
DROP PROCEDURE [dbo].[gather_file_stats_2005] ;
END
go
CREATE PROCEDURE [dbo].[gather_file_stats_2005] (@Clear INT = 0)
AS
SET NOCOUNT ON ;
DECLARE @dt DATETIME ;
SET @dt = GETDATE() ;
IF OBJECT_ID(N'[dbo].[file_stats]',N'U') IS NULL
CREATE TABLE [dbo].[file_stats](
[database_id] [smallint] NOT NULL,
[file_id] [smallint] NOT NULL,
[num_of_reads] [bigint] NOT NULL,
[num_of_bytes_read] [bigint] NOT NULL,
[io_stall_read_ms] [bigint] NOT NULL,
[num_of_writes] [bigint] NOT NULL,
[num_of_bytes_written] [bigint] NOT NULL,
[io_stall_write_ms] [bigint] NOT NULL,
[io_stall] [bigint] NOT NULL,
[size_on_disk_bytes] [bigint] NOT NULL,
[capture_time] [datetime] NOT NULL);
-- If @Clear = 1 then empty the table first
IF @Clear = 1
BEGIN
TRUNCATE TABLE [dbo].[file_stats] ;
END
INSERT INTO [dbo].[file_stats]
([database_id]
,[file_id]
,[num_of_reads]
,[num_of_bytes_read]
,[io_stall_read_ms]
,[num_of_writes]
,[num_of_bytes_written]
,[io_stall_write_ms]
,[io_stall]
,[size_on_disk_bytes]
,[capture_time])
SELECT [database_id]
,[file_id]
,[num_of_reads]
,[num_of_bytes_read]
,[io_stall_read_ms]
,[num_of_writes]
,[num_of_bytes_written]
,[io_stall_write_ms]
,[io_stall]
,[size_on_disk_bytes]
,@dt
FROM [sys].dm_io_virtual_file_stats(NULL,NULL) ;
GO
IF EXISTS (SELECT * FROM sys.objects WHERE [object_id] =
OBJECT_ID(N'[dbo].[report_file_stats_2005]') and OBJECTPROPERTY([object_id],
N'IsProcedure') = 1)
DROP PROCEDURE [dbo].[report_file_stats_2005] ;
GO
CREATE PROCEDURE [dbo].[report_file_stats_2005]
( @EndTime DATETIME = NULL    -- date & time of the last sample to use
, @BeginTime DATETIME = NULL  -- date & time of the first sample to use
)
AS
SET NOCOUNT ON ;
IF OBJECT_ID( N'[dbo].[file_stats]',N'U') IS NULL
BEGIN
RAISERROR('Error [dbo].[file_stats] table does not exist', 16, 1) WITH NOWAIT ;
RETURN ;
END
DECLARE @file_stats TABLE (
[database_id] [smallint] NOT NULL,
[file_id] [smallint] NOT NULL,
[num_of_reads] [bigint] NOT NULL,
[num_of_bytes_read] [bigint] NOT NULL,
[io_stall_read_ms] [bigint] NOT NULL,
[num_of_writes] [bigint] NOT NULL,
[num_of_bytes_written] [bigint] NOT NULL,
[io_stall_write_ms] [bigint] NOT NULL,
[io_stall] [bigint] NOT NULL,
[size_on_disk_bytes] [bigint] NOT NULL,
[capture_time] [datetime] NOT NULL);
-- If no time was specified then use the latest sample minus the first sample
IF @BeginTime IS NULL
BEGIN
SET @BeginTime = (SELECT MIN([capture_time]) FROM [dbo].[file_stats]) ;
END
ELSE
BEGIN
-- If the time was not specified exactly find the closest one
IF NOT EXISTS(SELECT * FROM [dbo].[file_stats] WHERE [capture_time] = @BeginTime)
BEGIN
DECLARE @BT DATETIME ;
SET @BT = @BeginTime ;
SET @BeginTime = (SELECT MIN([capture_time]) FROM [dbo].[file_stats] WHERE [capture_time] >= @BT);
IF @BeginTime IS NULL
BEGIN
SET @BeginTime = (SELECT MAX([capture_time]) FROM [dbo].[file_stats] WHERE [capture_time] <= @BT);
END
END
END
IF @EndTime IS NULL
SET @EndTime = (SELECT MAX([capture_time]) FROM [dbo].[file_stats]) ;
ELSE
BEGIN
-- If the time was not specified exactly find the closest one
IF NOT EXISTS(SELECT * FROM [dbo].[file_stats] WHERE [capture_time] = @EndTime)
BEGIN
DECLARE @et DATETIME ;
SET @et = @EndTime ;
SET @EndTime = (SELECT MIN([capture_time]) FROM [dbo].[file_stats] WHERE [capture_time] >= @et) ;
IF @EndTime IS NULL
BEGIN
SET @EndTime = (SELECT MAX([capture_time]) FROM [dbo].[file_stats] WHERE [capture_time] <= @et) ;
END
END
END
INSERT INTO @file_stats
([database_id],[file_id],[num_of_reads],[num_of_bytes_read],[io_stall_read_ms]
,[num_of_writes],[num_of_bytes_written],[io_stall_write_ms]
,[io_stall],[size_on_disk_bytes],[capture_time])
SELECT [database_id],[file_id],[num_of_reads],[num_of_bytes_read],[io_stall_read_ms]
,[num_of_writes],[num_of_bytes_written],[io_stall_write_ms]
,[io_stall],[size_on_disk_bytes],[capture_time]
FROM [dbo].[file_stats]
WHERE [capture_time] = @EndTime ;
IF @@ROWCOUNT = 0
BEGIN
RAISERROR('Error, there are no file stats samples for the specified DateTime', 16, 1) WITH NOWAIT ;
RETURN ;
END
-- Subtract the starting numbers from the end ones to find the difference for that time period
UPDATE fs
SET fs.[num_of_reads] = (fs.[num_of_reads] - a.[num_of_reads])
, fs.[num_of_bytes_read] = (fs.[num_of_bytes_read] - a.[num_of_bytes_read])
, fs.[io_stall_read_ms] = (fs.[io_stall_read_ms] - a.[io_stall_read_ms])
, fs.[num_of_writes] = (fs.[num_of_writes] - a.[num_of_writes])
, fs.[num_of_bytes_written] = (fs.[num_of_bytes_written] - a.[num_of_bytes_written])
, fs.[io_stall_write_ms] = (fs.[io_stall_write_ms] - a.[io_stall_write_ms])
, fs.[io_stall] = (fs.[io_stall] - a.[io_stall])
FROM @file_stats AS fs
INNER JOIN (SELECT b.[database_id],b.[file_id],b.[num_of_reads],b.[num_of_bytes_read],b.[io_stall_read_ms]
,b.[num_of_writes],b.[num_of_bytes_written],b.[io_stall_write_ms],b.[io_stall]
FROM [dbo].[file_stats] AS b
WHERE b.[capture_time] = @BeginTime) AS a
ON (fs.[database_id] = a.[database_id] AND fs.[file_id] = a.[file_id]) ;
SELECT CONVERT(varchar(50),@BeginTime,120) AS [Start Time]
,CONVERT(varchar(50),@EndTime,120) AS [End Time]
,CONVERT(varchar(50),@EndTime - @BeginTime,108) AS [Duration (hh:mm:ss)];
SELECT fs.[database_id] AS [Database ID], fs.[file_id] AS [File ID]
,fs.[num_of_reads] AS [NumberReads]
,CONVERT(VARCHAR(20),CAST((fs.[num_of_bytes_read] / 1048576.0) AS MONEY),1) AS [MBs Read]
,fs.[io_stall_read_ms] AS [IoStallReadMS]
,CASE WHEN (fs.[num_of_reads]) <> 0 THEN (fs.[io_stall_read_ms]) / (fs.[num_of_reads]) ELSE 0 END AS AvgStallPerRead
,fs.[num_of_writes] AS [NumberWrites]
,CONVERT(VARCHAR(20),CAST((fs.[num_of_bytes_written] / 1048576.0) AS MONEY),1) AS [MBs Written]
,fs.[io_stall_write_ms] AS [IoStallWriteMS]
,CASE WHEN (fs.[num_of_writes]) <> 0 THEN (fs.[io_stall_write_ms]) / (fs.[num_of_writes]) ELSE 0 END AS AvgStallPerWrite
,fs.[io_stall] AS [IoStallMS]
,CONVERT(VARCHAR(20),CAST((fs.[size_on_disk_bytes] / 1048576.0) AS MONEY),1) AS [MBsOnDisk]
,(SELECT c.[name] FROM [master].[sys].[databases] AS c WHERE c.[database_id] = fs.[database_id]) AS [DB Name]
,(SELECT RIGHT(d.[physical_name],CHARINDEX('\',REVERSE(d.[physical_name]))-1)
FROM [master].[sys].[master_files] AS d
WHERE d.[file_id] = fs.[file_id] AND d.[database_id] = fs.[database_id]) AS [File Name]
,fs.[capture_time] AS [Last Sample]
FROM @file_stats AS fs
ORDER BY fs.[database_id], fs.[file_id] ;
GO
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 14, 2010 at 8:23 am
Thanks Kevin, I will try that.
I tried recompiling just the part of the SP that takes the longest, but it considerably decreased the performance of the whole process, so I will give this a shot instead. I will also look into parameter sniffing to see whether addressing it improves things. As I've stated before, I can't cut this SP down because it's already as atomic as it can be :-/ trust me, that would have been my first move 🙂
Thanks all for your help! I really appreciate it and I have learned a lot so far!
Frank.