January 5, 2019 at 8:56 pm
Question: on my production server I noticed the update stats job had been failing for a few days due to deadlocks, and recently we faced slowness on the same DB where the update stats job failed. How important is it to monitor the update stats job, so I can put an alerting mechanism in place in case it fails? Thanks!
January 6, 2019 at 12:14 am
Change the job timing to a window where you have low activity.
Enable trace flag 1222 globally (with -1) to gather deadlock information in the error log to review later.
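As a minimal sketch (standard DBCC syntax, nothing specific to this job), enabling the flag globally from a query window looks like this; the -T1222 startup parameter is the alternative if it needs to survive a restart:
-- Report deadlock details to the SQL Server error log for all sessions
DBCC TRACEON (1222, -1);
-- Confirm the flag is active
DBCC TRACESTATUS (1222);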
Muthukkumaran Kaliyamoorthy
https://www.sqlserverblogforum.com/
January 6, 2019 at 2:13 pm
RGP - Saturday, January 5, 2019 8:56 PM wrote: (quoted in full above)
Obviously, it's important because it caused you a problem but, heh... you don't have to monitor a thing. The failures already do that for you. 😉
Seriously, though, all you need to do is have your "Morning Job Report" report the failure. If you've not built one of those, you should. 😉 Either that or have the job jump to a "failure" step to report the failure by email.
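For the "failure" step approach, a minimal sketch, assuming Database Mail is configured and using placeholder profile/recipient names: point each real step's "On failure" action at a final notification step that does something like this:
-- Hypothetical failure-notification job step; profile and recipient are placeholders
EXEC msdb.dbo.sp_send_dbmail
@profile_name = 'DBA Mail'
, @recipients = 'dba.team@example.com'
, @subject = 'Update stats job failed'
, @body = 'The update stats job failed. Check the job history and the SQL Server error log for deadlock details.';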
Bottom line is... IMHO, if a job isn't important enough to monitor/report failures for, the job isn't important enough to be a job. 😉 The urgency of the failure notification may vary depending on the importance of a job, but every job should at least have a failure notification method if it doesn't show up on some all-encompassing morning report. The one I've built for myself also shows "stale" jobs that haven't been run in a long time and new/changed jobs, as well as several metrics such as the number of successful runs, the number of failed runs, the number of jobs that were stopped, durations, whether or not the job and any related schedule is enabled, etc, etc.
As a bit of a sidebar, it all comes out in a nicely formatted grid and is color coded and it's all done using only T-SQL.
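As a rough sketch of the kind of query a report like that can be built on (this isn't the actual report described above, just the standard msdb job history tables), something like this lists the jobs whose overall outcome failed since yesterday:
-- Failed job outcomes (step_id = 0 is the overall job result) since yesterday
SELECT j.name AS JobName
, h.run_date -- int, YYYYMMDD
, h.run_time -- int, HHMMSS
, h.message
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobhistory h ON h.job_id = j.job_id
WHERE h.step_id = 0
AND h.run_status = 0 -- 0 = Failed
AND h.run_date >= CONVERT(int, CONVERT(char(8), DATEADD(DAY, -1, GETDATE()), 112))
ORDER BY j.name, h.run_date, h.run_time;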
BTW... just in case you don't know what Trace Flag 1222 is, it captures important extra information in the SQL Server error log that can help you troubleshoot deadlocks a bit more. Here's a link that summarizes what it does.
https://www.mssqltips.com/sqlservertip/2130/finding-sql-server-deadlocks-using-trace-flag-1222/
--Jeff Moden
Change is inevitable... Change for the better is not.
January 6, 2019 at 2:24 pm
p.s. What code are you using to identify when the stats need to be rebuilt, and what code are you using to do the actual rebuilds? By the same token, when are you doing index maintenance and what code are you using for that? I ask because a lot of people end up shooting themselves squarely in the face with those two subjects.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 6, 2019 at 5:28 pm
I have a SQL Agent job that runs update stats every day, actually every 5 minutes, for a database that is a SQL Server subscription database. This DB is mostly used for reading. When I noticed slowness, I checked the job, which was failing with a deadlock error, so I decided to re-run the job a few more times and it finally succeeded.
I don't have a query to check when stats need to be updated.
I will post the rebuild index script soon; it is run once every day.
January 7, 2019 at 4:15 am
RGP - Sunday, January 6, 2019 5:28 PM wrote: (quoted in full above)
Running stats updates every 5 minutes with no apparent filter on what actually needs to have stats updated could be a pretty serious problem all by itself. While I practice a bit of overkill on stats rebuilds myself, I don't rebuild stats on things that haven't suffered a change (0 on the rows modified) and, for those that have, I don't rebuild the stats (I DO use a FULLSCAN, which is overkill in a lot of places) unless they've been modified and I haven't rebuilt the stats in at least 5 days.
Other folks have more complicated methods that can greatly reduce the overkill on stats rebuilds. Of course, that's what originally got our systems in trouble. Someone (before my time at the company) downloaded some code that had a pretty good algorithm for which stats needed rebuilding (it even looked at table usage) but, hidden in the rubble of the code, were a couple of lines that added stats to every bloody column if the columns didn't already have stats on them. Because of other legacy mistakes, we also have some nasty wide tables with 140 columns and millions of rows, and other narrower tables that are approaching the billion-row level. It took me a long time to come up with a method (using Trace Flag 8666) to auto-magically determine which stats weren't actually ever used so I could review them and drop them. So I just keep things simple when it comes to rebuilding stats now.
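As a rough illustration of that kind of filter (this is not the code described above, just a sketch assuming SQL Server 2012 or later, where sys.dm_db_stats_properties exposes the modification counter and last update date), run in the target database:
-- Stats that have row modifications and haven't been updated in the last 5 days
SELECT OBJECT_SCHEMA_NAME(s.object_id) AS SchemaName
, OBJECT_NAME(s.object_id) AS TableName
, s.name AS StatName
, sp.modification_counter
, sp.last_updated
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
AND sp.modification_counter > 0
AND (sp.last_updated IS NULL OR sp.last_updated < DATEADD(DAY, -5, GETDATE()))
ORDER BY SchemaName, TableName, StatName;
-- For each row returned, the actual update (with the FULLSCAN mentioned above) would be along the lines of
-- UPDATE STATISTICS [SchemaName].[TableName] ([StatName]) WITH FULLSCAN;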
--Jeff Moden
Change is inevitable... Change for the better is not.
January 7, 2019 at 8:32 am
Jeff Moden - Monday, January 7, 2019 4:15 AM wrote: (quoted in full above)
Okay, I see. Here is the rebuild index script (this is inside a SQL Agent job that runs daily; the actual stored procedure is also below):
DECLARE @Database VARCHAR(255)
DECLARE DatabaseCursor CURSOR FOR
SELECT name
FROM sys.databases
WHERE name not in ('model','tempdb','master','msdb')
AND state = 0 -- Online Databases Only
AND source_database_id IS NULL --Not a database snapshot
AND is_in_standby = 0
OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @Database
WHILE @@FETCH_STATUS = 0
BEGIN
PRINT @Database
EXEC master.dbo.usp_RebuildIndexes
@maxfrag=30.0,
@maxdensity=90.0,
@online = 'OFF', --Remove for standard edition
@databasename = @Database
FETCH NEXT FROM DatabaseCursor INTO @Database
END
CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor
Actual Rebuild Stored Procedure:
USE [master]
GO
/****** Object: StoredProcedure [dbo].[usp_RebuildIndexes] Script Date: 2019-01-07 11:28:21 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE PROCEDURE [dbo].[usp_RebuildIndexes] (
@maxfrag float = 15.0
, @maxdensity float = 75.0
, @databasename varchar(255)
, @online varchar(3) = 'OFF'
, @fullprocess varchar(3) = 'ON')
AS
/****************************************
--
-- OBJECT NAME: usp_RebuildIndexes
-- CALLED BY: Job SYSTEM - Index Maintenance
-- CREATED BY: Lara Rubbelke
-- CREATED ON: 7/29/2007
-- DESCRIPTION: usp_RebuildIndexes is a process which will assess
the level of fragmentation of all indexes in a
database and defragment those indexes which fall
outside the specified parameters.
-- NOTES:
(See website for details)
http://blogs.digineer.com/blogs/larar/archive/2007/07/30/smart-index-defragmentation-for-an-online-world.aspx
usp_RebuildIndexes accepts the following parameters:
@maxfrag The maximum level of acceptable fragmentation
@maxdensity The minimum level of acceptable density
@databasename The database to assess and reorganize
@online Pass 'ON' to issue the reorganization ONLINE
Pass 'OFF' to issue the normal reorganization
@fullprocess Pass 'ON' to defragment all indexes
Pass 'OFF' to only defragment indexes which can be
processed with the ONLINE option. Note: Some
indexes may not process ONLINE. A partitioned index
or a clustered index with LOB data or a non-clustered
index which includes a column with LOB data will not be
processed ONLINE.
No indexes will be defragmented if the procedure is executed
with @online = 'OFF' and @fullprocess = 'OFF'.
This procedure assumes that a partitioned index will not
be processed ONLINE. If an index is partitioned, the following
options are available:
1. Rebuild or reorganize the entire index ONLINE or offline
2. Reorganize each index partition ONLINE
3. Rebuild each index partition offline
Example:
EXEC usp_RebuildIndexes @maxfrag=15.0, @maxdensity=90.0
, @databasename='AdventureWorks', @online='ON'
, @fullprocess='OFF'
****************************************/
SET NOCOUNT ON;
DECLARE @schemaname sysname;
DECLARE @objectname sysname;
DECLARE @indexname sysname;
DECLARE @indexid int;
DECLARE @Alloc_unit_type_desc varchar(18);
DECLARE @currentfrag float;
DECLARE @currentdensity float;
DECLARE @partitionnum varchar(10);
DECLARE @partitioncount bigint;
DECLARE @indextype varchar(18);
DECLARE @onlinestatus varchar(3);
DECLARE @updatecommand varchar(max);
DECLARE @command varchar(max);
-- ensure the temporary table does not exist
IF (SELECT object_id('tempdb..#work_to_do')) IS NOT NULL
DROP TABLE #work_to_do;
--Create the temporary table. We are using a
--temporary table (versus a table variable)
--since we need to pass this table into dynamic SQL.
CREATE TABLE #work_to_do(
IndexID int not null
, IndexName varchar(255) null
, TableName varchar(255) null
, Tableid int not null
, SchemaName varchar(255) null
, IndexType varchar(18) not null
, Alloc_unit_type_desc varchar(18) not null
, PartitionNumber varchar(18) not null
, PartitionCount int null
, CurrentDensity float not null
, CurrentFragmentation float not null
);
--Select indexes which fall within the specified parameters
--and have a minimum of 8 data pages.
INSERT INTO #work_to_do(
IndexID, Tableid, IndexType, Alloc_unit_type_desc, PartitionNumber, CurrentDensity, CurrentFragmentation
)
SELECT
fi.index_id
, fi.object_id
, fi.index_type_desc AS IndexType
, Alloc_unit_type_desc
, cast(fi.partition_number as varchar(10)) AS PartitionNumber
, fi.avg_page_space_used_in_percent AS CurrentDensity
, fi.avg_fragmentation_in_percent AS CurrentFragmentation
FROM sys.dm_db_index_physical_stats(db_id(@databasename), NULL, NULL, NULL, 'SAMPLED') AS fi
WHERE (fi.avg_fragmentation_in_percent >= @maxfrag
OR fi.avg_page_space_used_in_percent < @maxdensity)
AND fi.page_count > 8
AND fi.index_id > 0
AND fi.Alloc_unit_type_desc <> 'LOB_DATA'
--Assign the index names, schema names, table names and partition counts
--Denote any clustered or non-clustered index which contains
--data types not supported with ONLINE index rebuild
SET @updatecommand = 'UPDATE #work_to_do SET TableName = o.name, SchemaName = s.name, IndexName = i.Name
,PartitionCount = (SELECT COUNT(*) pcount
FROM '
+ QUOTENAME(@databasename) + '.sys.Partitions p
where p.Object_id = w.Tableid
AND p.index_id = w.Indexid)
, Alloc_unit_type_desc = CASE
WHEN EXISTS(SELECT * FROM ' + QUOTENAME(@databasename) + '.SYS.COLUMNS c
WHERE w.TableID = c.OBJECT_ID
AND w.IndexType = ''CLUSTERED INDEX''
AND (user_type_id in (34, 35, 99, 241)
OR (user_type_id in (165, 167, 231) AND max_length = -1)))
THEN ''LOB_DATA''
WHEN EXISTS(SELECT * FROM ' + QUOTENAME(@databasename) + '.sys.index_columns ic
INNER JOIN ' + QUOTENAME(@databasename) + '.sys.Columns c
ON ic.Column_ID = c.column_id
AND ic.Object_ID = c.Object_ID
WHERE w.TableID = ic.Object_ID
AND w.IndexID = ic.Index_ID
AND w.IndexType = ''NONCLUSTERED INDEX''
AND (user_type_id in (34, 35, 99, 241)
OR (user_type_id in (165, 167, 231) AND max_length = -1)))
THEN ''LOB_DATA''
ELSE Alloc_unit_type_desc END
FROM '
+ QUOTENAME(@databasename) + '.sys.objects o INNER JOIN '
+ QUOTENAME(@databasename) + '.sys.schemas s ON o.schema_id = s.schema_id
INNER JOIN #work_to_do w ON o.object_id = w.tableid INNER JOIN '
+ QUOTENAME(@databasename) + '.sys.indexes i ON w.tableid = i.object_id and w.indexid = i.index_id';
EXEC(@updatecommand)
--Declare the cursor for the list of tables, indexes
--and partitions to be processed.
--Note: Reorganizing the clustered index will NOT require
--that the non-clustered indexes are reorganized.
DECLARE rebuildindex CURSOR FOR
SELECT QUOTENAME(IndexName) AS IndexName
, TableName
, SchemaName
, IndexType
, Alloc_unit_type_desc
, PartitionNumber
, PartitionCount
, CurrentDensity
, CurrentFragmentation
FROM #work_to_do i
ORDER BY TableName, IndexID;
-- Open the cursor.
OPEN rebuildindex;
-- Loop through the tables, indexes and partitions.
FETCH NEXT
FROM rebuildindex
INTO @indexname, @objectname, @schemaname, @indextype, @Alloc_unit_type_desc, @partitionnum, @partitioncount, @currentdensity, @currentfrag;
WHILE @@FETCH_STATUS = 0
BEGIN
--If the procedure was executed with ONLINE='ON', determine
--if there are any columns in the index with LOB data. When
--this criterion is met, the ONLINE status is set to OFF.
SET @onlinestatus =
(SELECT CASE WHEN @Alloc_unit_type_desc = 'LOB_DATA'
THEN 'OFF'
WHEN @indextype LIKE '%XML INDEX%'
THEN 'OFF'
ELSE @online
END)
--Rebuild the index where ONLINE='ON' and the above rules are satisfied
--If the index does not satisfy the requirements for an ONLINE index
--defragmentation, the index will be defragmented with required locks
--if @fullprocess='ON'
--Individual partitions on indexes with multiple partitions
--CAN NOT be REBUILT with the ONLINE feature.
SELECT @command = 'ALTER INDEX ' + @indexname + ' ON ' + QUOTENAME(@databasename) +'.' + QUOTENAME(@schemaname) + '.' + QUOTENAME(@objectname);
IF @onlinestatus = 'ON'
BEGIN
IF @partitioncount = 1
BEGIN
SELECT @command = @command + ' REBUILD WITH (ONLINE=ON)';
END
IF @partitioncount > 1
BEGIN
SELECT @command = @command + ' REORGANIZE PARTITION=' + @partitionnum;
END
EXEC (@command);
PRINT 'Executed ' + @command;
END;
ELSE
IF @fullprocess = 'ON'
BEGIN
-- If the index is more heavily fragmented, issue a REBUILD. Otherwise, REORGANIZE.
IF @currentfrag < 30
BEGIN;
SELECT @command = @command + ' REORGANIZE';
IF @partitioncount > 1
SELECT @command = @command + ' PARTITION=' + @partitionnum;
END;
IF @currentfrag >= 30
BEGIN;
SELECT @command = @command + ' REBUILD';
IF @partitioncount > 1
SELECT @command = @command + ' PARTITION=' + @partitionnum;
END;
EXEC (@command);
PRINT 'Executed ' + @command;
END;
FETCH NEXT FROM rebuildindex INTO @indexname, @objectname, @schemaname, @indextype, @Alloc_unit_type_desc, @partitionnum, @partitioncount, @currentdensity, @currentfrag;
END;
-- Close and deallocate the cursor.
CLOSE rebuildindex;
DEALLOCATE rebuildindex;
GO