January 10, 2012 at 3:46 pm
I selected the data and it took only microseconds. But when I run an update statement (two columns) it takes far too long: the server goes into a 'NOT RESPONDING' state and freezes for about 15 - 20 minutes.
Below is the query and the details of my research so far:
UPDATE Table
SET col1='C'
,DATE = '2012-01-10 00:00:00.000'
where NAME in ('A','B')
The table contains just over 2000 records. There is no other activity on the server, but there is a trigger associated with one of the columns I am trying to update.
Can you give me some input on nailing down the cause of this issue? I am new to being a DBA.
Eshika
January 10, 2012 at 4:31 pm
We need the DDL (CREATE TABLE script) for the table, including the definition of the trigger. If the trigger affects any other tables, we will need those as well.
January 10, 2012 at 4:37 pm
2000 records should take very little time at all to update.
Please provide the trigger definition.
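For reference, the definition can be scripted out with sp_helptext (a minimal sketch; the trigger name used here is the one that appears later in this thread):
-- Script out the trigger body so it can be posted
EXEC sp_helptext 'Schema1.Table1_STAT';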
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
January 10, 2012 at 6:50 pm
An overview of the trigger: there are two databases, DB1 (transactional) and DB2 (archive), with two tables in each - Tables 1 & 2 in DB1 and Tables 3 & 4 in DB2.
The trigger involves both databases and all four tables.
When the status (STAT) column in Table1 is updated:
1. it updates the date column;
2. the data is moved from Tb1 (DB1) to Tb3 (DB2);
3. the corresponding data is moved from Tb2 (DB1) to Tb4 (DB2);
and all of that data is deleted from Tb1 & Tb2 after the insert.
Table definition -
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [Schema1].[Table1](
[ROWID] [bigint] IDENTITY(1,1) NOT NULL,
[LID] [varchar](100) NOT NULL,
[NAME] [varchar](100) NULL,
[TOTALCOUNT] [int] NULL,
[PID] [int] NULL,
[CID] [int] NULL,
[OID] [int] NULL,
[ADATE] [datetime] NULL,
[RDATE] [datetime] NULL,
[CDATE] [date] NULL,
[STAT] [varchar](10) NULL,
[nvarchar](max) NULL,
[REACH] [varchar](4) NULL,
[SA] [varchar](4) NULL,
[RED] [char](1) NULL,
[DK] [varchar](4) NULL,
[DATEINDB] [datetime] NULL,
[MODDATE] [datetime] NULL,
[TID] [varchar](100) NULL,
CONSTRAINT [PK_Table1] PRIMARY KEY NONCLUSTERED
(
[ROWID] ASC,
[LID] ASC
)WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
GO
ALTER TABLE [Schema1].[Table1] ADD CONSTRAINT [DF_Table1_STAT] DEFAULT ('A') FOR [STAT]
GO
ALTER TABLE [Schema1].[Table1] ADD CONSTRAINT [DF_Table1_DATEINDB] DEFAULT (getdate()) FOR [DATEINDB]
GO
Trigger Statement -
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [Schema1].[Table1_STAT]
ON [Schema1].[Table1]
FOR UPDATE ,INSERT , DELETE
AS
BEGIN
IF NOT UPDATE(MODDATE)
UPDATE [Schema1].[Table1]
SET MODDATE=GETDATE()
WHERE LID IN (SELECT LID FROM inserted WHERE STAT in ('C','X'))
---Insert the C or X lead sheet to arch
INSERT INTO [DB2].[dbo].[ArchTable2]
([ROWID],[LID],[NAME],[TOTALCOUNT],[PID],[CID],[OID],[ADATE],[RDATE],
[CDATE],[STAT], ,[REACH],[SA],[RED],[DK],[DATEINDB],[MODDATE],[TID])
SELECT [ROWID],[LID],[NAME],[TOTALCOUNT],[PID],[CID],[OID],[ADATE],[RDATE],
[CDATE],[STAT], ,[REACH],[SA],[RED],[DK],[DATEINDB],[MODDATE],[TID]
FROM inserted
WHERE LID IN (SELECT LID FROM inserted WHERE STAT in ('C','X'))
-- Insert the closed or disabled lead sheet associated leads to arch
INSERT INTO [DB2].[dbo].[ARCHTable3]
(80 Columns)
SELECT 80 columns
FROM [DB1].[Schema1].[Table2] A INNER JOIN inserted I
ON A.[LID] = I.LID
WHERE I.STAT in ('C','X')
-- Delete all the closed or disabled lead sheets and leads after archiving
DELETE [DB1].[Schema1].[Table2] WHERE LID IN (SELECT LID FROM inserted WHERE STAT in ('C','X'))
DELETE [DB1].[Schema1].[Table1] WHERE LID IN (SELECT LID FROM inserted WHERE STAT in ('C','X'))
END
January 10, 2012 at 8:17 pm
Eshika (1/10/2012)
An overview of the trigger: there are two databases, DB1 (transactional) and DB2 (archive), with two tables in each - Tables 1 & 2 in DB1 and Tables 3 & 4 in DB2. The trigger involves both databases and all four tables.
When the status (STAT) column in Table1 is updated:
1. it updates the date column;
2. the data is moved from Tb1 (DB1) to Tb3 (DB2);
3. the corresponding data is moved from Tb2 (DB1) to Tb4 (DB2);
and all of that data is deleted from Tb1 & Tb2 after the insert.
Several things jump out at me:
1) The trigger is defined to be fired on INSERT, UPDATE, or DELETE on the table [Schema1].[Table1].
2) It appears, based on the logic, that this trigger should perhaps only fire on UPDATE. Question: can data being INSERTed be immediately archived?
3) Do you have recursive triggers enabled on the database? I wouldn't be surprised if this trigger is blocking itself when fired.
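For reference, one way to check that setting (a minimal sketch; substitute your real database name for DB1):
-- Check whether the RECURSIVE_TRIGGERS database option is enabled
SELECT name, is_recursive_triggers_on
FROM sys.databases
WHERE name = 'DB1'; -- assumed database name from this thread
-- If it is on and not wanted, it can be turned off:
-- ALTER DATABASE [DB1] SET RECURSIVE_TRIGGERS OFF;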
January 10, 2012 at 8:18 pm
Based on this being related to leads, how large are the tables involved?
Please provide the execution plan too!
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
January 10, 2012 at 8:24 pm
SQLRNNR (1/10/2012)
Based on this being related to leads, how large are the tables involved? Please provide the execution plan too!
Agreed, and it should be the actual execution plan if possible (though based on my assumption, the estimated one may have to do).
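For reference, one way to capture the actual plan (including the plans for the statements inside the trigger) from a query window is SET STATISTICS XML; this is a minimal sketch that simply wraps the UPDATE exactly as it was posted at the top of the thread, with the placeholder names bracketed:
SET STATISTICS XML ON;

UPDATE [Table] -- placeholder table/column names from the original post
SET col1 = 'C',
    [DATE] = '2012-01-10 00:00:00.000'
WHERE NAME IN ('A','B');

SET STATISTICS XML OFF;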
January 11, 2012 at 1:02 pm
Tables 1 & 3 contain over 2000 records; Tables 2 & 4 contain over 20 million records.
The trigger should fire only on an update, and there are no recursive triggers on the tables. I will post the execution plan soon.
I tried the update after disabling the trigger and it was fast, so I suspect the size of the tables involved is causing the issue.
Please let me know your thoughts. Thank you for all your responses.
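For reference, the disable/re-enable test described above looks roughly like this (a minimal sketch using the trigger and table names posted earlier; run it only in a test environment):
-- Temporarily disable the trigger, run the problem UPDATE, then re-enable it
DISABLE TRIGGER [Schema1].[Table1_STAT] ON [Schema1].[Table1];

-- ... run the UPDATE statement being tested here ...

ENABLE TRIGGER [Schema1].[Table1_STAT] ON [Schema1].[Table1];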
January 11, 2012 at 2:03 pm
Eshika (1/11/2012)
Tables 1 & 3 contain over 2000 records; Tables 2 & 4 contain over 20 million records. The trigger should fire only on an update, and there are no recursive triggers on the tables. I will post the execution plan soon.
I tried the update after disabling the trigger and it was fast, so I suspect the size of the tables involved is causing the issue.
Please let me know your thoughts. Thank you for all your responses.
You have the trigger defined for UPDATE, INSERT, and DELETE. Based on what you just said, should it be for UPDATE only?
Based on the exec plan, your trigger code can likely be optimized so that it will run faster.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
January 11, 2012 at 2:39 pm
On the really large tables, it looks like you are using LID, a varchar(100), to join / match on them.
Is there an index on LID on the large tables? If so, were you able to check the query plan and verify that SQL will use the index to do those joins / matches?
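For reference, if LID turns out not to be indexed on the large tables, an index along these lines would support those joins (a sketch only; the index name is made up, and the exact key and included columns should be driven by the execution plan):
-- Support the trigger's join/delete on the 20-million-row table
CREATE NONCLUSTERED INDEX IX_Table2_LID
ON [Schema1].[Table2] ([LID]);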
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
January 11, 2012 at 6:14 pm
I have a clustered and a non-clustered index on LID in Tables 1 & 3, but not in 2 & 4. The execution plan for the update statement includes the execution plan for the trigger. The plan says there are missing indexes, but those indexes already exist. I am having a really hard time inserting images into the post.
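For reference, one way to double-check which indexes actually exist on the table and what columns they key on (a minimal sketch; the table name is the one posted above):
-- List the existing indexes and their key columns
EXEC sp_helpindex 'Schema1.Table1';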
January 11, 2012 at 7:09 pm
Read this link and then post the execution plan
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/
The missing indexes being recommended by the execution plan are probably not covering indexes, but we can't tell without seeing the plan.
We would also need to see the scripts for the other tables involved.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
January 11, 2012 at 11:23 pm
Try the following trigger in a test environment. You will need to adjust it, since you didn't provide all of the information needed (there is no definition for the second table, the one with 80 columns). Also, be sure to note the comments where I modified or added code.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
ALTER TRIGGER [Schema1].[Table1_STAT]
ON [Schema1].[Table1]
FOR UPDATE -- , INSERT, DELETE <- If you don't want this fired on INSERT or DELETE, don't include them
AS
BEGIN
-- IF NOT UPDATE(MODDATE)
-- UPDATE [Schema1].[Table1]
-- SET MODDATE = GETDATE() -- why do this when you are going to delete immediately anyway
-- WHERE LID IN (SELECT LID FROM inserted WHERE STATUS in ('C','X'))
-- declare table variables to hold the values being archived
declare @archive_table1 table (
--columns for table 1
)
declare @archive_table2 table (
--columns for table 2
)
---Insert the C or X lead sheet to arch
INSERT INTO [DB2].[dbo].[ArchTable2](
[ROWID] ,
[LID],
[NAME],
[TOTALCOUNT],
[PID],
[CID],
[OID],
[ADATE],
[RDATE],
[CDATE],
[STAT] ,
,
[REACH] ,
[SA] ,
[RED],
[DK] ,
[DATEINDB],
[MODDATE],
[TID]
)
OUTPUT -- Output the rows being archived into a table variable
inserted.[ROWID],
inserted.[LID],
inserted.[NAME],
inserted.[TOTALCOUNT],
inserted.[PID],
inserted.[CID],
inserted.[OID],
inserted.[ADATE],
inserted.[RDATE],
inserted.[CDATE],
inserted.[STAT],
,
inserted.[REACH],
inserted.[SA],
inserted.[RED],
inserted.[DK],
inserted.[DATEINDB],
inserted.[MODDATE],
inserted.[TID]
INTO @archive_table1
SELECT
[ROWID] ,
[LID],
[NAME],
[TOTALCOUNT],
[PID],
[CID],
[OID],
[ADATE],
[RDATE],
[CDATE],
[STAT] ,
,
[REACH] ,
[SA] ,
[RED],
[DK] ,
[DATEINDB],
GETDATE(), -- [MODDATE], <- Put the datetime here
[TID]
FROM
inserted
WHERE
STAT in ('C','X')
-- Insert the closed or disabled lead sheet associated leads to arch
INSERT INTO [DB2].[dbo].[ARCHTable3]
(80 Columns)
OUTPUT -- Output the rows being archived into a table variable
-- (the same 80 columns, each prefixed with inserted.)
INTO @archive_table2
SELECT 80 columns
FROM [Schema1].[Table2] A INNER JOIN inserted I
ON A.[LID] = I.LID
WHERE I.STAT in ('C','X')
-- Delete all the closed or disabled lead sheets and leads after archiving
DELETE t2
FROM [Schema1].[Table2] t2
INNER JOIN @archive_table2 at2
ON t2.LID = at2.LID

DELETE t1
FROM [Schema1].[Table1] t1
INNER JOIN @archive_table1 at1
ON t1.LID = at1.LID
END
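A quick way to exercise this in the test environment might be something along these lines (a sketch only: the LID value is a placeholder, and the status values are the ones the trigger archives):
-- Flip one test row to a 'closed' status, then confirm it was archived and removed
UPDATE [Schema1].[Table1]
SET STAT = 'C'
WHERE LID = 'test-lid-value'; -- placeholder value

SELECT * FROM [DB2].[dbo].[ArchTable2] WHERE LID = 'test-lid-value'; -- should now contain the row
SELECT * FROM [Schema1].[Table1] WHERE LID = 'test-lid-value'; -- should return no rows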
January 14, 2012 at 11:05 am
Just checking to see how things are going with this issue. Any updates?
January 14, 2012 at 11:24 am
These problems are usually caused by the optimizer choosing a nested loops join for the trigger statements that join to the inserted or deleted pseudo-tables. This would be obvious from an execution plan with runtime statistics (trigger execution plans do not appear when only an estimated plan is requested). The solution is almost always to add OPTION (HASH JOIN, MERGE JOIN) to the problem statements in the trigger. This will prevent the nested loops join algorithm being considered by the optimizer.
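For illustration, applied to the archive INSERT from the posted trigger the hint would look like this (a sketch; the 80-column lists are placeholders exactly as in the original post):
-- The join to the inserted pseudo-table, with nested loops excluded via a hint
INSERT INTO [DB2].[dbo].[ARCHTable3] (/* 80 columns */)
SELECT /* 80 columns */
FROM [DB1].[Schema1].[Table2] A
INNER JOIN inserted I
    ON A.[LID] = I.[LID]
WHERE I.STAT IN ('C','X')
OPTION (HASH JOIN, MERGE JOIN);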
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi