May 15, 2003 at 11:31 am
In Microsoft SQL Server 2000 - 8.00.760 (Intel X86):
SQL Server stores one cache object for a trigger execution that affects one row (setopts & 1024 = 1024) and one for a trigger execution that affects more than one row.
This is somewhat acceptable. As long as your updates only affect 1 row and/or 10 rows, this works fine, although you have two cache objects instead of one, so is it worth it?
The real "problem" occurs when you mix those updates with updates of 2-4 rows.
According to Profiler, the following actions are taken on the last update (the three-row update):
SP:StmtStarting
SP:CacheRemove
SP:CacheRemove
SP:Recompile
SP:CacheInsert
SP:StmtStarting
It clearly shows that two objects are removed from the cache, the trigger is recompiled, and a new object is inserted into the cache. As anyone can see, this may lead to excessive recompiles.
One error in the syscacheobjects table is that the usecounts of the removed cache object is used as a base for the new insert. This means that when you check usecounts it looks as though the executable plan in cache was reused, when in fact it was recompiled and a new plan inserted. The correct usecounts should have been 1.
Why would the SQL Server team design the caching this way? We feel this is actually a bug, perhaps more than one...
Leif Haraldsson
(I have a script that can reproduce this easily)
May 15, 2003 at 11:58 am
Interesting. Could you post the test script? Thanks,
Jay
May 15, 2003 at 1:29 pm
Example script:
set nocount on
go
if object_id('cache_test') is not null
drop table cache_test
go
create table cache_test
(
id int not null,
descr varchar(30) not null,
rowcreateddt datetime not null default getdate(),
rowcreatedby varchar(30) not null default
right(system_user,30),
testcolumn varchar(30) null,
primary key (id)
)
go
create trigger cache_test_update
on cache_test
for update
as
    declare @wi integer
    if update(id)
    begin
        print 'Update of id'
        return
    end
    -- the references to inserted below are what tie the trigger's
    -- cached plans to the number of affected rows
    select @wi = count(*) from inserted
    if exists ( select 1 from inserted where id = 98654 )
        print 'xxxx'
    return
go
truncate table cache_test
go
insert into cache_test (id, descr, testcolumn)
select 1,'Descr 1','Test 1'
insert into cache_test (id, descr, testcolumn)
select 2,'Descr 2','Test 2'
insert into cache_test (id, descr, testcolumn)
select 3,'Descr 3','Test 3'
insert into cache_test (id, descr, testcolumn)
select 4,'Descr 4','Test 4'
insert into cache_test (id, descr, testcolumn)
select 5,'Descr 5','Test 5'
insert into cache_test (id, descr, testcolumn)
select 6,'Descr 6','Test 6'
insert into cache_test (id, descr, testcolumn)
select 7,'Descr 7','Test 7'
insert into cache_test (id, descr, testcolumn)
select 8,'Descr 8','Test 8'
insert into cache_test (id, descr, testcolumn)
select 9,'Descr 9','Test 9'
insert into cache_test (id, descr, testcolumn)
select 10,'Descr 10','Test 10'
insert into cache_test (id, descr, testcolumn)
select 11,'Descr 11','Test 11'
insert into cache_test (id, descr, testcolumn)
select 12,'Descr 12','Test 12'
select SetOpts,* from master..SyscacheObjects where
ObjId=Object_ID('cache_test_update') order by SetOpts, CacheObjType
go
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
select SetOpts,* from master..SyscacheObjects where
ObjId=Object_ID('cache_test_update') order by SetOpts, CacheObjType
go
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
select SetOpts,* from master..SyscacheObjects where
ObjId=Object_ID('cache_test_update') order by SetOpts, CacheObjType
go
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 10
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 1
go
select SetOpts,* from master..SyscacheObjects where
ObjId=Object_ID('cache_test_update') order by SetOpts, CacheObjType
go
update cache_test
set testcolumn = 'testing '+cast(id as varchar)
where id between 1 and 3
go
select SetOpts,* from master..SyscacheObjects where
ObjId=Object_ID('cache_test_update') order by SetOpts, CacheObjType
go
May 15, 2003 at 3:14 pm
Can't speak to why this was designed this way, though I suspect it was either "easier" to get working or it wasn't considered a substantial performance impact. Of course, without the code, it's hard to test.
This would be an interesting one to submit to them. I'll see what I can do.
Steve Jones
May 16, 2003 at 11:10 am
The problem we are facing is an unnecessary recompile of up to 5 seconds in certain situations. This is mostly because we keep most of our business logic in the database. We are discussing whether we need to restrict updates to a maximum of 4 rows to avoid the trigger recompile.
Leif Haraldsson
May 18, 2003 at 11:30 am
A temporary solution could be the following code change:
select @cnt = count(*) from #a, fh where
...
if @cnt between 1 and 4
begin
set @SQLCacheProblem = '1'
set rowcount 1
end
while @cnt > 0
begin
update fh set
...
select @dbc=@@RowCount, @dbe=@@Error
if @dbe<>0
begin
...
return
end
if @SQLCacheProblem <> '1'
set @cnt = 0
else
set @cnt = @cnt - @dbc
end
if @SQLCacheProblem = '1'
begin
set @SQLCacheProblem = '0'
set rowcount 0
end
A maximum of 3 extra updates costs much less than the "forced recompile".
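As a self-contained illustration of the same idea, here is a sketch (not production code) that forces single-row updates against the cache_test table from the earlier repro script, so the trigger always fires with exactly one row in inserted. The marker-text trick for skipping already-updated rows is purely an assumption to keep the example self-contained:

```sql
declare @cnt int, @done int
select @cnt = count(*) from cache_test where id between 1 and 3

set rowcount 1                  -- limit each update to a single row
while @cnt > 0
begin
    update cache_test
    set testcolumn = 'batched ' + cast(id as varchar)
    where id between 1 and 3
      and (testcolumn is null or testcolumn not like 'batched %')
    select @done = @@rowcount
    if @done = 0 break          -- nothing left to update
    set @cnt = @cnt - @done
end
set rowcount 0                  -- always restore the default
```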
Leif Haraldsson
May 26, 2003 at 8:51 pm
Is it any different if you put your logic in a stored procedure that is called from the trigger? (That is, the SP may not recompile.)
May 27, 2003 at 12:25 pm
No, the problem is not the code that performs the update. The trigger objects will recompile regardless of which object performs the update.
From our point of view:
SQL Server has three caching objects for triggers:
Single-row trigger (setopts & 1024 = 1024)
2-4 row trigger
5+ row trigger (same caching "space" as the 2-4 row trigger)
The problem arises because two objects share the same space in the cache, so each forces the removal of the other. The second problem is that when recompiling, all objects are removed. Only the object with the matching setopts should be removed; the single-row trigger object should NOT be removed when a multi-row trigger needs to be recompiled.
Leif Haraldsson
May 28, 2003 at 12:01 am
I meant: move the code from the trigger to a stored procedure, so the trigger becomes just one line, "exec triggerprocedure". Recompiling that would be cheap, and I wondered whether the stored procedure would still recompile.
But it seems there is a problem creating such an SP that references the pseudotables INSERTED and DELETED, and even if it could, it would probably recompile all the time because those pseudotables are created and destroyed constantly.
If you were doing a lot of work per triggered row, you could put a loop in the trigger that calls an SP for each record (passing the key value). That assumes the SP does most of the work and the loop is low volume. Introducing a loop may of course be counterproductive if a lot of rows are triggered.
I didn't prattle on the first time because I was pessimistic about the idea's usefulness anyway. But 5 seconds to recompile sounds like A LOT of logic hanging on this trigger.
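The per-row loop described above could be sketched roughly like this; the worker procedure's name and body are hypothetical, and the real business logic would replace the PRINT:

```sql
-- Hypothetical worker procedure holding the heavy business logic.
create procedure trigger_worker @id int
as
    print 'working on row ' + cast(@id as varchar)
go

-- Thin trigger that loops over the triggered keys and calls the
-- procedure once per row. Note it still touches inserted, so the
-- intent is only to make any recompile of the trigger itself cheap.
create trigger cache_test_update_loop
on cache_test
for update
as
    declare @id int
    select @id = min(id) from inserted
    while @id is not null
    begin
        exec trigger_worker @id
        select @id = min(id) from inserted where id > @id
    end
go
```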
June 2, 2003 at 8:53 am
Your suggestion would perhaps work to minimize the recompile time. We already have code that loops to execute SPs, and some other code.
If Microsoft does not understand the performance problem they have created with this recompile behaviour, I believe it would be better to update one row at a time, or 5 rows or more; those patterns do not cause trigger recompilation.
We will have to wait and see, and meanwhile rewrite some critical procedures and triggers.
Leif Haraldsson
June 3, 2003 at 7:29 pm
Kalen Delaney, author of Inside SQL Server 2000, wrote a series of articles (Dec. 2002, Feb. 2001, and Jan. 2000) in SQL Server Magazine regarding recompilations. You may find useful information in them.
The gist of these articles is that certain options are stored as server, database, and session options. A couple (QUOTED_IDENTIFIER and ANSI_NULLS) are stored as part of the bitwise value in the status column of the sysobjects table for stored procedures and triggers. You can see their values when you edit these objects in Query Analyzer; example below.
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO
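For an existing trigger, these two settings can also be checked with the OBJECTPROPERTY function (using the trigger name from the repro script earlier in this thread):

```sql
select
    objectproperty(object_id('cache_test_update'), 'ExecIsQuotedIdentOn') as quoted_identifier_on,
    objectproperty(object_id('cache_test_update'), 'ExecIsAnsiNullsOn') as ansi_nulls_on
```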
Other options (e.g. ANSI_PADDING) are session options controlled by the type of connection provider used (OLE DB, etc.); see Tools/Options/Connection Properties in Query Analyzer for examples. These option states are stored in the setopts column of the syscacheobjects table when procedures and triggers are compiled.
When certain of these options are out of sync with the database states, recompiles are forced. I know the two shown above are capable of this. ON is the default condition of both, so if you see either as OFF, suspect this.
You can find the bitwise values of these in the master..spt_values table.
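For example, those bit values can be listed with:

```sql
-- List the set-option bit values (type 'SOP' = set options).
select number, name
from master..spt_values
where type = 'SOP'
order by number
```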
Hope this helps.
Larry Ansley
Atlanta, GA
June 4, 2003 at 11:50 am
Thank you for the suggestions.
We have gone through the syscacheobjects table. Note that the setopts values are not completely documented for SQL Server 2000.
This problem arises because a trigger has three possible caching objects, but setopts can only distinguish two. A multi-row trigger has no special setopts, and a single-row trigger has setopts & 1024 = 1024. This much is clear.
The only time a recompile occurs is when the two multi-row trigger objects compete for the one caching slot for the trigger.
See my example.
Updating the table only with single-row updates does not cause any recompile. Adding updates of ten rows does not cause any recompile either. But when a three-row update occurs, all objects are removed from cache and recompiled. If you then mix a three-row update and a ten-row update, you get recompiles "all the time". How SQL Server distinguishes between a 2-4 row update and a 5-or-more row update is not known to us, but it must be recorded somewhere (hidden) on the caching object.
As far as we can tell, the ANSI settings do not affect this behaviour.
Leif Haraldsson
June 5, 2003 at 9:04 am
Leif,
Better grab a fresh cup of coffee. This is going to be a lengthy post. Here is my interpretation of what is happening.
The setopts column in syscacheobjects reflects the option settings at the time the object (the trigger in this case) was compiled, according to Kalen Delaney's excellent book Inside SQL Server 2000. The bitwise value of 1024 would indicate the setting ANSI_NULL_DFLT_ON. That would not cause a recompile, so I was wrong in my earlier supposition. The other bitwise option values are described in Books Online under "user options Option". They are also available in the SQL Server system lookup table master..spt_values, where type = 'SOP'. There is other interesting information in this table as well.
The setopts value in this case indicates the following settings:
bit 0, value 1: DISABLE_DEF_CNST_CHECK
bit 1, value 2: IMPLICIT_TRANSACTIONS
bit 3, value 8: ANSI_WARNINGS
bit 4, value 16: ANSI_PADDING
bit 5, value 32: ANSI_NULLS
bit 6, value 64: ARITHABORT
bit 7, value 128: ARITHIGNORE
bit 10, value 1024: ANSI_NULL_DFLT_ON
bit 12, value 4096: CONCAT_NULL_YIELDS_NULL
Test value: 5371
I don't have a clear understanding of why the 1024 setting fluctuates, but I am confident the cause of the recompilation lies elsewhere, although some of these other settings would cause recompilations if they fluctuated.
So what is the cause of your recompilations? Inspect the EventSubClass value of the SP:Recompile event in your Profiler trace. The value was 2 when I tested your code on my system; the description for this is "Statistics changed". The possible values are:
1 Schema or bindings to regular table or view changed.
2 Statistics changed.
3 Object did not exist in the stored procedure plan, but was created during execution.
4 SET options changed. (Note: this is what you would see if my first hunch had been correct.)
5 Schema or bindings to temporary table changed.
6 Schema or bindings of remote rowset changed.
The only place I have seen these values documented is Sajal Dam's book SQL Server Query Performance Tuning Distilled.
So the statistics (for the Inserted table) changed, causing SQL Server to prefer a different execution plan. This sounds plausible, considering you are affecting either one, three, or most of the rows in the Inserted table. I'm not sure whether this is because the execution context changed or because the query plan is different, but it does make sense.
How can we test this? If you omit all references to the Inserted table in your trigger, and instead do something innocuous, like Select Count(*) From master..sysobjects, you don't experience the recompilation. That's not to say your solution will be much faster, but we can avoid the recompilation.
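That test could be sketched like this (a sketch only: it temporarily replaces the trigger body from the earlier repro script, so the original body must be restored afterwards):

```sql
-- Sketch: a trigger body that never reads the inserted pseudotable.
-- With no reference to inserted, Profiler should no longer show
-- SP:Recompile when updates of mixed row counts hit cache_test.
alter trigger cache_test_update
on cache_test
for update
as
    declare @n int
    select @n = count(*) from master..sysobjects  -- innocuous work, no inserted
go
```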
What can we do about the Inserted table, such as turning off automatic regeneration of its statistics with sp_autostats? Actually, nothing. The Inserted table is even more vaporous than the syscacheobjects table: both are dynamic and exist only in server memory, although syscacheobjects at least has permanent metadata in the master database. According to Kalen Delaney's book, the Inserted and Deleted tables are special "pseudotables" created dynamically from the transaction log during the execution of triggers. Their structure mirrors the structure of the table the trigger is on. Other than that, I've found no metadata referencing them.
Here are some possible solutions (and you probably thought I would never get to this).
Solution one: as Clay suggested, call a second stored procedure from the trigger. You would need to use the same selection criteria used in the original statement that fired the trigger, and an appropriate isolation level (at least Repeatable Read, perhaps Serializable, depending on the trigger's purpose). An explicit transaction may also be called for, for more control.
Solution two: similar to solution one, but perform the action inside the trigger. This might experience recompilations too, depending on what the trigger action is.
Bottom line? You can avoid recompiling the trigger with solution one, but you may not actually improve performance much. I think that depends a lot on what the trigger action is.
Sorry for the lengthy post. I hope I’ve helped.
Larry Ansley
Atlanta, GA
June 5, 2003 at 3:03 pm
This post is lengthy (it includes a long script).
We have a lot of different business logic in our triggers.
First, thank you for trying to find a solution. But I need to correct your assumptions regarding the setopts column: it has NOTHING to do with @@OPTIONS, which you seem to believe. @@OPTIONS does NOT use the same bitmap as the setopts column. If Kalen Delaney suggests that it does, please notify the publisher of that error.
setopts & 1024 = 1024 means "single row trigger" and nothing else.
Use the following script to find out which settings are used for a cached object.
select
SetOpts,
case when SetOpts & 1 = 1 then 'ON' else 'OFF' end as 'ANSI_PADDING',
case when SetOpts & 2 = 2 then 'ON' else 'OFF' end as 'Parallell query ???(2)',
case when SetOpts & 4 = 4 then 'ON' else 'OFF' end as 'FORCEPLAN',
case when SetOpts & 8 = 8 then 'ON' else 'OFF' end as 'CONCAT_NULL_YIELDS_NULL',
case when SetOpts & 16 = 16 then 'ON' else 'OFF' end as 'ANSI_WARNINGS',
case when SetOpts & 32 = 32 then 'ON' else 'OFF' end as 'ANSI_NULLS',
case when SetOpts & 64 = 64 then 'ON' else 'OFF' end as 'QUOTED_IDENTIFIER',
case when SetOpts & 128 = 128 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_ON',
case when SetOpts & 256 = 256 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_OFF',
case when SetOpts & 512 = 512 then 'ON' else 'OFF' end as 'NO_BROWSETABLE',
case when SetOpts & 1024 = 1024 then 'ON' else 'OFF' end as 'Single row trigger',
case when SetOpts & 2048 = 2048 then 'ON' else 'OFF' end as '2048',
case when SetOpts & 4096 = 4096 then 'ON' else 'OFF' end as 'ARITHABORT',
case when SetOpts & 8192 = 8192 then 'ON' else 'OFF' end as 'NUMERIC_ROUNDABORT',
* from master..syscacheobjects
where
sql like '%select * from %customers%'
--objid in (1180543985,1180543985)
You can easily verify this with the following code in the Northwind database.
First, clear all options under Query Analyzer's Current Connection Properties.
dbcc freeproccache
go
select * from customers
select @@options --should be 0, setopts below=2
select
SetOpts,
case when SetOpts & 1 = 1 then 'ON' else 'OFF' end as 'ANSI_PADDING',
case when SetOpts & 2 = 2 then 'ON' else 'OFF' end as 'Parallell query ???(2)',
case when SetOpts & 4 = 4 then 'ON' else 'OFF' end as 'FORCEPLAN',
case when SetOpts & 8 = 8 then 'ON' else 'OFF' end as 'CONCAT_NULL_YIELDS_NULL',
case when SetOpts & 16 = 16 then 'ON' else 'OFF' end as 'ANSI_WARNINGS',
case when SetOpts & 32 = 32 then 'ON' else 'OFF' end as 'ANSI_NULLS',
case when SetOpts & 64 = 64 then 'ON' else 'OFF' end as 'QUOTED_IDENTIFIER',
case when SetOpts & 128 = 128 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_ON',
case when SetOpts & 256 = 256 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_OFF',
case when SetOpts & 512 = 512 then 'ON' else 'OFF' end as 'NO_BROWSETABLE',
case when SetOpts & 1024 = 1024 then 'ON' else 'OFF' end as 'Single row trigger',
case when SetOpts & 2048 = 2048 then 'ON' else 'OFF' end as '2048',
case when SetOpts & 4096 = 4096 then 'ON' else 'OFF' end as 'ARITHABORT',
case when SetOpts & 8192 = 8192 then 'ON' else 'OFF' end as 'NUMERIC_ROUNDABORT',
* from master..syscacheobjects
where
sql like '%select * from %customers%'
go
set arithabort on
select * from customers
select @@options --64 , setopts below=4098
select
SetOpts,
case when SetOpts & 1 = 1 then 'ON' else 'OFF' end as 'ANSI_PADDING',
case when SetOpts & 2 = 2 then 'ON' else 'OFF' end as 'Parallell query ???(2)',
case when SetOpts & 4 = 4 then 'ON' else 'OFF' end as 'FORCEPLAN',
case when SetOpts & 8 = 8 then 'ON' else 'OFF' end as 'CONCAT_NULL_YIELDS_NULL',
case when SetOpts & 16 = 16 then 'ON' else 'OFF' end as 'ANSI_WARNINGS',
case when SetOpts & 32 = 32 then 'ON' else 'OFF' end as 'ANSI_NULLS',
case when SetOpts & 64 = 64 then 'ON' else 'OFF' end as 'QUOTED_IDENTIFIER',
case when SetOpts & 128 = 128 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_ON',
case when SetOpts & 256 = 256 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_OFF',
case when SetOpts & 512 = 512 then 'ON' else 'OFF' end as 'NO_BROWSETABLE',
case when SetOpts & 1024 = 1024 then 'ON' else 'OFF' end as 'Single row trigger',
case when SetOpts & 2048 = 2048 then 'ON' else 'OFF' end as '2048',
case when SetOpts & 4096 = 4096 then 'ON' else 'OFF' end as 'ARITHABORT',
case when SetOpts & 8192 = 8192 then 'ON' else 'OFF' end as 'NUMERIC_ROUNDABORT',
* from master..syscacheobjects
where
sql like '%select * from %customers%'
go
set quoted_identifier on
select * from customers
select @@options --320 , setopts below=4162
select
SetOpts,
case when SetOpts & 1 = 1 then 'ON' else 'OFF' end as 'ANSI_PADDING',
case when SetOpts & 2 = 2 then 'ON' else 'OFF' end as 'Parallell query ???(2)',
case when SetOpts & 4 = 4 then 'ON' else 'OFF' end as 'FORCEPLAN',
case when SetOpts & 8 = 8 then 'ON' else 'OFF' end as 'CONCAT_NULL_YIELDS_NULL',
case when SetOpts & 16 = 16 then 'ON' else 'OFF' end as 'ANSI_WARNINGS',
case when SetOpts & 32 = 32 then 'ON' else 'OFF' end as 'ANSI_NULLS',
case when SetOpts & 64 = 64 then 'ON' else 'OFF' end as 'QUOTED_IDENTIFIER',
case when SetOpts & 128 = 128 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_ON',
case when SetOpts & 256 = 256 then 'ON' else 'OFF' end as 'ANSI_NULL_DFLT_OFF',
case when SetOpts & 512 = 512 then 'ON' else 'OFF' end as 'NO_BROWSETABLE',
case when SetOpts & 1024 = 1024 then 'ON' else 'OFF' end as 'Single row trigger',
case when SetOpts & 2048 = 2048 then 'ON' else 'OFF' end as '2048',
case when SetOpts & 4096 = 4096 then 'ON' else 'OFF' end as 'ARITHABORT',
case when SetOpts & 8192 = 8192 then 'ON' else 'OFF' end as 'NUMERIC_ROUNDABORT',
* from master..syscacheobjects
where
sql like '%select * from %customers%'
go
If you also set ANSI_NULL_DFLT_ON, setopts will be 4290; this setting has 128 as its bit value.
The EventSubClass values are publicly available from Microsoft, and you are correct that Profiler records a 2. That does not mean it is correct.
Why would an update of two rows need a recompile because of a statistics change, but not repeated updates of 10 rows? This must be an error in the cache manager; I fail to see how one could interpret this in Microsoft's favour. I am not confident that Microsoft sees this as a great problem, as we have had discussions with them many times, and more often than not MS believes that their solutions work for most applications.
This error in the cache manager will affect everybody that uses triggers. The effect depends on how large the triggers are and whether multi-row updates are used.
We always use the inserted and deleted tables in our triggers, so I do not see any possibility of moving from trigger logic to SP logic, even if it could reach exactly the same result. We have created logging procedures to find which triggers get this problem, and we have to change the updates: either to single rows, or consistently updating more than 4 rows, or perhaps updating 4 rows at a time (set rowcount 4).
Thank you for trying to understand this, for us, horrible problem and maybe find a way around it.
Leif Haraldsson
June 5, 2003 at 5:42 pm
If you can provide a documented source for your bitmap, I would be interested in researching it. In particular, I would like to learn more about the NO_BROWSETABLE, FORCEPLAN, "single row trigger", and parallel query options you use.
Well anyway. Good luck.
Larry Ansley
Atlanta, GA