March 2, 2014 at 11:29 pm
Hi All,
I need help please. A server I'm working on has an unusual situation: user tables and production tables reside in the same database. Users create, populate, and update these tables, so it can't be a table-specific trigger. However, they give a new meaning to "kamikaze pilots": it's not uncommon for them to "accidentally" update, insert, or delete 500,000,000+ records in a single statement. I've tried educating them to use batching, but to no avail, so now I'm forced to stop these statements BEFORE they execute, based on rowcount, because they fill up the database log so quickly that the database goes into recovery (it has a 200GB log file - insane, I know).
I reckon the most rows allowed in a single statement should be 1,000,000. Can someone please help me with a database trigger to stop them from executing statements that touch more records than that?
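To be clear, this is the kind of batching I've been trying to get them to use (table name and predicate are made up for illustration) - each chunk commits as its own transaction, so the log space can be reused between checkpoints / log backups:

WHILE 1 = 1
BEGIN
    -- delete at most 50,000 rows per transaction instead of 500 million at once
    DELETE TOP (50000) FROM dbo.BigTable
    WHERE SomeDateCol < '20130101';

    IF @@ROWCOUNT = 0 BREAK;  -- nothing left to delete
END;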
March 4, 2014 at 6:06 am
Play around with this.
create trigger insert_trigger on trig
after insert
as
    -- inserted holds every row the statement just added; roll back if over the threshold
    if (select count(*) from inserted) > 1000000
    begin
        print 'some_message_here';
        rollback;
    end;
March 4, 2014 at 6:26 am
To be honest, no.
The kind of trigger you want isn't really an option. By the time a trigger that can see the rowcount fires (like the one above), the data modification has already run to completion. If you were to create a trigger like the one above, that 500-million-row modification would run completely (and potentially fill the log) and then spend as much or more time rolling back.
Triggers are also per-table; there's no such thing as a database-wide DML trigger. (Database-scoped DDL triggers exist, but they fire on schema changes, not on data modifications.)
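To illustrate the point (a minimal sketch, nothing here is specific to your system): a database-scoped trigger can only be a DDL trigger, and it fires on CREATE/ALTER/DROP, never on INSERT/UPDATE/DELETE, so it never sees a rowcount at all.

-- fires when someone creates/alters/drops a table, NOT when they modify data
CREATE TRIGGER ddl_demo ON DATABASE
FOR CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS
    PRINT 'DDL event: ' + EVENTDATA().value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)');
GO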
Can't offhand think of a valid option here.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
March 4, 2014 at 7:19 am
A server I'm working on has an unusual situation: user tables and production tables reside in the same database.
However, it seems that the whole basis for the issue is that you don't have good separation of data. Is there some reason you can't create a second database and move the "user tables" over there? That would at least keep your primary database from going into recovery because its log filled up.
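If you do go that route, something along these lines would work (a sketch - the database and table names are made up, and you'd move each user table the same way). A synonym keeps existing two-part names working:

-- new sandbox database for the user tables; SIMPLE recovery keeps its log self-truncating
CREATE DATABASE UserSandbox;
ALTER DATABASE UserSandbox SET RECOVERY SIMPLE;
GO
-- copy the table over, drop the original, and leave a synonym behind
-- (indexes and constraints would need to be re-created separately)
SELECT * INTO UserSandbox.dbo.UserScratch FROM dbo.UserScratch;
DROP TABLE dbo.UserScratch;
CREATE SYNONYM dbo.UserScratch FOR UserSandbox.dbo.UserScratch;
GO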
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Moden's splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
March 4, 2014 at 7:43 am
I don't think that there is a way to do what you are requesting. Database triggers are only for DDL statements, not DML statements.
Sure, the trigger that SQLSACT posted will stop the large transactions from being committed, but because it is an AFTER trigger, the insert has already taken place and has to roll back. Here's a quick example of that:
IF OBJECT_ID('dbo.test', 'U') IS NOT NULL
BEGIN;
DROP TABLE dbo.test;
END;
GO
CREATE TABLE dbo.test (id INT IDENTITY(1,1), i INT);
GO
INSERT INTO dbo.test
SELECT TOP 1000000
ROW_NUMBER() OVER ( ORDER BY ao.object_id )
FROM sys.all_objects AS ao
CROSS JOIN sys.all_objects AS ao2;
GO
CREATE TRIGGER test_trigger ON dbo.test
AFTER UPDATE
AS
    IF (SELECT COUNT(*) FROM inserted) > 1000
    BEGIN
        PRINT 'some_message_here';
        ROLLBACK;
    END;
GO
/* mark the transaction log so pages are committed and it can be shrunk */
CHECKPOINT;
GO
/* shrink the transaction log ('snapshotDB2_log' is the logical log file name on the author's test instance - substitute your own) */
DBCC SHRINKFILE('snapshotDB2_log') WITH NO_INFOMSGS;
GO
/* verify the transaction log size */
SELECT 'Before Update with AFTER Trigger' AS whenRun,
       size
FROM sys.database_files AS df
WHERE type_desc = 'LOG';
GO
DECLARE @startTime DATETIME = GETDATE();
BEGIN TRY
/* update all the rows */
UPDATE dbo.test
SET i = 10;
END TRY
BEGIN CATCH
SELECT DATEDIFF(MILLISECOND, @startTime, GETDATE()) AS updateTimeWithAfterTrigger;
END CATCH
GO
/* check the transaction log size to show growth because the transaction happened */
SELECT 'After Update with AFTER Trigger' AS whenRun,
       size
FROM sys.database_files AS df
WHERE type_desc = 'LOG';
GO
Notice how long it takes to run, and the size of the log file before and after the update. This shows that all the logging has to take place before the trigger fires, and then the rollback happens on top of that, so you aren't really saving any time or log space.
You could try using an INSTEAD OF trigger: because it fires before the DML statement has taken place, no logging has happened yet, which should in theory save you time and log space - but it will totally tick off your users. Here's an example of that; notice how the log doesn't grow and it runs about 5 times faster (on the VM on my laptop):
IF OBJECT_ID('dbo.test', 'U') IS NOT NULL
BEGIN;
DROP TABLE dbo.test;
END;
GO
CREATE TABLE dbo.test (id INT IDENTITY(1,1), i INT);
GO
INSERT INTO dbo.test
SELECT TOP 1000000
ROW_NUMBER() OVER ( ORDER BY ao.object_id )
FROM sys.all_objects AS ao
CROSS JOIN sys.all_objects AS ao2;
GO
CREATE TRIGGER test_trigger ON dbo.test
INSTEAD OF UPDATE
AS
    /* note: as written, an allowed update (1000 rows or fewer) is silently discarded,
       because an INSTEAD OF trigger replaces the original statement entirely -
       see the last post in this thread about completing the modification yourself */
    IF (SELECT COUNT(*) FROM inserted) > 1000
    BEGIN
        PRINT 'some_message_here';
        ROLLBACK;
    END;
GO
/* mark the transaction log so pages are committed and it can be shrunk */
CHECKPOINT;
GO
/* shrink the transaction log */
DBCC SHRINKFILE('snapshotDB2_log') WITH NO_INFOMSGS;
GO
/* verify the transaction log size */
SELECT 'Before Update with INSTEAD OF Trigger' AS whenRun,
       size
FROM sys.database_files AS df
WHERE type_desc = 'LOG';
GO
DECLARE @startTime DATETIME = GETDATE();
BEGIN TRY
/* update all the rows */
UPDATE dbo.test
SET i = 10;
END TRY
BEGIN CATCH
SELECT DATEDIFF(MILLISECOND, @startTime, GETDATE()) AS updateTimeWithInsteadOfTrigger;
END CATCH
GO
/* check the transaction log size to show growth because the transaction happened */
SELECT 'After Update with INSTEAD OF Trigger' AS whenRun,
       size
FROM sys.database_files AS df
WHERE type_desc = 'LOG';
GO
This is one of those situations where I'd recommend handling it through a better security policy: don't grant users direct table access, at least not for modifications, and only allow modifications through stored procedures.
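Something like this is what I mean - a sketch using the dbo.test table from above (the role name is made up). The proc does the update in small batches so there's never one huge open transaction in the log, and the users only get EXECUTE:

CREATE PROCEDURE dbo.usp_UpdateTestBatched
    @newValue INT
AS
BEGIN
    SET NOCOUNT ON;
    WHILE 1 = 1
    BEGIN
        -- 50,000 rows per transaction keeps log usage small and reusable
        -- (assumes @newValue is NOT NULL; only touches rows that still need the change)
        UPDATE TOP (50000) dbo.test
        SET i = @newValue
        WHERE i <> @newValue OR i IS NULL;

        IF @@ROWCOUNT = 0 BREAK;
    END;
END;
GO
-- users modify data only through the proc
DENY INSERT, UPDATE, DELETE ON dbo.test TO UserRole;
GRANT EXECUTE ON dbo.usp_UpdateTestBatched TO UserRole;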
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
March 5, 2014 at 3:25 am
Save yourself the administrative heartache, if you can, and offload that activity to another database or server instance.
March 6, 2014 at 5:34 pm
You'd have to use an INSTEAD OF trigger, naturally. Those are a pain to write and maintain, since you must write the code yourself to complete the modification -- DELETE / INSERT / UPDATE -- if you decide to allow it. One other restriction (at least): you can't put an INSTEAD OF DELETE/UPDATE trigger on a table that has a cascading DELETE/UPDATE foreign key.
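For example, to actually let the small modifications through, the trigger has to re-issue the UPDATE itself (a sketch, using the dbo.test table from the earlier posts; SQL Server does not fire an INSTEAD OF trigger recursively for the statement it issues against its own table):

CREATE TRIGGER test_instead_of ON dbo.test
INSTEAD OF UPDATE
AS
BEGIN
    IF (SELECT COUNT(*) FROM inserted) > 1000000
    BEGIN
        RAISERROR('Too many rows in one statement; batch it.', 16, 1);
        ROLLBACK;
        RETURN;
    END;

    -- allowed: complete the update the trigger intercepted
    UPDATE t
    SET    i = ins.i
    FROM   dbo.test AS t
    JOIN   inserted AS ins ON ins.id = t.id;
END;
GO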
SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".