April 27, 2012 at 5:03 am
Elliott Whitlow (4/26/2012)
I can't under ANY circumstances endorse an sp_OA method for doing this. As a matter of fact, I'm of the opinion that those should not be used anymore under any circumstances. I strongly recommend going SQLCLR for most things you would typically use sp_OA methods for. I also agree with Lynn: that's a LOT of plumbing to accomplish the same thing with Service Broker, which has some required plumbing of its own.
CEWII
+1 on that one.
sp_OA* procedures are somewhat "kludgy". The SQLCLR route offers a much more elegant solution.
April 29, 2012 at 4:29 pm
Elliott Whitlow (4/26/2012)
I can't under ANY circumstances endorse an sp_OA method for doing this. As a matter of fact, I'm of the opinion that those should not be used anymore under any circumstances. I strongly recommend going SQLCLR for most things you would typically use sp_OA methods for. I also agree with Lynn: that's a LOT of plumbing to accomplish the same thing with Service Broker, which has some required plumbing of its own.
CEWII
Heh... I love sp_OA*. Wanna know why? So I don't have to use SQLCLR! :-D:-P
--Jeff Moden
Change is inevitable... Change for the better is not.
April 30, 2012 at 10:14 am
Oh Jeff, you kill me, man. I have had way too many bad experiences with sp_OA methods, especially with in-process objects. Out-of-process is still bad, but not quite as bad.
I generally prefer SQLCLR. And in some cases, like text manipulation, it is just SCREAMING fast.
CEWII
May 1, 2012 at 5:10 am
I would also strongly discourage the use of the OLE Automation procedures (especially with a cursor as in the demo code). Service Broker may also not be the best option. Nevertheless, changing the original procedure to simply queue the request is a good plan. SQL Server Service Broker developer Remus Rusanu shows the correct way to do this, as well as discussing the limitations of Service Broker for this type of application in the blog post below:
http://rusanu.com/2010/03/26/using-tables-as-queues/
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 1, 2012 at 8:36 am
SQL Kiwi (5/1/2012)
I would also strongly discourage the use of the OLE Automation procedures (especially with a cursor as in the demo code). Service Broker may also not be the best option. Nevertheless, changing the original procedure to simply queue the request is a good plan. SQL Server Service Broker developer Remus Rusanu shows the correct way to do this, as well as discussing the limitations of Service Broker for this type of application in the blog post below:
Nice writeup. I am certain I will refer to the queue breakdown in the future. It's true, SB may not be the best (relative) choice for the OP. I am confident a homegrown queue could be built, and surely there are others...surely, not Shirley. However, the Ace is the relevant use case here, in my opinion. From the article (underline added):
However, Service Broker has one ace up its sleeve: Activation. Queue processing is often associated with event-driven programming, and the ability to launch a procedure to handle incoming rows as they are enqueued is always required with queues. Triggers don't work, as processing has to occur after the enqueue is committed. And scheduled SQL Agent jobs don't adapt to the variable rates and spikes queues experience: if they are too aggressive they'll burn CPU, but if they are too passive the latency increases even under no load. It is hard enough to tune them for a sweet spot under a constant load, but add a variable load with spikes and the task becomes impossible. Unfortunately there is no substitute for Activation: you have to handle the processing as a separate task that polls the queue for new rows. The only way to leverage activation, with its sweet mix of non-polling and self load balancing, is to use Service Broker queues.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 1, 2012 at 2:37 pm
opc.three (5/1/2012)
However, the Ace is the relevant use case here, in my opinion.
It might be; I don't know enough about the requirement to say. That's why I said SB may not be the best option; activation is certainly a benefit, but unless you are already familiar with SB, or want to use this as a learning exercise, many systems can be made to work well and quickly with a basic queue and a SQL Server Agent job. Hopefully, the article will provide enough information for the original questioner to decide which is best.
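Purely for illustration, a bare-bones version of the dequeue pattern from that post might look like the sketch below. The table and procedure names are made up for the sketch, not taken from the article, and strict FIFO ordering would need the extra care Remus describes.
-- Hypothetical work-queue table (names are illustrative only)
CREATE TABLE dbo.WorkQueue
(
    [WorkQueueId] BIGINT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    [Payload]     XML NOT NULL,
    [EnqueuedAt]  DATETIME NOT NULL DEFAULT GETUTCDATE()
);
GO
CREATE PROCEDURE dbo.usp_DequeueOneItem
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @work TABLE ([Payload] XML);
    -- Destructive read: READPAST skips rows locked by other sessions,
    -- so several dequeue sessions can run without blocking each other.
    DELETE TOP (1)
    FROM dbo.WorkQueue WITH (ROWLOCK, READPAST)
    OUTPUT deleted.[Payload] INTO @work;
    -- Process the dequeued item here
    SELECT [Payload] FROM @work;
END
GO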
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 2, 2012 at 9:00 am
SQL Kiwi (5/1/2012)
opc.three (5/1/2012)
However, the Ace is the relevant use case here, in my opinion.
It might be; I don't know enough about the requirement to say. That's why I said SB may not be the best option; activation is certainly a benefit, but unless you are already familiar with SB, or want to use this as a learning exercise, many systems can be made to work well and quickly with a basic queue and a SQL Server Agent job. Hopefully, the article will provide enough information for the original questioner to decide which is best.
I think it's an honest question whether SB is the right answer; compared to the sp_OA method, it's a no-brainer to me. But queues managed by SQL Agent jobs also work well. When you have to run that job every 1 minute or every 5, I'm thinking that method is less effective.
But like most things SQL, it depends.
CEWII
May 2, 2012 at 9:08 am
Elliott Whitlow (5/2/2012)
When you have to run that job every 1 minute or every 5, I'm thinking that method is less effective.
Perhaps. I think the last time I did this in production (and it was a while ago!) the job called a procedure that ran an infinite loop, checking the queue as often as necessary with a WAITFOR DELAY at an appropriate setting. There may also have been a second job that monitored the health of the first one, I don't recall.
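A minimal sketch of that kind of polling procedure, reusing the hypothetical dbo.WorkQueue / usp_DequeueOneItem names from the earlier sketch (the delay value is just a placeholder to tune):
-- Started once by an Agent job; drains the queue, then sleeps and repeats.
CREATE PROCEDURE dbo.usp_PollWorkQueue
AS
BEGIN
    SET NOCOUNT ON;
    WHILE (1 = 1)
    BEGIN
        -- Drain whatever is currently queued
        WHILE EXISTS (SELECT 1 FROM dbo.WorkQueue WITH (READPAST))
        BEGIN
            EXEC dbo.usp_DequeueOneItem;
        END;
        -- Sleep before polling again
        WAITFOR DELAY '00:00:10';
    END;
END
GO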
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 2, 2012 at 9:13 am
Elliott Whitlow (5/2/2012)
But like most things SQL, it depends.
From what has been shared so far the use case presents itself squarely in SB's wheelhouse, but I will leave it there, as there is not much point in debating "it depends".
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 2, 2012 at 10:06 am
SQL Kiwi (5/2/2012)
Elliott Whitlow (5/2/2012)
When you have to run that job every 1 minute or every 5, I'm thinking that method is less effective.
Perhaps. I think the last time I did this in production (and it was a while ago!) the job called a procedure that ran an infinite loop, checking the queue as often as necessary with a WAITFOR DELAY at an appropriate setting. There may also have been a second job that monitored the health of the first one, I don't recall.
That is one approach I've seen as well; I would likely use a feature of Agent to make sure it is always running. I would add a schedule to run, say, every 5 minutes; this way, any time the job stopped it would get restarted, but Agent would never have more than one copy running.
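A rough sketch of wiring that up with the Agent system procedures in msdb (the job, database, and procedure names here are hypothetical):
USE msdb;
GO
EXEC dbo.sp_add_job @job_name = N'Poll WorkQueue';
EXEC dbo.sp_add_jobstep
    @job_name = N'Poll WorkQueue',
    @step_name = N'Run polling procedure',
    @subsystem = N'TSQL',
    @database_name = N'MyDatabase',
    @command = N'EXEC dbo.usp_PollWorkQueue;';
-- Every-5-minutes schedule: if the job is already running Agent skips the
-- start, but if it has stopped for any reason it gets restarted.
EXEC dbo.sp_add_jobschedule
    @job_name = N'Poll WorkQueue',
    @name = N'Restart every 5 minutes',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 4,       -- units of minutes
    @freq_subday_interval = 5;
EXEC dbo.sp_add_jobserver @job_name = N'Poll WorkQueue'; -- local server
GO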
CEWII
May 2, 2012 at 12:01 pm
opc.three (5/2/2012)
Elliott Whitlow (5/2/2012)
But like most things SQL, it depends.
From what has been shared so far the use case presents itself squarely in SB's wheelhouse, but I will leave it there, as there is not much point in debating "it depends".
Sure, and if the original questioner has the time and inclination to learn Service Broker, that's all good. I would just say that it's not quite as simple or easy to get right the first time as perhaps one might think. Queues are another (perhaps simpler and quicker) option, that's all. I won't be upset either way 🙂
Paul White
SQLPerformance.com
SQLkiwi blog
@SQL_Kiwi
May 2, 2012 at 12:15 pm
Elliott Whitlow (5/2/2012)
SQL Kiwi (5/2/2012)
Elliott Whitlow (5/2/2012)
When you have to run that job every 1 minute or every 5, I'm thinking that method is less effective.
Perhaps. I think the last time I did this in production (and it was a while ago!) the job called a procedure that ran an infinite loop, checking the queue as often as necessary with a WAITFOR DELAY at an appropriate setting. There may also have been a second job that monitored the health of the first one, I don't recall.
That is one approach I've seen as well; I would likely use a feature of Agent to make sure it is always running. I would add a schedule to run, say, every 5 minutes; this way, any time the job stopped it would get restarted, but Agent would never have more than one copy running.
CEWII
Running a job every minute, or even every 5, actually contributes to cache bloat (at least in SQL 2005 & 2008) because of how Agent adds to the msdb history tables. Running a job indefinitely solves that problem, but interferes with some monitoring tools since to some applied metrics it can appear as if the job were hung. So, I guess we're back to "it depends".
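For anyone curious how much history a frequently scheduled job accumulates, a quick look at the Agent tables (subject to the instance's job history retention settings):
-- History rows per job in msdb; frequent schedules inflate this quickly
SELECT  j.name AS job_name,
        COUNT(*) AS history_rows,
        MIN(h.run_date) AS oldest_run_date   -- int, yyyymmdd
FROM    msdb.dbo.sysjobhistory AS h
        INNER JOIN msdb.dbo.sysjobs AS j
            ON j.job_id = h.job_id
GROUP BY j.name
ORDER BY history_rows DESC;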
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 2, 2012 at 9:46 pm
opc.three (5/2/2012)
Elliott Whitlow (5/2/2012)
SQL Kiwi (5/2/2012)
Elliott Whitlow (5/2/2012)
When you have to run that job every 1 minute or every 5, I'm thinking that method is less effective.
Perhaps. I think the last time I did this in production (and it was a while ago!) the job called a procedure that ran an infinite loop, checking the queue as often as necessary with a WAITFOR DELAY at an appropriate setting. There may also have been a second job that monitored the health of the first one, I don't recall.
That is one approach I've seen as well; I would likely use a feature of Agent to make sure it is always running. I would add a schedule to run, say, every 5 minutes; this way, any time the job stopped it would get restarted, but Agent would never have more than one copy running.
CEWII
Running a job every minute, or even every 5, actually contributes to cache bloat (at least in SQL 2005 & 2008) because of how Agent adds to the msdb history tables. Running a job indefinitely solves that problem, but interferes with some monitoring tools since to some applied metrics it can appear as if the job were hung. So, I guess we're back to "it depends".
I think you might have missed my point. If you have a process that is supposed to run all the time and you add a 5-minute schedule, only one copy will ever be running; if the job is still running, it won't even TRY to start another one in 5 minutes.
CEWII
May 2, 2012 at 10:05 pm
I got you; I was just pointing out that the always-running polling option has downsides, as does the polling option where the proc exits, as does SB. My original comment was that what was shared describes a common SB use case, possibly the use case at the forefront during its conception...not completely sure how we got to parsing this one apart at this level.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
May 3, 2012 at 10:13 am
SQL Kiwi (5/2/2012)
opc.three (5/2/2012)
Elliott Whitlow (5/2/2012)
But like most things SQL, it depends.
From what has been shared so far the use case presents itself squarely in SB's wheelhouse, but I will leave it there, as there is not much point in debating "it depends".
Sure, and if the original questioner has the time and inclination to learn Service Broker, that's all good. I would just say that it's not quite as simple or easy to get right the first time as perhaps one might think. Queues are another (perhaps simpler and quicker) option, that's all. I won't be upset either way 🙂
I don't know about this one, Paul. I see this type of comment a lot on the net...that the bar is simply too high to get started with SB so use SQL Agent or an infinite loop or some other home-brew that will somehow be easier to understand or support...and it plays well in the Forums but for my money it simply is not true.
Taking the sample code from rusanu's blog post on asynchronous proc execution, I had an async proc working in about 10 minutes on my local AdventureWorks2008R2 database. Adding support for proc parameters is outlined in another blog post.
Here is rusanu's code with formatting and some of my additions to have it run from top to bottom on any 2008R2 instance with an AdventureWorks2008R2 database:
-- http://rusanu.com/2009/08/05/asynchronous-procedure-execution/
USE AdventureWorks2008R2
GO
-----------------------------------------------------------------------------------------
-- configure the DB for this demo
ALTER AUTHORIZATION ON DATABASE::AdventureWorks2008R2 TO [sa]
GO
ALTER DATABASE AdventureWorks2008R2 SET SINGLE_USER WITH ROLLBACK IMMEDIATE
GO
ALTER DATABASE AdventureWorks2008R2 SET ENABLE_BROKER
GO
ALTER DATABASE AdventureWorks2008R2 SET MULTI_USER
GO
-----------------------------------------------------------------------------------------
-- cleanup
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'AsyncExecResults')
AND type IN (N'U') )
DROP TABLE [AsyncExecResults]
GO
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'usp_AsyncExecActivated')
AND type IN (N'P', N'PC') )
DROP PROCEDURE usp_AsyncExecActivated;
GO
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'usp_AsyncExecInvoke')
AND type IN (N'P', N'PC') )
DROP PROCEDURE [usp_AsyncExecInvoke];
GO
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'usp_MyLongRunningProcedure')
AND type IN (N'P', N'PC') )
DROP PROCEDURE usp_MyLongRunningProcedure;
GO
IF EXISTS ( SELECT *
FROM sys.objects
WHERE object_id = OBJECT_ID(N'usp_MyFaultyProcedure')
AND type IN (N'P', N'PC') )
DROP PROCEDURE usp_MyFaultyProcedure;
GO
IF EXISTS ( SELECT *
FROM sys.services
WHERE name = N'AsyncExecService' )
DROP SERVICE [AsyncExecService]
GO
IF EXISTS ( SELECT *
FROM sys.service_queues
WHERE name = N'AsyncExecQueue' )
DROP QUEUE [dbo].[AsyncExecQueue]
GO
-----------------------------------------------------------------------------------------
-- build objects
CREATE TABLE [AsyncExecResults]
(
[token] UNIQUEIDENTIFIER PRIMARY KEY,
[submit_time] DATETIME NOT NULL,
[start_time] DATETIME NULL,
[finish_time] DATETIME NULL,
[error_number] INT NULL,
[error_message] NVARCHAR(2048) NULL
);
GO
CREATE QUEUE [AsyncExecQueue];
GO
CREATE SERVICE [AsyncExecService] ON QUEUE [AsyncExecQueue] ([DEFAULT]);
GO
CREATE PROCEDURE usp_AsyncExecActivated
AS
BEGIN
SET NOCOUNT ON;
DECLARE @h UNIQUEIDENTIFIER,
@messageTypeName SYSNAME,
@messageBody VARBINARY(MAX),
@xmlBody XML,
@procedureName SYSNAME,
@startTime DATETIME,
@finishTime DATETIME,
@execErrorNumber INT,
@execErrorMessage NVARCHAR(2048),
@xactState SMALLINT,
@token UNIQUEIDENTIFIER;
BEGIN TRANSACTION;
BEGIN TRY;
RECEIVE TOP(1)
@h = [conversation_handle],
@messageTypeName = [message_type_name],
@messageBody = [message_body]
FROM [AsyncExecQueue];
IF (@h IS NOT NULL)
BEGIN
IF (@messageTypeName = N'DEFAULT')
BEGIN
-- The DEFAULT message type is a procedure invocation.
-- Extract the name of the procedure from the message body.
--
SELECT @xmlBody = CAST(@messageBody AS XML);
SELECT @procedureName = @xmlBody.value('(//procedure/name)[1]', 'sysname');
SAVE TRANSACTION usp_AsyncExec_procedure;
SELECT @startTime = GETUTCDATE();
BEGIN TRY
EXEC @procedureName;
END TRY
BEGIN CATCH
-- This catch block tries to deal with failures of the procedure execution
-- If possible it rolls back to the savepoint created earlier, allowing
-- the activated procedure to continue. If the executed procedure
-- raises an error with severity 16 or higher, it will doom the transaction
-- and thus rollback the RECEIVE. Such case will be a poison message,
-- resulting in the queue disabling.
--
SELECT @execErrorNumber = ERROR_NUMBER(),
@execErrorMessage = ERROR_MESSAGE(),
@xactState = XACT_STATE();
IF (@xactState = -1)
BEGIN
ROLLBACK;
RAISERROR(N'Unrecoverable error in procedure %s: %i: %s', 16, 10,
@procedureName, @execErrorNumber, @execErrorMessage);
END
ELSE
IF (@xactState = 1)
BEGIN
ROLLBACK TRANSACTION usp_AsyncExec_procedure;
END
END CATCH
SELECT @finishTime = GETUTCDATE();
SELECT @token = [conversation_id]
FROM sys.conversation_endpoints
WHERE [conversation_handle] = @h;
IF (@token IS NULL)
BEGIN
RAISERROR(N'Internal consistency error: conversation not found', 16, 20);
END
UPDATE [AsyncExecResults]
SET [start_time] = @startTime,
[finish_time] = @finishTime,
[error_number] = @execErrorNumber,
[error_message] = @execErrorMessage
WHERE [token] = @token;
IF (0 = @@ROWCOUNT)
BEGIN
RAISERROR(N'Internal consistency error: token not found', 16, 30);
END
END CONVERSATION @h;
END
ELSE
IF (@messageTypeName = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog')
BEGIN
END CONVERSATION @h;
END
ELSE
IF (@messageTypeName = N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
BEGIN
DECLARE @errorNumber INT,
@errorMessage NVARCHAR(4000);
SELECT @xmlBody = CAST(@messageBody AS XML);
WITH XMLNAMESPACES (DEFAULT N'http://schemas.microsoft.com/SQL/ServiceBroker/Error')
SELECT @errorNumber = @xmlBody.value ('(/Error/Code)[1]', 'INT'),
@errorMessage = @xmlBody.value ('(/Error/Description)[1]', 'NVARCHAR(4000)');
-- Update the request with the received error
SELECT @token = [conversation_id]
FROM sys.conversation_endpoints
WHERE [conversation_handle] = @h;
UPDATE [AsyncExecResults]
SET [error_number] = @errorNumber,
[error_message] = @errorMessage
WHERE [token] = @token;
END CONVERSATION @h;
END
ELSE
BEGIN
RAISERROR(N'Received unexpected message type: %s', 16, 50, @messageTypeName);
END
END
ELSE
BEGIN
RAISERROR(N'Unrecoverable error in procedure no @h', 16, 10);
END
COMMIT;
END TRY
BEGIN CATCH
DECLARE @error INT,
@message NVARCHAR(2048);
SELECT @error = ERROR_NUMBER(),
@message = ERROR_MESSAGE(),
@xactState = XACT_STATE();
IF (@xactState <> 0)
BEGIN
ROLLBACK;
END;
RAISERROR(N'Error: %i, %s', 1, 60, @error, @message) WITH LOG;
END CATCH
END
GO
ALTER QUEUE [AsyncExecQueue]
WITH ACTIVATION (
PROCEDURE_NAME = [usp_AsyncExecActivated],
MAX_QUEUE_READERS = 1,
EXECUTE AS owner,
STATUS = ON
);
GO
CREATE PROCEDURE [usp_AsyncExecInvoke]
@procedureName SYSNAME,
@token UNIQUEIDENTIFIER OUTPUT
AS
BEGIN
DECLARE @h UNIQUEIDENTIFIER,
@xmlBody XML,
@trancount INT;
SET NOCOUNT ON;
SET @trancount = @@TRANCOUNT;
IF @trancount = 0
BEGIN TRANSACTION
ELSE
SAVE TRANSACTION usp_AsyncExecInvoke;
BEGIN TRY
BEGIN DIALOG CONVERSATION @h
FROM SERVICE [AsyncExecService]
TO SERVICE N'AsyncExecService', 'current database'
WITH ENCRYPTION = OFF;
SELECT @token = [conversation_id]
FROM sys.conversation_endpoints
WHERE [conversation_handle] = @h;
SELECT @xmlBody = (
SELECT @procedureName AS [name]
FOR
XML PATH('procedure'),
TYPE
);
SEND ON CONVERSATION @h (@xmlBody);
INSERT INTO [AsyncExecResults]
([token], [submit_time])
VALUES (@token, GETUTCDATE());
IF @trancount = 0
COMMIT;
END TRY
BEGIN CATCH
DECLARE @error INT,
@message NVARCHAR(2048),
@xactState SMALLINT;
SELECT @error = ERROR_NUMBER(),
@message = ERROR_MESSAGE(),
@xactState = XACT_STATE();
IF @xactState = -1
ROLLBACK;
IF @xactState = 1
AND @trancount = 0
ROLLBACK
IF @xactState = 1
AND @trancount > 0
-- Roll back to the savepoint created at the top of this procedure
ROLLBACK TRANSACTION usp_AsyncExecInvoke;
RAISERROR(N'Error: %i, %s', 16, 1, @error, @message);
END CATCH
END
GO
CREATE PROCEDURE [usp_MyLongRunningProcedure]
AS
BEGIN
WAITFOR DELAY '00:00:05';
END
GO
CREATE PROCEDURE [usp_MyFaultyProcedure]
AS
BEGIN
SET NOCOUNT ON;
DECLARE @t TABLE (id INT PRIMARY KEY);
-- Deliberately violate the primary key so the activated procedure's
-- error handling has something to capture
INSERT INTO @t (id) VALUES (1);
INSERT INTO @t (id) VALUES (1);
END
GO
-----------------------------------------------------------------------------------------
-- run demo code
DECLARE @token UNIQUEIDENTIFIER;
EXEC usp_AsyncExecInvoke
N'usp_MyLongRunningProcedure',
@token OUTPUT;
SELECT *
FROM [AsyncExecResults]
WHERE [token] = @token;
GO
DECLARE @token UNIQUEIDENTIFIER;
EXEC usp_AsyncExecInvoke
N'usp_MyFaultyProcedure',
@token OUTPUT;
SELECT *
FROM [AsyncExecResults]
WHERE [token] = @token;
GO
WAITFOR DELAY '00:00:10';
SELECT *
FROM [AsyncExecResults];
GO
I think SB is passed over with a simple wave of the hand far too often, citing "complexity", perceived or real. SB is the right tool for many jobs, and the wrong one for many still. Of course the skill set of the implementer should be a consideration; it will depend (there is that word again) on a lot of human factors, not just technical details, but SB is sold short too often, IMHO.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato