August 14, 2009 at 7:51 am
John,
That was one of the best posts I've read on this forum. Great response.
If you ever have time, that would be a great subject for an article too.
Cheers
Kindest Regards,
Frank Bazan
August 14, 2009 at 7:57 am
Now, that is a brilliant idea!
I think that with minimal changes to your long posts, and maybe a couple of "technical" details regarding the change process and implementation (as much as is possible given the highly secure context in which it was done), this could be a fantastic source of inspiration for many people here.
Greg
August 14, 2009 at 8:24 am
Hi John,
One last thing though, if I may... How is message reception/delivery guaranteed with Service Broker? Can you, for example, set a retry count etc., and have a COM+ application sending the message perform a rollback using the COM+ transactional context (which uses DTC) on its whole transaction? I'm gonna look it up later on anyway, but I'm curious about how this compares overall to a messaging service like JMS.
Thanks,
Greg
August 14, 2009 at 10:02 am
ducon (8/14/2009)
Hi John,
One last thing though, if I may... How is message reception/delivery guaranteed with Service Broker? Can you, for example, set a retry count etc., and have a COM+ application sending the message perform a rollback using the COM+ transactional context (which uses DTC) on its whole transaction? I'm gonna look it up later on anyway, but I'm curious about how this compares overall to a messaging service like JMS.
Thanks,
Greg
I'm not overly familiar with COM+ application development, but Service Broker does guarantee message delivery. You'll see some of the details on that as you read through the BOL sections; there's a good deal of information in BOL on Service Broker.
Off the top of my head, I don't think you can interface with Service Broker directly from the COM+ application code. Since Service Broker is internal to SQL Server, it is meant for message-oriented communication within SQL Server or from SQL Server to SQL Server. You can, however, create and read messages using stored procedures, so your COM+ applications can still take advantage of what SB allows you to do. Another way to 'get' messages into a SB queue is what I discussed earlier, where the application thinks that it is adding a row to a table and the table/view is just an interface into the SB queue via trigger code.
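To make that concrete, the stored procedure side is just plain T-SQL. Here's a minimal sketch; the service, contract, and message type names are invented for illustration, not from any real system:

-- Sketch only: a procedure the application (COM+, .NET, whatever) can call.
CREATE PROCEDURE dbo.usp_QueueMessage
    @MessageBody XML
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @ConversationHandle UNIQUEIDENTIFIER;

    BEGIN TRANSACTION;

    -- Open a dialog between the initiator and target services.
    BEGIN DIALOG CONVERSATION @ConversationHandle
        FROM SERVICE [//Demo/InitiatorService]
        TO SERVICE '//Demo/TargetService'
        ON CONTRACT [//Demo/Contract]
        WITH ENCRYPTION = OFF;

    -- SEND is transactional: if the caller rolls back, the message never
    -- leaves; once committed, Service Broker guarantees delivery.
    SEND ON CONVERSATION @ConversationHandle
        MESSAGE TYPE [//Demo/Message] (@MessageBody);

    COMMIT TRANSACTION;
END;

Because the SEND sits inside the caller's transaction, it rolls back or commits along with everything else the procedure's caller does.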
In fact, I designed another application that needed the ability to BULK INSERT directly from .NET code into the DB. All incoming rows needed to go through a matching process to identify payment items that were possibly fraudulent or accidental processing duplicates. The matching logic had to compare the incoming results to a large repository of payments, so I didn't want to hold the transaction open for the import. I designed a view, with an INSTEAD OF trigger, that the BULK INSERT inserted rows into. This gave me the ability to split the incoming rows into parent/child tables as well as drop a message into a SB queue that triggered the matching logic. So the transaction was not held open to wait for the matching, and the BULK INSERT was extremely fast. The activation procedure on the SB queue ran the matching logic on the incoming batches of payments.
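In rough outline, the view/trigger pair looked something like this. All the table, column, and service names here are invented and the real design had more to it:

-- Sketch only: the bulk load targets the view, not the base tables.
CREATE VIEW dbo.PaymentImport
AS
SELECT  h.BatchID, d.PaymentID, d.Amount
FROM    dbo.PaymentBatch AS h
JOIN    dbo.PaymentItem  AS d ON d.BatchID = h.BatchID;
GO

-- Note: bulk loads skip triggers by default; BULK INSERT needs the
-- FIRE_TRIGGERS option (SqlBulkCopy needs SqlBulkCopyOptions.FireTriggers).
CREATE TRIGGER dbo.trg_PaymentImport
ON dbo.PaymentImport
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- Split the incoming rows into the parent/child tables.
    INSERT INTO dbo.PaymentBatch (BatchID)
    SELECT DISTINCT BatchID FROM inserted
    WHERE BatchID NOT IN (SELECT BatchID FROM dbo.PaymentBatch);

    INSERT INTO dbo.PaymentItem (BatchID, PaymentID, Amount)
    SELECT BatchID, PaymentID, Amount FROM inserted;

    -- Drop a message on the SB queue so the activation procedure runs
    -- the matching logic asynchronously, after this transaction commits.
    DECLARE @h UNIQUEIDENTIFIER, @msg XML;
    SET @msg = (SELECT DISTINCT BatchID FROM inserted
                FOR XML PATH('Batch'), ROOT('Batches'), TYPE);

    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Matching/InitiatorService]
        TO SERVICE '//Matching/TargetService'
        ON CONTRACT [//Matching/Contract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @h MESSAGE TYPE [//Matching/Request] (@msg);
END;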
Long story short, the .NET code did not interact directly with SB, but through some crafty coding, we were able to leverage the asynchronous message processing capabilities of SB.
August 14, 2009 at 10:10 am
Very interesting. I remember the "table insertion" trick with the view, but the other tricks you describe here are interesting too.
Thanks,
Greg
August 27, 2009 at 2:01 pm
Hi there,
Just a quick update to this post for the folks who'd be interested. I ran a few quick tests to begin replacing my XML Source transform.
I tried to achieve that by creating a stored procedure that uses the xml datatype to parse the XML message's nodes and insert the required values into the destination staging table. This stored procedure is used as an OLE DB Command inside a data flow task. So there is still an SSIS package used here, but at least no more XML Source transform.
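For anyone curious, the heart of it is the xml datatype's nodes()/value() methods. A minimal sketch, assuming a made-up message shape (the real schema is more involved, and all names here are invented):

-- Sketch only: shreds a message shaped like
-- <Payments><Payment id="1" amount="9.99" /></Payments>
CREATE PROCEDURE dbo.usp_ShredMessage
    @Message XML
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.PaymentStaging (PaymentID, Amount)
    SELECT  p.value('@id',     'INT'),
            p.value('@amount', 'DECIMAL(18,2)')
    FROM    @Message.nodes('/Payments/Payment') AS t(p);
END;

In the data flow, the OLE DB Command then just runs something like EXEC dbo.usp_ShredMessage ? with the message column mapped to the parameter.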
I'm very pleased to report that in the first tests, we are able to process about 400 times as many messages using this method. It takes us about 1 minute to process 100 messages the "old" way, while we can process about 40,000 in the same amount of time with the new version. We are able to use the multithreading ability of the data flow, and the stored procedure's XML queries just kick the XML Source's a$$ 😛
Again, thanks a lot for the help John. Thanks to your ideas, we will probably roll this out first thing, fairly soon, after looking at the improvements it's given us.
Cheers,
Greg
August 27, 2009 at 2:58 pm
Thanks for the update, Greg. You should get a pretty good pat on the back for that improvement... from 100 to 40,000! That's impressive!
August 27, 2009 at 3:05 pm
Just an after-thought, Greg, but I wanted to point out that you can still do schema validation on the incoming XML using XML schema collections. I'm not sure whether that is part of your requirements, but that is one thing the XML Source was doing for you behind the scenes that may or may not be important to you.
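If you ever decide to use them, it's just a matter of typing the xml against a schema collection. A quick sketch, where the XSD content is only a stand-in for your real schema:

-- Sketch only: register a schema collection in the database.
CREATE XML SCHEMA COLLECTION dbo.PaymentSchema AS
N'<xsd:schema xmlns:xsd="http://www.w3.org/2001/XMLSchema">
    <xsd:element name="Payment">
        <xsd:complexType>
            <xsd:attribute name="id"     type="xsd:int"     use="required" />
            <xsd:attribute name="amount" type="xsd:decimal" use="required" />
        </xsd:complexType>
    </xsd:element>
</xsd:schema>';
GO

-- Typed xml: SQL Server validates every document on insert/assignment
-- and rejects anything that does not conform to the collection.
CREATE TABLE dbo.PaymentMessage
(
    MessageID   INT IDENTITY(1,1) PRIMARY KEY,
    MessageBody XML(dbo.PaymentSchema)
);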
August 27, 2009 at 3:41 pm
Hi John,
Yes, I have read about XML schema collections, but thanks for the hint. I don't know if I'll be using them, though, since we control both ends of the pipe in this case. My tests do not make use of them, but from what I have understood, that would only have affected the speed of inserting the XML messages into my table; it should not have affected the speed of the XML "explosion" into the staging tables (which is the main issue I have). It would actually be better, as an invalid message would not be inserted into my message table and cause a failure, but would rather be rejected at the previous stage.
Does the XML Source actually perform XSD validation at run time regarding mandatory/optional/min and max occurrences for fields? I thought it did not really do much more than create an initial mapping that is used at run time. I could be wrong, but the note here (http://msdn.microsoft.com/en-us/library/ms140277.aspx) seems to confirm that.
I think that once everything is taken into account, the speed improvement will not be as big as what I've reported, but there will still be a great improvement over the current way we are doing it, and the 400-times figure pushed the decision to do this as early as possible :).
Greg
August 27, 2009 at 3:55 pm
I see from your link that:
The XML source does not validate the data in the XML file against the XSD.
It must just do a basic structural validation. You'll notice that if you send an XML file into the XML Source that does not match the XSD, it will not process the file. From that, I gathered that some validation was happening.
August 27, 2009 at 4:38 pm
John Rowan (8/27/2009)
I see from your link that:
The XML source does not validate the data in the XML file against the XSD.
It must just do a basic structural validation. You'll notice that if you send an XML file into the XML Source that does not match the XSD, it will not process the file. From that, I gathered that some validation was happening.
Yes, I think that's what's happening. I mean, if you even change the size of a string, the XML Source will display a warning asking for a refresh of its metadata, so there is some sort of validation performed; I'm just not sure it's a "real" schema-conformance validation, as you pointed out.
Greg