December 7, 2006 at 12:52 am
Just wondering: How fast is string-processing really in SQL Server (2005)? If I inserted 5000 records at once, would building up this string really be faster than just using a cursor?
December 7, 2006 at 1:49 am
I don't think this row-by-row operation will really help anyone, because the code doesn't give a clear view of the row-by-row processing.
Instead of using this approach, you can use a table variable with a WHILE loop, which helps a lot compared to a cursor.
Here is the logic:
-- Create a table variable with a row number, plus variables for the loop
DECLARE @InitialTable TABLE (RowNo INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, Field1 INT, Field2 INT)
DECLARE @IncrementalCounter INT   -- loop counter
DECLARE @RowCount INT             -- total number of rows to traverse
DECLARE @Variable INT             -- holds the current row's Field1 value
-- Load the rows to be processed into the table variable
INSERT INTO @InitialTable (Field1, Field2)
SELECT A1, A2 FROM Your_Table WHERE Your_Condition
----------------------------------------------------------
-- Initialize the counter and capture the row count once
SET @IncrementalCounter = 1
SELECT @RowCount = COUNT(*) FROM @InitialTable
-- Walk the table variable one row at a time by RowNo
WHILE @IncrementalCounter <= ISNULL(@RowCount, 0)
BEGIN
    -- Pick up the current row's value
    SELECT @Variable = Field1 FROM @InitialTable WHERE RowNo = @IncrementalCounter
    -- Per-row processing would go here
    SET @IncrementalCounter = @IncrementalCounter + 1
END
December 7, 2006 at 2:49 am
Better still, rewrite usp_Process to optionally accept tabular data, either via a table variable or XML. Okay, not always possible, but when it is, it will be far more efficient than repeated calls to the stored procedure.
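As a minimal sketch of that idea (the procedure name usp_Process_Set and the Field1/Field2 columns are just placeholders), in SQL Server 2005 the whole set of rows can be passed as one XML parameter and shredded inside the procedure with nodes()/value():
CREATE PROCEDURE usp_Process_Set
    @Items XML
AS
BEGIN
    SET NOCOUNT ON
    -- Shred the XML back into rows once, then do all the work set-based
    SELECT  i.value('(Field1)[1]', 'int') AS Field1,
            i.value('(Field2)[1]', 'int') AS Field2
    INTO    #Items
    FROM    @Items.nodes('/Items/Item') AS T(i)
    -- Set-based processing against #Items replaces the per-row calls
END
GO
-- One call carries any number of rows:
EXEC usp_Process_Set @Items = N'<Items><Item><Field1>1</Field1><Field2>2</Field2></Item><Item><Field1>3</Field1><Field2>4</Field2></Item></Items>'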
December 7, 2006 at 3:54 am
How can it work on a SELECT?
There's no trigger to process SELECTs.
December 7, 2006 at 5:27 am
Of course, dynamic SQL assumes a level of security that we may not want to enable.
The alternative is a traditional procedural WHILE loop, when set operations cannot be designed up front.
December 7, 2006 at 5:47 am
Does anyone know if SQL Server plans to add a FOR EACH ROW clause to triggers in the future, like Oracle's, which would let you treat the trigger as dealing with atomic values rather than a data set?
December 7, 2006 at 6:07 am
Another "we all hate cursors" article... I have to agree with Martijn's post on this one. Cursors, table variables, temporary tables, and so on... they are tools to be used on certain situations. We shouldn't disregard an option because "it's just bad". Cursors have their use, even if it is to write a non-critical, very small (in terms of data processing) procedure easily.
December 7, 2006 at 6:33 am
I agree with that. We use cursors a lot, not for iterating through result sets and performing operations on the data as you would with normal cursors, but for executing business rules. Here is an article some may find interesting that details how we execute dynamic business rules: http://articles.techrepublic.com.com/5100-9592_11-6123199.html
December 7, 2006 at 7:04 am
> Another "we all hate cursors" article
Not necessarily. I'm not disputing that cursors have their uses, and when you need one then you need one. However, it's certainly worth being aware of the cost, and particularly so when writing a trigger. I don't think I'd recommend the route that the author described, as I think it could hide the fact that the real cost is in executing a complex operation n separate times, and the cost of the cursor is small in comparison. I've certainly come across code like that. However, if it's in a time-critical piece of code (and unless I knew unequivocally otherwise, I'd automatically assume this applied to a trigger) I'd devote some effort to making the stored procedure (to use the example in the article) accept a set of input items and process them in a single pass.
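For example, as a rough sketch (the table, column, and procedure names are hypothetical), the trigger can package everything in inserted into a single XML value and make one call instead of n calls:
CREATE TRIGGER trg_YourTable_Insert ON YourTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Items XML
    -- One pass over inserted, one call to a set-based procedure
    SET @Items = (SELECT Field1, Field2
                  FROM   inserted
                  FOR XML PATH('Item'), ROOT('Items'), TYPE)
    EXEC usp_Process_Set @Items = @Items
END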
December 7, 2006 at 7:09 am
I don't think the string processing is a problem, but the query would either a) not work properly or b) fall over due to the 8K varchar limit (as the author in fact mentioned), which for 5,000 inserts only allows 1.6 characters per insert, so the dynamic SQL would be incorrect.
So the advantage of using cursors is that they would work.
Cursors get a bad press for some reason, but they're much nicer than setting ROWCOUNT to 1 and doing some arcane WHILE loop just to get one row back at a time, which I had to resort to in SQL 4.2 and earlier. I think a lot of this is due to them being used where they shouldn't be; I saw one example where someone had used a cursor to get some variables instead of a singleton SELECT.
Cursors are essential in circumstances where row-by-row processing is necessary. I often have to use cursors on the inserted and deleted tables to update higher-level totals.
This avoids multiple subquery statements to sum quantities to the higher levels, since Transact-SQL only returns a single variable from a subquery (a feature like Oracle's, returning several variables, would be useful), and the UPDATE statements got too complex for the compiler, besides being very long-winded.
Cursors are also useful where different updates are required depending on the row data or different calculations may have to be used before updating.
I'm not an expert, but cursors on the inserted and deleted tables should perform well since most of the trigger tables should be in memory.
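As a minimal sketch of that pattern (the ParentTotals table and the column names are only illustrative), inside an AFTER INSERT trigger:
DECLARE @ParentID INT, @Qty INT
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT ParentID, Qty
    FROM   inserted            -- only available inside the trigger body
OPEN cur
FETCH NEXT FROM cur INTO @ParentID, @Qty
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Different rows could take different update paths here, depending on the row data
    UPDATE ParentTotals
    SET    TotalQty = TotalQty + @Qty
    WHERE  ParentID = @ParentID
    FETCH NEXT FROM cur INTO @ParentID, @Qty
END
CLOSE cur
DEALLOCATE cur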
Table variables are an option, but the while loop on a table variable seems pretty similar to using a cursor and there's the overhead to create and populate the table variable, so I'm not sure that they offer a significant performance advantage. From a maintenance/development point of view, it's easier to skip the table creation by using the cursor definition as there's less chance of error.
I'll keep on using them and liking it.
December 7, 2006 at 7:13 am
Hear, hear, Kevin! Any process other than single-record GUI support that uses RBAR (pronounced "ree-bar", a "Modenism" for "Row By Agonizing Row") either has an incorrect database design behind it or was written by someone who simply doesn't understand the methods used to make a process set-based. And if you write things to handle more than one row at a time, you'll be amazed at how fast just one row will process and the kind of "concurrency" you can get out of your GUI procs, as well.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 7, 2006 at 7:49 am
Actually, in 2005, the solution would probably work if a VARCHAR(MAX) data type was used, which supports up to 2GB of data.
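As a hedged sketch of the article's string-building approach with that change (Field1 and usp_Process stand in for whatever the real trigger uses), the 8,000-character ceiling goes away:
DECLARE @SQL VARCHAR(MAX)
SET @SQL = ''
-- Concatenate one EXEC per inserted row; VARCHAR(MAX) can hold up to 2GB
SELECT @SQL = @SQL + 'EXEC usp_Process ' + CAST(Field1 AS VARCHAR(12)) + ';' + CHAR(13)
FROM   inserted
EXEC (@SQL)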
December 7, 2006 at 8:01 am
I have a speed example comparing a cursor with a set-based SELECT:
In a previous data migration project, using a cursor would have taken more than TWO days; with a set-based SELECT it took just 30 minutes. There were about 200K customer records in total.
December 7, 2006 at 8:18 am
I would need to test this in order to verify (not sure if it would work).
Perhaps having a timestamp on the table? When a SELECT is initiated this would change; then again, I believe it only changes when data is changed. With that in mind, you could incorporate a change to a field (a meaningless field used only for this purpose) which would fire the trigger. You could also have the trigger fire only on a change to THIS field, which means you could have a custom process fire for a query without it being tied to a record update/delete/add.
With this in mind, after reading the article I was going to post (and still will) about the concept of a TABLE VARIABLE. I use these often, when a set-based operation is simply not possible.
Upon entering this forum I see the first post is regarding the very concept that I planned on writing about. I believe I would change the logic just a bit, however, as the poster only mentions the identity column. I don't know that this would work, as the complete record is not present until the trigger has completed (committed).
That said, I would implement the TABLE VARIABLE as follows (keeping the logic in the trigger rather than having the trigger call an external procedure; this eliminates the dynamic portion as well as the parsing of XML, comma strings, etc.).
Let's say we are inserting simple customer data (multiple records / bulk insert). Another assumption is that we are not inserting more than something like 5,000 records at a time.
DECLARE @CustomerID varchar(20)
       ,@CustFirstName varchar(30)
       ,@CustLastName varchar(40)
       ,@CustPhone1 varchar(15)
       ,@RecordCount int
DECLARE @ProcessTable as TABLE
(
    CustomerID varchar(20)
   ,CustFirstName varchar(30)
   ,CustLastName varchar(40)
   ,CustPhone1 varchar(15)
   ,Processed bit NOT NULL DEFAULT 0
)
-- Capture the inserted rows; Processed starts at 0 for every row via the default
INSERT INTO @ProcessTable
    (CustomerID, CustFirstName, CustLastName, CustPhone1)
SELECT CustomerID, CustFirstName, CustLastName, CustPhone1
FROM Inserted
SELECT @RecordCount = COUNT(*)
FROM @ProcessTable
WHILE @RecordCount > 0
BEGIN
    -- Pick up the next unprocessed row
    SELECT TOP 1 @CustomerID = CustomerID, @CustFirstName = CustFirstName
        ,@CustLastName = CustLastName, @CustPhone1 = CustPhone1
    FROM @ProcessTable
    WHERE Processed = 0
    -- Perform your logic here, such as duplicating the record somewhere,
    -- checking the data, etc.
    SET @RecordCount = @RecordCount - 1
    UPDATE @ProcessTable
    SET Processed = 1
    WHERE CustomerID = @CustomerID
END
This is a very basic example, but I think you see the point. Now, another consideration is that there could be more than one update to a given CustomerID within this bulk insert. With that in mind I would add a field to my table variable; let's call it (InsertedID int PK). I would then update my processed flag based upon this identity column so that multiple records affecting the same CustomerID could be handled.
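A rough sketch of that change (still with the same placeholder columns) would look like this:
DECLARE @ProcessTable as TABLE
(
    InsertedID int IDENTITY(1,1) PRIMARY KEY
   ,CustomerID varchar(20)
   ,CustFirstName varchar(30)
   ,CustLastName varchar(40)
   ,CustPhone1 varchar(15)
   ,Processed bit NOT NULL DEFAULT 0
)
-- Populate from Inserted as before, then mark rows one at a time by InsertedID:
-- UPDATE @ProcessTable SET Processed = 1 WHERE InsertedID = @InsertedID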
Any thoughts?
Jeff
Regards, Jeffery Williams http://www.linkedin.com/in/jwilliamsoh