July 23, 2006 at 5:24 pm
Absolutely agree on the proper use of a correctly indexed temp table.
--Jeff Moden
Change is inevitable... Change for the better is not.
July 25, 2006 at 2:28 am
As another option to consider, the reason that the cursor was used in the example that started this thread was that the interface to usp_generate_purchase_order accepted one record at a time (via the parameters).
You could consider creating a temporary table that contains the details and then calling usp_generate_purchase_order once. This would, of course, require usp_generate_purchase_order to expect the temporary table, but it does eliminate the need for the cursor. It would also require some adjustment to usp_generate_purchase_order to accommodate set-based processing.
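A minimal sketch of the idea, assuming a hypothetical #po_details staging table and a rewritten usp_generate_purchase_order that reads from it (the column names and the dbo.OrderRequests source are invented for illustration):

-- Caller builds a temp table of detail rows, then makes one procedure call.
CREATE TABLE #po_details
(
    VendorID   INT         NOT NULL,
    ItemNumber VARCHAR(30) NOT NULL,
    Quantity   INT         NOT NULL,
    UnitPrice  MONEY       NOT NULL
);

INSERT INTO #po_details (VendorID, ItemNumber, Quantity, UnitPrice)
SELECT VendorID, ItemNumber, Quantity, UnitPrice
  FROM dbo.OrderRequests        -- hypothetical source of the detail rows
 WHERE Status = 'Approved';

-- A local temp table created by the caller is visible inside the procedure,
-- so a rewritten usp_generate_purchase_order could read #po_details directly.
EXEC dbo.usp_generate_purchase_order;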
July 25, 2006 at 6:25 am
Agreed. I don't know why 3rd party vendors do that to us. In those cases, though, I'll still do bulk processing to fill a staging table... then I'll run a proc (fired by a trigger) to doggedly send the vendor the records, one at a bloody time, on a FIFO basis until the table is empty. I set it up so that both the insert process and the send process can run at the same time without any mutual blocking or locking. It was pretty easy to do since one is doing a "SELECT/INSERT/DELETE" (xfer then archive) and the other is doing high speed inserts (new records). Still, I didn't use a cursor, but since it wouldn't be the gating factor, I don't see a real problem in using one for this type of thing (single record transmission). The read/send-one-until-there's-no-more method I wrote does, however, empty the table without needing an additional check for more rows after a cursor has finished.
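The send-one-until-there's-no-more loop can be done without a cursor along these lines; all of the object names here (dbo.SendStaging, dbo.SendArchive, dbo.usp_send_to_vendor) are made up for illustration, not the actual code:

DECLARE @ID INT;

WHILE 1 = 1
BEGIN
    -- Grab the oldest unsent row (FIFO) and quit when the staging table is empty.
    SELECT TOP 1 @ID = ID
      FROM dbo.SendStaging
     ORDER BY ID;

    IF @@ROWCOUNT = 0 BREAK;

    -- Send the single record to the vendor's one-row-at-a-time interface.
    EXEC dbo.usp_send_to_vendor @ID;

    -- Archive the row, then remove it from staging (xfer then archive).
    INSERT INTO dbo.SendArchive
    SELECT * FROM dbo.SendStaging WHERE ID = @ID;

    DELETE FROM dbo.SendStaging WHERE ID = @ID;
END;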
--Jeff Moden
Change is inevitable... Change for the better is not.
December 5, 2006 at 1:19 am
I tried all of this code, using a WHILE loop instead of a cursor.
One thing I noticed is that when you use the WHILE loop in your stored procedure it works better and gives faster results than the cursor. But when I run my stored procedure and look at the execution plan in Query Analyzer, I see the scan of the temp table variable shown in red.
The warning says there is a "bad use of statistics" for the table, and the table scan cost becomes 100% on every pass. To fix this I used a # temp table and created a clustered index on it, and that solved the problem. But my question is: why does the table scan cost go to 100%, and why does the red "bad use of statistics" warning appear for the table scan?
Is there any other alternative to using a table variable, and how should I avoid this kind of table scan problem?
December 5, 2006 at 6:19 am
That happens when the server has no statistics, or outdated statistics, for the table. As you found, using a temp table with a clustered index creates statistics for that table, which removes the problem.
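For reference, the change described above amounts to something like this (the table, column, and source names are placeholders):

-- A temp table can carry statistics; a clustered index gives the optimizer something to work with.
CREATE TABLE #Work
(
    OrderID INT   NOT NULL,
    Amount  MONEY NOT NULL
);

CREATE CLUSTERED INDEX IX_Work_OrderID ON #Work (OrderID);

-- Statistics get built for #Work as the data is loaded and queried,
-- which is what removes the red "statistics" warning from the plan.
INSERT INTO #Work (OrderID, Amount)
SELECT OrderID, Amount FROM dbo.SourceTable;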
December 5, 2006 at 8:23 am
You might want to turn on the auto-create statistics and auto-update statistics options for the database, if possible. If not, create your statistics on that table manually, and I would recommend setting up a scheduled job to run UPDATE STATISTICS regularly. You can also right-click on the red table scan and choose the menu option to create the missing statistics; you might have to do this a few times if SQL needs statistics for several columns. (If it really is a Table Scan, you probably want to look into whether it can be turned into an Index Scan or Index Seek at some point.)
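For a regular table (or a # temp table), those options look roughly like this; MyDatabase, dbo.MyTable, and MyColumn are placeholders:

-- Let SQL Server create and refresh statistics automatically.
ALTER DATABASE MyDatabase SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE MyDatabase SET AUTO_UPDATE_STATISTICS ON;

-- Or create / refresh them explicitly (e.g. from a scheduled job).
CREATE STATISTICS st_MyTable_MyColumn ON dbo.MyTable (MyColumn);
UPDATE STATISTICS dbo.MyTable;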
December 5, 2006 at 8:31 am
I thought we couldn't explicitly create stats on table variables?
December 5, 2006 at 8:58 am
Ahh I thought he was using a regular table, and not a table variable. Of course you're right, you can't manually create indexes or statistics on a table variable. You can declare a primary key on it at creation time, but that's about it.
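To illustrate the difference (a throwaway example):

-- Constraints can be declared inline when the table variable is defined...
DECLARE @Orders TABLE
(
    OrderID INT PRIMARY KEY CLUSTERED,
    Amount  MONEY NOT NULL
);

-- ...but statements like these are not allowed against a table variable:
-- CREATE INDEX IX_Orders_Amount ON @Orders (Amount);   -- invalid
-- CREATE STATISTICS st_Amount ON @Orders (Amount);     -- invalid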
December 5, 2006 at 9:05 am
OK, glad to know I'm not going crazy.
Now if only those voices could shut up.
December 18, 2006 at 12:03 am
Ah, the good ol' cursor debate.
So, for all those Cursor Daleks I have a problem. We have an existing loop-based process to copy rows of data from one system to another. But if a row has dud data, the process can crash, and continue to crash until manual intervention fixes the dud data.
Now I need to write another row-copy process for another 2 systems. I can use a set-based solution fine, but if an error crashes the process I'm back to the manual intervention bit. The idea was: if the set-based copy fails, loop the rows until the dirty culprit is found, then go back to set-based.
Testing showed a "static forward_only" cursor beats a while loop, but can I overcome my process-crashing problem with set logic?
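Roughly what I have in mind, with made-up table names, assuming TRY/CATCH is available (the row-at-a-time fallback could just as easily be the cursor I benchmarked):

BEGIN TRY
    -- First attempt: the straight set-based copy.
    INSERT INTO dbo.TargetRows (ID, Payload)
    SELECT ID, Payload
      FROM dbo.SourceRows;
END TRY
BEGIN CATCH
    -- The set-based copy failed, so fall back to row-by-row to isolate the dud data.
    DECLARE @ID INT;
    SELECT @ID = MIN(ID) FROM dbo.SourceRows;

    WHILE @ID IS NOT NULL
    BEGIN
        BEGIN TRY
            INSERT INTO dbo.TargetRows (ID, Payload)
            SELECT ID, Payload FROM dbo.SourceRows WHERE ID = @ID;
        END TRY
        BEGIN CATCH
            PRINT 'Dud row: ' + CAST(@ID AS VARCHAR(10));   -- flag it and carry on
        END CATCH;

        SELECT @ID = MIN(ID) FROM dbo.SourceRows WHERE ID > @ID;
    END;
END CATCH;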
Simon.
December 18, 2006 at 5:25 am
Yep... you're supposed to know what the bad rows are BEFORE you begin the transfer.
--Jeff Moden
Change is inevitable... Change for the better is not.
December 18, 2006 at 6:34 am
Or actually >>
1 - Load data into working table
2 - Find bad rows
3 - Clean / delete / move them out of the way
4 - Begin transfer and behold, it works every time... in theory.
Then you can manually try to edit the bad rows if any are found, update your cleaning scripts and be ready for the next import.
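A rough sketch of those four steps, with invented table names and a purely illustrative "bad row" rule:

-- 1 - Load data into a working table.
SELECT *
  INTO #Working
  FROM dbo.IncomingFeed;

-- 2 - Find bad rows (here "bad" means a missing key or a negative quantity).
SELECT *
  INTO #BadRows
  FROM #Working
 WHERE KeyCol IS NULL OR Quantity < 0;

-- 3 - Clean / delete / move them out of the way (they stay in #BadRows for manual review).
DELETE FROM #Working
 WHERE KeyCol IS NULL OR Quantity < 0;

-- 4 - Begin the transfer with only the clean rows.
INSERT INTO dbo.Target (KeyCol, Quantity)
SELECT KeyCol, Quantity
  FROM #Working;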
December 18, 2006 at 8:38 am
Or you might even be able to use the WHERE clause to combine steps 2, 3, and 4 into one step
What constitutes a "good row"? With that information you can use a statement like this:
INSERT INTO <target_table> ( <columns> )
SELECT <columns>
FROM <temp_table>
WHERE <row_is_good>
<row_is_good> would be replaced with the "good row" criteria, or it can be replaced with a NOT <row_is_bad> criteria.
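For instance (names invented), the combined statement might read:

INSERT INTO dbo.PurchaseOrders (VendorID, ItemNumber, Quantity)
SELECT VendorID, ItemNumber, Quantity
  FROM #Staging
 WHERE VendorID IS NOT NULL     -- the "good row" criteria go here
   AND Quantity > 0;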
December 18, 2006 at 8:58 am
Obviously... but my point was that validation is not optional and should always be performed before the error occurs, not after the user comes in and says "something doesn't look right to me".
December 18, 2006 at 2:33 pm
I figured that post was for Simon, and just built a little on your most excellent work, sir
I could see nothing but for the giants' shoulders on which I stand