November 5, 2012 at 10:32 am
Peter Di (11/5/2012)
Jeff, I agree with you. Having thousands or millions of calls to a procedure like this will be a problem. However, stored procedures with multiple parameters are not widely used, and they usually have a much slower part after the parsing. Also, as you can see in the article, there are other ways to send values instead of using comma-separated strings ... so optimizing the parsing is not the thing I would start with. Anyway, you are right that there is a faster way to parse the values, and it would be stupid to ignore it (especially if you have already tested and prepared the functions for an easy implementation). If you don't mind, I will update my article to include one of the methods from your test and to include a link to the full article.
Thanks for the feedback, Peter.
We must work in different worlds, though. Most of our stored procedures do contain multiple parameters, and round-trip time for the GUI has been greatly improved by reworking how they're handled, both in their parsing and in their post-parse processing. In fact, such has been the case at the last 4 companies I've worked for.
The other thing about parsing is that if someone finds a parsing "routine" that works well for one row, they might try to use it on thousands of rows without knowing the performance ramifications. So I bring that fact up even when something is "guaranteed" to only be used for one row.
It's all positive proof that "It Depends". 🙂
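For illustration only, here's a rough sketch of the kind of set-based splitter I'm talking about. This is NOT the exact routine from the article I linked, and the parameter value is just a made-up sample:

DECLARE @sParameter VARCHAR(8000);
SET @sParameter = 'AZ,CA,TX';   -- made-up sample list

-- Build an inline "tally" of character positions, find the character after
-- each comma (plus position 1), and take the substring up to the next comma.
WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS v(N)),
     cteTally(N) AS (SELECT TOP (DATALENGTH(@sParameter))
                            ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                     FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c CROSS JOIN E1 d),
     cteStart(N1) AS (SELECT 1
                      UNION ALL
                      SELECT N + 1 FROM cteTally WHERE SUBSTRING(@sParameter, N, 1) = ',')
SELECT Item = SUBSTRING(@sParameter, N1,
                        ISNULL(NULLIF(CHARINDEX(',', @sParameter, N1), 0) - N1, 8000))
FROM cteStart;

Applied to a whole table of delimited strings with CROSS APPLY, something like that stays set-based instead of looping row by row, and that's where the big differences show up.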
--Jeff Moden
Change is inevitable... Change for the better is not.
November 5, 2012 at 10:36 am
N.Johnson (11/5/2012)
Just at the time when we're trying to find a way to improve our data import, this article comes along. Perhaps it is a God-send. While your article does not answer all of our questions, it may provide a jumping-off point toward a solution -- once we put our heads to our challenge. Our current method ties up the computer for an hour and a half bringing in the data, and we'd like something faster.
Here is our challenge:
We receive multiple tables of data in a pipe-delimited text file.
Each table begins with 2 rows that identify the columns (one row with a column "name", the second row with a "field-code" for the column).
Thereafter come the data rows until we encounter the next TABLE headers.
The method in this article may pave the way to (1) parse the table headers to build temporary tables, and then (2) insert the table data into the temporary table. The method may also lead us to a way to extract selected columns for insertion into the final data tables.
Thanks, again, for this timely article (at least for us it's timely).
Hi Norm,
I recommend that you actually write a separate post about this subject. Parsing input files usually is quite different than shredding a line or parameters. For example, the use of BULK INSERT would probably do much better for you. Once you start the separate thread, please feel free to PM me with the URL for the thread and I'll try to take a look at it. For sure, there will be many others to respond with some great ideas, as well.
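Just as a rough, hypothetical sketch of what I mean by BULK INSERT (the file path, staging table name, and row terminator below are assumptions, not anything from your actual feed):

-- Load the raw pipe-delimited file one line per row into a staging table.
-- The default field terminator is a tab; since the feed is pipe-delimited,
-- each whole line lands intact in RawLine and can be shredded afterwards.
CREATE TABLE #RawImport (RawLine VARCHAR(8000));

BULK INSERT #RawImport
FROM 'C:\Import\SampleFeed.txt'        -- hypothetical path; must be reachable by the SQL Server service
WITH (ROWTERMINATOR = '\n', TABLOCK);

From there, the two header rows for each embedded table can be picked out to build the temporary tables, and the remaining rows split on the pipes into them. One wrinkle: if you need to preserve the original line order (to know which header rows go with which data rows), you'll likely want an IDENTITY column on the staging table and a simple format file so BULK INSERT skips it. But, again, a separate thread would be the right place to work through the details.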
--Jeff Moden
Change is inevitable... Change for the better is not.
December 24, 2012 at 12:20 pm
Nice article.
December 25, 2012 at 8:12 am
Nice article!
September 25, 2013 at 7:26 pm
The validation fails when the list contains a valid parameter. I changed SET @sParameter = 'AZ,XX,YY,ZZ'; in this case XX, YY, and ZZ are classified as valid when they're not.
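One likely cause (a guess, since I don't have the article's exact check in front of me) is that the list is being validated as a whole instead of element by element after splitting. A sketch of a per-element check, with made-up table names and a stand-in splitter function:

-- Hypothetical per-element check.  #ValidCodes and its contents are made up,
-- and dbo.Split stands in for whichever splitter function the article uses.
CREATE TABLE #ValidCodes (Code CHAR(2) PRIMARY KEY);
INSERT INTO #ValidCodes (Code) VALUES ('AZ'), ('CA'), ('TX');

DECLARE @sParameter VARCHAR(8000);
SET @sParameter = 'AZ,XX,YY,ZZ';

SELECT s.Item,
       IsValid = CASE WHEN v.Code IS NULL THEN 0 ELSE 1 END
FROM dbo.Split(@sParameter, ',') AS s           -- split first...
LEFT JOIN #ValidCodes AS v ON v.Code = s.Item;  -- ...then validate each element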