Most efficient way to insert 20 records via a sproc?

  • I have to insert ~ 20 records each time a procedure in a program runs.

    The table has just 4 columns: a PK ID, an FK ID, a varchar(128), and a varchar(50).

    I was wondering what the most efficient way to handle these inserts would be.

    I would like to allow the program to pass off the data to the stored procedure and for the operation to finish as quickly as possible, return success, and allow the program to continue on.

    One catch is that it may not ALWAYS be 20 records; it could be 0, or 5, or 20.

    Just depends on the options the user selects before they submit their data via the program.

    Any suggestions?

    Thanks!

  • It's only four columns? I'd suggest you look up a method written by Jeff Moden that uses a tally table to break down a comma-delimited list; there's a rough sketch of the idea below. That's probably going to be the fastest method.

    You could use XML, but that's pretty expensive, especially for small amounts of data like this.
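    Here's a minimal sketch of the idea (a hedged illustration, not Jeff Moden's actual code; the dbo.Tally name and the sample list are made up). The tally table is just a permanent table of sequential integers, and the split works by finding every delimiter position:

        -- One-time setup: a Tally table of sequential integers.
        SELECT TOP 8000 IDENTITY(INT, 1, 1) AS N
        INTO dbo.Tally
        FROM sys.all_columns a CROSS JOIN sys.all_columns b;

        -- Split a comma-delimited list into rows. Wrapping the list in
        -- leading/trailing commas means every item starts just after a comma.
        DECLARE @List VARCHAR(8000);
        SET @List = ',' + 'alpha,bravo,charlie' + ',';

        SELECT SUBSTRING(@List, t.N + 1,
                         CHARINDEX(',', @List, t.N + 1) - t.N - 1) AS Item
        FROM dbo.Tally t
        WHERE t.N < LEN(@List)
          AND SUBSTRING(@List, t.N, 1) = ',';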

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I assume you mean other than calling a sproc 5, 10, or 20 times? I'd go with a comma-delimited list then, but test it against just making the 20 calls.

  • Thinking about it a bit more... do the 20 rows come from the results of the procedure? You might want to look at the OUTPUT clause... I'm guessing here.
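    To illustrate what I mean (a rough sketch only; the table and column names here are invented, adjust to the real schema): the OUTPUT clause lets you capture the pre-update values while you change them, all in one statement:

        -- NULL out the bogus answers and save the original values in one go;
        -- "deleted" exposes the column values as they were before the UPDATE.
        UPDATE dbo.SurveyAnswers
        SET    Response = NULL
        OUTPUT inserted.SurveyID, deleted.FieldName, deleted.Response
        INTO   dbo.BogusResponses (SurveyID, FieldName, Response)
        WHERE  IsBogus = 1;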

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • You haven't provided enough information. Please read the following link about asking questions.

    http://weblogs.sqlteam.com/jeffs/archive/2008/05/13/question-needed-not-answer.aspx

    Grant has already guessed at the problem you are facing, but he shouldn't need to guess. The more information you provide up front, the more likely you'll get a response that is on target rather than shots in the dark.

    Drew

    J. Drew Allen
    Business Intelligence Analyst
    Philadelphia, PA

  • Here is the whole process.

    User enters a LOT of data in a survey, some 200 questions.

    Some of the responses are validated against some business logic, things like:

    Age = 18 years

    Years worked in field = 30 years

    That is illogical, so we assume it must have been some weird data entry error. However, instead of just prompting the user and saying "Are you SURE you were working in this field 12 years before you were born?", we just erase that data.

    So that piece of data gets erased (or rather, marked as "bogus").

    Now this all happens BEFORE the survey app submits any data to the database.

    What I would want to do is take those bogus fields, which the application basically erases, and store them in this "bogus responses" table.

    1. User clicks "Submit"

    2. Application reviews some fields and does some business logic

    3. 0 to ~ 20 of those fields could be marked as "bogus"

    4. Data for the 200 or so survey questions is passed to a sproc and that sproc writes the data to the database.

    5. All those "bogus" fields would THEN be passed to a different stored procedure and the ORIGINAL values of those fields marked "bogus" would be written to this table.

    For business reasons that can't be changed and aren't worth going into, we don't store the value of those "bogus" fields; they just get set to NULL.

    However, I don't want to just toss that data; I want to store it in a table "off to the side" so that I can review the original values the user selected to look for patterns and other weird stuff.

    That table stores the PK ID and the FK ID to link back to the submitted survey, plus a varchar(128) to hold the name of the field that was marked BOGUS and a varchar(50) to hold their response to that question.

    What concerns me is how to best write that data.

    I could call the sproc 20 times, passing the data for one record each time, but I wonder if that would be very inefficient.

    Maybe it's best to just call it once, pass it ALL the data I have, and have the sproc write the 20 records (or perhaps only 1 record; if there are 0, the sproc just doesn't get called). Something along the lines of the sketch below.

    I don't want to hold up "the process", so I want to find a nifty fast way to dump these records into the table as fast as possible so the rest of the application can get on with its life and this new "bogus fields" table doesn't slow things down.
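    Here's roughly what I'm picturing for that single call (just a sketch; the names are placeholders, it assumes a dbo.Tally table like the one sketched earlier, and it assumes the responses never contain the delimiter):

        CREATE PROCEDURE dbo.InsertBogusResponses
            @SurveyID   INT,
            @FieldNames VARCHAR(8000),  -- comma-delimited field names
            @Responses  VARCHAR(8000)   -- comma-delimited original values, same order
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @F VARCHAR(8000), @R VARCHAR(8000);
            SET @F = ',' + @FieldNames + ',';
            SET @R = ',' + @Responses + ',';

            WITH Fields AS (
                SELECT ROW_NUMBER() OVER (ORDER BY t.N) AS rn,
                       SUBSTRING(@F, t.N + 1,
                                 CHARINDEX(',', @F, t.N + 1) - t.N - 1) AS FieldName
                FROM dbo.Tally t
                WHERE t.N < LEN(@F) AND SUBSTRING(@F, t.N, 1) = ','
            ),
            Vals AS (
                SELECT ROW_NUMBER() OVER (ORDER BY t.N) AS rn,
                       SUBSTRING(@R, t.N + 1,
                                 CHARINDEX(',', @R, t.N + 1) - t.N - 1) AS Response
                FROM dbo.Tally t
                WHERE t.N < LEN(@R) AND SUBSTRING(@R, t.N, 1) = ','
            )
            -- Pair the two lists by position and write every row in one statement.
            INSERT INTO dbo.BogusResponses (SurveyID, FieldName, Response)
            SELECT @SurveyID, f.FieldName, v.Response
            FROM Fields f
            JOIN Vals v ON v.rn = f.rn;
        END;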


  • OK. So how are the bogus fields identified? In client code or TSQL? If on the client side... 20 calls might not be that expensive. If on the TSQL side... again, maybe that 20 calls isn't that bad. However, with either one you could make it into a single call. I'd strongly suggest you track down Jeff Moden's use of the Tally Table as a way of breaking down comma-delimited lists. I think that would be your best option. But remember, testing anything you do is key to being sure it's the right thing within your environment.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • So where I'm sitting right now is thinking (from what was suggested above):

    Pass that data to a sproc; the sproc uses a tally table (from that suggested article, thanks btw) to split the input (CSV, or something, whatever) into rows.

    Then I INSERT INTO the destination table, SELECTing the split rows as the source.

    I'm playing around with that now to get a feel for how it works, learn more about the tally table, and then do some time tests (something like the harness sketched below).

    Thoughts on that approach? Suggestions, etc...?
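    For the time tests, a crude harness like this is what I have in mind (the iteration count and parameter values are arbitrary, and dbo.InsertBogusResponses is the hypothetical sproc sketched above); run it once with the single batched call and again with 20 single-row EXECs to compare:

        -- Crude timing harness: run the batched call many times and report
        -- elapsed milliseconds.
        DECLARE @start DATETIME, @i INT;
        SET @start = GETDATE();
        SET @i = 0;

        WHILE @i < 1000
        BEGIN
            EXEC dbo.InsertBogusResponses
                 @SurveyID   = 1,
                 @FieldNames = 'Age,YearsWorked',
                 @Responses  = '18,30';
            SET @i = @i + 1;
        END;

        SELECT DATEDIFF(ms, @start, GETDATE()) AS ElapsedMs;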

  • Grant Fritchey (7/30/2009)

    OK. So how are the bogus fields identified? In client code or TSQL? ...

    All the work is done client side, only a single call is made to the database at the end to write the data. (Well obviously it calls stuff at the start to get "stuff", but as far as processing goes ALL processing is client side in this app)

    I think I posted my reply while you were typing this one. 🙂

    Following up on the tally table approach I outlined above.

    Thanks!

  • The submission to the database is pretty short. Why not just shoot the records over to the db after each question? That way you've saved the data if the app crashes, the user gets tired and walks away, etc.

  • Crazy business rules: we have to get the ENTIRE thing, or toss it all.

    Also we can't allow users to come back and pickup where they left off as we don't track who the users are in any way.

    (It complicates things more than one would think at first glance sadly)
