Network performance on large inserts...

  • Hello All,

    Looking for advice on inserting data into a table using a SP called from a VB front end. The table in question has 40+ columns.

I am wondering about the network performance when I insert rows into the table. Should I pass all 40+ parameters to the SP at once, or should I break it into smaller SPs which each update only, say, 10 columns at a time?

    Just wondering if any of you have had a similar situation and which method you found most effective?

Guess I'll start writing test scripts for each approach now (one SP vs. segmented SPs) but would still welcome some feedback.

    Thanks,

    Clive

  • I like to reduce Round Trips. Andy Warren has a few articles on this subject. I'd make one sp with 40 parameters. Not that the network overhead is big, but it does exist and it's better to combine things if possible.

    Steve Jones

    steve@dkranch.net

Updating separate batches of columns doesn't sound too good - you'll have to use COM+/MTS and create a transaction to ensure ACIDity. COM+ works by having a master object which calls child objects within a transaction, and the arguments to the master object must include all the columns, so the network traffic to the COM+ server is the same as before and you're back where you started. It's more typical to minimise rows rather than columns - usually in our client-server apps we insert/update/delete a single record and don't use batches, and I would say for one row, 40+ parameters still sounds viable. Having said that, we don't use BLOB datatypes, and if you have some text/image fields, then granted a different approach might be needed.
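As a rough sketch of the single-row, all-columns-in-one-call shape being described - using Python's built-in sqlite3 as a stand-in for SQL Server/VB, and a made-up 5-column table instead of the real 40+ columns:

```python
import sqlite3

# Hypothetical stand-in table; the real table has 40+ columns, but the
# pattern is identical - one row, every column value in a single call.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER, customer TEXT, item TEXT, qty INTEGER, price REAL)"
)

def insert_order(conn, row):
    # One parameterised statement per row: a single round trip carries
    # all the column values, instead of several partial updates that
    # would then need a distributed transaction to stay consistent.
    conn.execute(
        "INSERT INTO orders (id, customer, item, qty, price) VALUES (?, ?, ?, ?, ?)",
        row,
    )
    conn.commit()

insert_order(conn, (1, "Clive", "widget", 3, 9.99))
```

With SQL Server the same idea would be one stored procedure taking all the parameters, called once per row.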

    Paul Ibison

    Paul.Ibison@btinternet.com



  • Hey!

    Well I wrote the large SP and included all columns and as you suggested Steve, there was little network overhead so I'll stick with that approach! Thanks Paul for your info too!

    Clive

You're welcome, and thanks for the follow-up.

    Steve Jones

    steve@dkranch.net

  • Just for the sake of overkill...

For each new trip to the server - say, a separate stored procedure call - it's not the data being transmitted that's the issue so much as the additional overhead. The client has to talk to SQL Server to kick off each stored procedure, and after all the data is transmitted back and forth, there's the final result that gets transmitted back from the server and acknowledged. Simple way to think of it:

    - Start Stored Procedure

    - Transmit Data Back and Forth

    - Server Report success or failure

So whether you're sending data through 1 or 4 stored procedures, you're sending about the same amount of data (a little more with 4, since you're passing four procedure names instead of one). However, you've gone from one initiation and one completion report to four of each.
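The arithmetic above can be sketched with a toy cost model - the latency and bandwidth figures here are invented purely for illustration, not measured:

```python
# Toy cost model: each procedure call pays a fixed round-trip cost
# (initiation + final success/failure report) on top of the time to
# move its payload. Numbers are made up for illustration.
LATENCY_MS = 5.0               # fixed per-call overhead
BANDWIDTH_BYTES_PER_MS = 1000.0

def call_cost_ms(payload_bytes, n_calls):
    """Total time to send payload_bytes split evenly across n_calls."""
    return n_calls * LATENCY_MS + payload_bytes / BANDWIDTH_BYTES_PER_MS

one_sp = call_cost_ms(4000, 1)    # all 40+ parameters in one call
four_sps = call_cost_ms(4000, 4)  # same data split across four calls
```

Same payload either way, but four calls pay the fixed overhead four times - which is the round-trip argument for a single wide stored procedure.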

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

