What is the best way to transfer around 100 million rows from one table to another on the same server so that the transaction log file does not get full?

  • Does anyone know of a script to move a very large number of rows (around 100 million) from one table to another on the same server so that the transaction log file does not get full?

    Thanks in advance.

    --sntiwary


  • You can try putting the destination database into bulk-logged recovery and then doing a SELECT INTO from the source table.  If the destination table already exists, you may have to BCP out the required data and then BCP in (or BULK INSERT) that data, again with bulk-logged recovery in effect; see the sketch below.
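
    A minimal sketch, assuming a database named MyDB with a hypothetical source table dbo.SourceTable and destination dbo.DestTable (all names here are placeholders):

        -- Switch to bulk-logged recovery so SELECT INTO is minimally logged
        ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;

        -- SELECT INTO creates dbo.DestTable and qualifies for minimal logging
        SELECT *
        INTO dbo.DestTable
        FROM dbo.SourceTable;

        -- Switch back and take a log backup; note that a log backup covering
        -- a minimally logged operation cannot be restored to a point in time
        ALTER DATABASE MyDB SET RECOVERY FULL;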


    Cheers,
    - Mark

  • The fastest way to transfer the data is to bcp it out and then BULK INSERT it back in.

    Before the BULK INSERT, you must create the destination table.

    When you bcp out, you can use the -N parameter to specify wide native format. It will create a bigger file, but SQL Server can process it faster.

    Then, to import, use BULK INSERT with the DATAFILETYPE = 'widenative' option.
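
    For example (a sketch only; the server, database, table, and file names are placeholders):

        rem Export in wide native format (-N) using a trusted connection (-T)
        bcp MyDB.dbo.SourceTable out C:\data\source.dat -N -S MyServer -T

        -- Import the wide native file; TABLOCK helps enable minimal logging
        -- when the recovery model allows it, and BATCHSIZE commits in chunks
        -- so each transaction stays small
        BULK INSERT MyDB.dbo.DestTable
        FROM 'C:\data\source.dat'
        WITH (DATAFILETYPE = 'widenative', TABLOCK, BATCHSIZE = 1000000);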

  • As BCP has not evolved in a long time, I've experienced much faster data loads with a custom-written ODBC-based C program than with BCP.

    So if you really want to do it fast, write your own BCPOUT and BCPIN apps that are specific to your tables.



    Bye
    Gabor
