Copy a Massive Table to a Different DB Without Downtime

  • Hi!

    I need to copy a table from one DB to another. I don't have any issues with permissions or indexing, but I do have a problem with copying 120 million rows from the first table to the second. I can't do a straight INSERT INTO ... SELECT *, as that would cause blocking on the DB, and we have to stay live. The table is continually processing data, so I'm looking for advice from anyone who has managed a process like this before.

    I was thinking of running through something like SELECT TOP 500 * INTO SECOND_DB_TABLE, then DELETE TOP 500 FROM FIRST_DB_TABLE, and nesting that in a WHILE loop so it copies and deletes the top 500 records at a time until it has moved through all 120 million.
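    Here is roughly what I mean (a rough sketch only; FIRST_DB, SECOND_DB, SOURCE_TABLE, and TARGET_TABLE are placeholder names, and it assumes both tables have identical schemas):

        -- Empty staging table with the source's schema.
        SELECT * INTO #batch
        FROM   FIRST_DB.dbo.SOURCE_TABLE
        WHERE  1 = 0;

        DECLARE @rows INT = 1;

        WHILE @rows > 0
        BEGIN
            BEGIN TRANSACTION;

            -- DELETE ... OUTPUT captures exactly the rows removed, so the
            -- copy and the delete can never drift apart.
            DELETE TOP (500)
            FROM   FIRST_DB.dbo.SOURCE_TABLE
            OUTPUT deleted.* INTO #batch;

            SET @rows = @@ROWCOUNT;

            INSERT INTO SECOND_DB.dbo.TARGET_TABLE
            SELECT * FROM #batch;

            DELETE FROM #batch;

            COMMIT TRANSACTION;

            -- Short pause so concurrent writers can get in between batches.
            WAITFOR DELAY '00:00:00.100';
        END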

    Any help?

  • Are the databases on different servers?

    If so, you might want to export the data into a text file (or something similar), then bulk import it. You can bulk import in X-rows-at-a-time fashion pretty easily, and it's usually quite fast.
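    For the import side, something like this (a sketch only; the file, database, and table names are made up, and it assumes the data was first exported with bcp in native format, e.g. bcp FIRST_DB.dbo.SOURCE_TABLE out C:\temp\source.dat -n -T):

        BULK INSERT SECOND_DB.dbo.TARGET_TABLE
        FROM 'C:\temp\source.dat'
        WITH (
            DATAFILETYPE = 'native',  -- matches bcp's -n native export format
            BATCHSIZE    = 50000,     -- commit every 50,000 rows: the X-rows-at-a-time part
            TABLOCK                   -- allows minimal logging if the target permits it
        );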

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • They're on the same server, so that isn't much of an issue.

    Can I drop the data from the FIRST_DB_TABLE into a .csv without it blocking?

  • Can you run a backup of the source database, restore that under a different name, and do your export from that? (Either directly to the target server, or to an intermediate file.) Alternately, if you're on Enterprise Edition, can you create a snapshot copy of the database and export from that?
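    The snapshot route would look something like this (a sketch; the database name, logical file name, and path are placeholders):

        -- Enterprise Edition only. Reads against the snapshot see a frozen
        -- point-in-time image, so the export can't block, or be blocked by,
        -- activity on the live database.
        CREATE DATABASE FIRST_DB_Snap
        ON ( NAME = FIRST_DB_Data,                     -- logical name of the source data file
             FILENAME = 'C:\Snaps\FIRST_DB_Snap.ss' )  -- sparse file backing the snapshot
        AS SNAPSHOT OF FIRST_DB;

        SELECT * FROM FIRST_DB_Snap.dbo.SOURCE_TABLE;  -- export from here, not the live DB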

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

    That is the idea I was gravitating towards, GSquared. I think we have the space for it, but since the table constantly has rows inserted into it (50,000+ a day), there will always be a gap between the file creation and the upload/insert. But then I would only have to deal with those 50,000 rows, so that probably wouldn't be as much of an issue; a quick catch-up pass like the one below could mop them up.
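    Something like this for the catch-up (hypothetical, and it assumes the table has an ever-increasing key column, called Id here):

        -- After the bulk load finishes, copy only the rows that arrived
        -- after the snapshot/export was taken.
        DECLARE @LastCopiedId BIGINT;

        SELECT @LastCopiedId = MAX(Id)
        FROM   SECOND_DB.dbo.TARGET_TABLE;

        INSERT INTO SECOND_DB.dbo.TARGET_TABLE
        SELECT *
        FROM   FIRST_DB.dbo.SOURCE_TABLE
        WHERE  Id > @LastCopiedId;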

  • Ideally, if you can turn on SNAPSHOT ISOLATION, it will resolve your blocking issues, but it may cause you some performance issues and bloat your TempDB pretty badly if you have enough transactions running at the same time.
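    Turning it on looks like this (a sketch; FIRST_DB and the table name are placeholders):

        ALTER DATABASE FIRST_DB SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- In the session doing the big read: readers use row versions from
        -- TempDB instead of shared locks, so the export neither blocks nor
        -- is blocked by the writers.
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

        BEGIN TRANSACTION;
        SELECT * FROM FIRST_DB.dbo.SOURCE_TABLE;
        COMMIT TRANSACTION;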
