moving filegroups/physical files from one database to another

  • I have a 2008 R2 instance where there is a table on database A that contains billions of rows (around 50 billion, to be exact). I created another database (database B) and copied the table over so I could redo the table architecture, pare down the number of rows, and create proper filegroups and partitions for it. The other reason I did this in a separate database was to avoid inflating the size of our diff backups. Rather than bulk-copying the table back into database A (primarily because we are space- and CPU-constrained, and it would take at least a couple of days), I was wondering if there is any way to move the physical files for the new table or database back into database A. I tried doing a filegroup restore from database B's backup onto database A, but that didn't work.

  • anyone? surely someone else must have this issue?

  • No, you can't do this.

    You mention moving them back into database A, but they aren't and never were part of that database, and even if they had been, it still wouldn't work.

    You can add and remove files but they can't contain data.

    50M rows is a fair amount of data but not outrageous. The size of the data is more relevant to me. I have a database with roughly 4M rows, but it's 1.3TB. I have bulk-loaded ALL the data before, when it was about 1.1TB, in about 18 hours from server to server.

    CEWII

  • it's actually 50 billion rows....cut down to about 12 billion rows. The table is not wide, though; it's basically a key-value pair with an ID, a datetime, and a value (13 bytes per row). Previously, the data for the one table was around 700GB; now it's cut down to around 175GB. For me, it does take about a day to move back over, and then after that I would have to figure out how to capture all the updates/inserts that have happened since the beginning of the bulk copy. Not very pretty.

  • Ah, my bad, 50B. 700GB is a lot but not completely outrageous. 175GB is big but not outrageous.

    Those are really narrow rows. Makes sense that 50B rows is only 700GB.

    Wouldn't the updates/inserts have a date after the start of your copy?

    CEWII

  • well, this is a time series table, so the datetime field is a timestamp. However, the timestamp is not a physical representation of when the row gets inserted into the table, but rather the time that a reading from the field device is taken. We also have situation where data comes in late, and all kinds of other exceptions. Let's just say that I inherited the design and leave it at that.

  • Ah, roger... For the load it might make sense to add a trigger to capture the IDs of the inserted or updated records and then use that to handle the additional feed. At some point you are just going to have to cut over.

    CEWII
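
    As a rough sketch of that trigger approach (the table, column, and log names below are hypothetical, not from the thread), something like this would capture the keys of rows changed after the bulk copy starts, so the delta can be replayed afterwards:

    ```sql
    -- Hypothetical change-capture log for the time-series table
    CREATE TABLE dbo.ReadingChangeLog
    (
        ReadingID bigint   NOT NULL,
        LoggedAt  datetime NOT NULL DEFAULT (GETDATE())
    );
    GO

    CREATE TRIGGER dbo.trg_Reading_CaptureChanges
    ON dbo.Reading
    AFTER INSERT, UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- "inserted" holds the new rows (for INSERT) or the new versions
        -- (for UPDATE); log only the keys so the trigger stays cheap
        INSERT INTO dbo.ReadingChangeLog (ReadingID)
        SELECT ReadingID
        FROM inserted;
    END;
    ```

    After the bulk load finishes, the logged IDs can drive a final catch-up pass before the cutover.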

  • Have you found a solution to this problem?

    I'm having exactly the same problem, but with 100M larger rows.

    I need to move that data back for production database after partitioning by the date field.

    Best regards,

    Y.

  • Files/filegroups cannot be moved from one database to another.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Gail is correct. You cannot move a file or filegroup between databases.

    SSIS can help you, or T-SQL, in transferring data over. If you have Enterprise and can snapshot, or just take a good backup, I'd truncate the data back on the source and then transfer over the data from the database where you've made changes.
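
    A minimal T-SQL sketch of that truncate-and-reload approach, assuming both databases are on the same instance, the tables share a schema, and the clustered key is an ascending ReadingID (all names here are hypothetical). Loading in key-ordered batches keeps each transaction, and the log, manageable:

    ```sql
    -- Assumed: DatabaseA.dbo.Reading (production) and
    -- DatabaseB.dbo.Reading (reworked copy) have identical schemas
    TRUNCATE TABLE DatabaseA.dbo.Reading;

    DECLARE @lastID bigint = 0;

    WHILE 1 = 1
    BEGIN
        -- copy the next batch above the watermark, in key order
        INSERT INTO DatabaseA.dbo.Reading (ReadingID, ReadingDate, ReadingValue)
        SELECT TOP (500000)
               s.ReadingID, s.ReadingDate, s.ReadingValue
        FROM   DatabaseB.dbo.Reading AS s
        WHERE  s.ReadingID > @lastID
        ORDER BY s.ReadingID;

        IF @@ROWCOUNT = 0 BREAK;

        -- advance the watermark to the highest key copied so far
        SELECT @lastID = MAX(ReadingID) FROM DatabaseA.dbo.Reading;
    END;
    ```

    The batch size is a tuning knob; smaller batches reduce log growth and blocking at the cost of a longer overall run.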

  • So... I have to move all the data over to the partitioned table in the production database?

    This will take days, lower performance, and interfere with the backup process.

    What would you do ?

    Best regards.

  • yurdik (3/5/2014)

    So... I have to move all the data over to the partitioned table in the production database?

    Yes

    What would you do ?

    SSIS, bcp or T-SQL scripts to transfer the data over. Probably in chunks, bcp and SSIS are good for that.
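
    One way to chunk the transfer with bcp along the lines Gail suggests (server, database, and column names below are assumptions, not from the thread): export date-range slices in native format, then import each with a batch size so commits happen periodically:

    ```shell
    # export one month of readings from the reworked database
    # (native format, trusted connection); repeat per date range
    bcp "SELECT ReadingID, ReadingDate, ReadingValue FROM DatabaseB.dbo.Reading WHERE ReadingDate >= '20130101' AND ReadingDate < '20130201'" queryout chunk_201301.dat -n -T -S MyServer

    # import into the partitioned production table; -b commits every
    # 100k rows, and the TABLOCK hint allows minimally logged bulk loading
    bcp DatabaseA.dbo.Reading in chunk_201301.dat -n -T -S MyServer -b 100000 -h "TABLOCK"
    ```

    Slicing on the partitioning column (the date here) also means each file lines up with a partition on the destination side.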

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • +1 to what Gail mentioned. I wouldn't have moved the data to another instance in the first place, because of the hassle of moving it back.
