Can I copy ONLY the .mdf file to another server, to move a database? Can I make the data file consistent w/o the T-Log?

  • I know this seems like a crazy question at first, but here's the deal: we have over 1 TB of SharePoint SQL database .MDF files. We can use EMC MirrorView to mirror SAN LUNs from one data center to another. We can ONLY mirror the .MDF files, because the .LDF files change too often and would fill up our connection between the data centers.

    So here is what I can do: in Data Center A, I can take SharePoint down, so the app that is using SQL is off. Then I can set all the SQL Server databases to SINGLE_USER mode, and then I can take them offline. My thinking is that now my data file has all the data -- every transaction will now be complete, and the T-Log file doesn't hold anything that isn't already in the data file.

    Then in Data Center B, I wait for the last mirror synchronization to finish, and then I can just attach the mirrored copy of the .MDF files. I think the databases may then be marked as Suspect, but they should in fact be fine. Or maybe if I do something like this, SQL will bring the database online without incident...

    CREATE DATABASE [AdventureWorks] ON ( FILENAME = N'M:\SQLData\AdventureWorks.mdf' ) FOR ATTACH

    ...although maybe I need to specify the name and location of the .ldf as well.
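
    To make that concrete, here's roughly what I'm picturing -- the log path below is only a guess at a location, and the second variant (ATTACH_REBUILD_LOG) only works if the database was shut down cleanly, which is exactly the scenario above:

    -- attach with both files, if the .ldf made it across as well (log path is hypothetical)
    CREATE DATABASE [AdventureWorks] ON
        ( FILENAME = N'M:\SQLData\AdventureWorks.mdf' ),
        ( FILENAME = N'L:\SQLLogs\AdventureWorks_log.ldf' )
    FOR ATTACH;

    -- attach the .mdf alone and let SQL Server build a new, empty log
    -- (requires the source database to have been shut down cleanly)
    CREATE DATABASE [AdventureWorks] ON
        ( FILENAME = N'M:\SQLData\AdventureWorks.mdf' )
    FOR ATTACH_REBUILD_LOG;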

    I am about to test this, but even testing between these data centers is non-trivial so I was hoping for some feedback.

    Thank you very much!

  • If FOR ATTACH does not work, use sp_attach_single_file_db.
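
    Something like this, reusing the file name from the original post (note that sp_attach_single_file_db is documented as deprecated in favour of CREATE DATABASE ... FOR ATTACH_REBUILD_LOG):

    EXEC sp_attach_single_file_db
        @dbname   = N'AdventureWorks',
        @physname = N'M:\SQLData\AdventureWorks.mdf';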

    Are you setting up another SharePoint App in DCB?

    Alex S
  • AlexSQLForums (7/13/2011)


    Are you setting up another SharePoint App in DCB?

    We are actually moving to DCB, but we only have a limited maintenance window to do the move. So we have to get the data there ahead of time, copying a TB over our WAN would take way too long.

    We could theoretically do backups, copy them over ahead of the move, then just do the last TLog backup and restore during the window, but there are over 130 (!) databases on this server. Scripting all that up would be non-trivial.
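
    A rough sketch of what that last round of log backups could look like if we did go that route -- this just generates the statements, and the UNC path is a placeholder:

    -- generate a tail-log backup statement per user database
    -- (NORECOVERY leaves the source DB in RESTORING state, which is fine at cutover)
    SELECT 'BACKUP LOG [' + name + '] TO DISK = N''\\DCB\Backups\' + name + '_tail.trn'' WITH INIT, NORECOVERY;'
    FROM sys.databases
    WHERE database_id > 4                      -- skip system databases
      AND recovery_model_desc <> 'SIMPLE'      -- SIMPLE recovery has no log backups
      AND state_desc = 'ONLINE'
    ORDER BY name;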

  • http://sqlinthewild.co.za/index.php/2009/06/09/deleting-the-transaction-log/

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jpSQLDude (7/13/2011)


    So here is what I can do: in Data Center A, I can take SharePoint down, so the app that is using SQL is off. Then I can set all the SQL Server databases to SINGLE_USER mode, and then I can take them offline. My thinking is that now my data file has all the data -- every transaction will now be complete, and the T-Log file doesn't hold anything that isn't already in the data file.

    Then in Data Center B, I wait for the last mirror synchronization to finish, and then I can just attach the mirrored copy of the .MDF files. I think the databases may then be marked as Suspect, but they should in fact be fine. Or maybe if I do something like this, SQL will bring the database online without incident...

    That'll work and the DBs will attach clean and without error. However, in a hard failure, when Data Center A goes offline suddenly and unexpectedly (i.e. in a disaster), you pretty much have a snowball's chance in hell of bringing those databases online at all.

    In other words, you have a disaster recovery setup that will almost certainly only work with a controlled failover, not in a disaster.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Scripting restores for 130-odd databases isn't that much trouble. Since all the information is in msdb tables, you can create a single script that generates the restore scripts for all databases. Alternatively, use the database name as a parameter to generate scripts for each database. I did a similar process a while back and moved about 10 databases like this.

    Here you are:

    use msdb
    set nocount on

    declare @DB sysname, @LstRes varchar(50)

    -- only generate restores for log backups taken after this file, for this one database
    -- (assumes the log backup file names contain 'TLog')
    set @LstRes = 'LastTranLogThatWasRestored.TLS'
    set @DB = 'MyDatabase'

    select 'RESTORE LOG ' + rtrim(@DB) + char(13)
         + 'FROM DISK = ' + char(39) + rtrim(physical_device_name) + char(39) + char(13)
         + 'WITH NORECOVERY' + char(13)
         -- the generated script also echoes which log file was just restored
         + 'select ' + char(39) + substring(physical_device_name, charindex('TLog', physical_device_name), 50) + char(39) + char(13)
    from msdb..backupset a
    join msdb..backupmediaset b on a.media_set_id = b.media_set_id
    join msdb..backupmediafamily c on a.media_set_id = c.media_set_id
    where type = 'L'                    -- log backups only
      and database_name = @DB
      and substring(physical_device_name, charindex('TLog', physical_device_name), 50) > @LstRes
    order by backup_start_date ASC

    The two parameters are there so you only start restoring after a specific log file, and only for a specific database.

    All you need to add is a RESTORE LOG WITH RECOVERY when you are ready.
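
    For example, once the last generated restore has run (same placeholder database name as above):

    RESTORE DATABASE MyDatabase WITH RECOVERY;
    -- or equivalently: RESTORE LOG MyDatabase WITH RECOVERY;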

    As far as testing goes, why don't you just create a small database, do some transactions on it, and then go through your move process in a test environment? The size of the DB won't make much difference in testing.

    Leo
    Nothing in life is ever so complicated that with a little work it can't be made more complicated.

  • GilaMonster (7/13/2011)


    jpSQLDude (7/13/2011)


    So here is what I can do: in Data Center A, I can take SharePoint down, so the app that is using SQL is off. Then I can set all the SQL Server databases to SINGLE_USER mode, and then I can take them offline. My thinking is that now my data file has all the data -- every transaction will now be complete, and the T-Log file doesn't hold anything that isn't already in the data file.

    Then in Data Center B, I wait for the last mirror synchronization to finish, and then I can just attach the mirrored copy of the .MDF files. I think the databases may then be marked as Suspect, but they should in fact be fine. Or maybe if I do something like this, SQL will bring the database online without incident...

    That'll work and the DBs will attach clean and without error. However, in a hard failure, when Data Center A goes offline suddenly and unexpectedly (i.e. in a disaster), you pretty much have a snowball's chance in hell of bringing those databases online at all.

    In other words, you have a disaster recovery setup that will almost certainly only work with a controlled failover, not in a disaster.

    Gail, if I am reading this correctly - the OP is not setting this up for disaster recovery (my first thought also). Rather, this sounds like a one-time operation to move the databases to a new data center.

    I would be concerned about the integrity of the databases once they are attached to the new system. I'm not sure what will happen, as SQL Server records transactions in the log file attached to the system in the old data center, but those transactions are never recorded in the new data center.

    Basically, what I see happening is that the mdf file is going to be updated through SAN replication, but the log file will not be. Even if the data file is not brought up on the new system right away, I don't see how SQL Server could bring it online with changes that were replicated into the data file but never into a matching log.
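
    If the attach does succeed, I'd at least want a consistency check on the attached copy before trusting it -- something like this, using the example database name from the first post:

    DBCC CHECKDB (N'AdventureWorks') WITH NO_INFOMSGS, ALL_ERRORMSGS;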

    Thoughts?

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Jeffrey Williams-493691 (7/13/2011)


    I would be concerned about the integrity of the databases once they are attached to the new system. I'm not sure what will happen, as SQL Server records transactions in the log file attached to the system in the old data center, but those transactions are never recorded in the new data center.

    Basically, what I see happening is that the mdf file is going to be updated through SAN replication, but the log file will not be. Even if the data file is not brought up on the new system right away, I don't see how SQL Server could bring it online with changes that were replicated into the data file but never into a matching log.

    This'll only work if the DB is shut down cleanly before the mirror is broken. Otherwise you get an inconsistent and unusable database.

    Honestly, I'd be taking backups and mirroring the backups. This is something that'll work if everything goes exactly right and as expected. Otherwise it'll be a start-from-scratch exercise.
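
    As a sketch of that idea -- the path here is hypothetical, just somewhere on a LUN that is part of the MirrorView replication set:

    BACKUP DATABASE MyDatabase
    TO DISK = N'R:\ReplicatedBackups\MyDatabase_full.bak'
    WITH INIT, CHECKSUM, COMPRESSION;   -- COMPRESSION is available from SQL Server 2008 on; drop it on older versions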

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (7/13/2011)


    Honestly, I'd be taking backups and mirroring the backups.

    I'm with Gail here, I think this would be the best solution

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • GilaMonster (7/13/2011)


    That'll work and the DBs will attach clean and without error...

    So what is the best way to ensure a totally clean shutdown of your database, where all transactions have been written to the .mdf file? This is assuming I have a scheduled maintenance window: I can turn the application that is using SQL off, and then I can do whatever I need to do to SQL itself.

    Can I just stop the SQL service? Or should I first do other things, like put the database into single-user mode and then take it offline? Do I need to issue a CHECKPOINT at some point?

  • Even a shutdown of SQL won't do it (actually, that should be: 'especially' a shutdown won't do it).

    You'd need to make sure that there are no active transactions, that all rollbacks have 100% completed, that all dirty pages have been written to disk, and then that nothing new starts up (including system cleanup tasks).

    Single user, checkpoint, then offline, with checks that there's nothing outstanding or active, should be enough; however, I'd still be a bit hesitant... 99% of the time that combo would be fine, but there's that 1%, and Murphy works in IT.
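
    A rough sketch of that sequence for a single database (the name is a placeholder, checks kept minimal):

    USE master;
    ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

    -- confirm nothing is still running against the database
    SELECT session_id, status, command
    FROM sys.dm_exec_requests
    WHERE database_id = DB_ID(N'MyDatabase');

    USE MyDatabase;
    CHECKPOINT;        -- flush dirty pages to the data file

    USE master;
    ALTER DATABASE MyDatabase SET OFFLINE;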

    What's wrong with the backup/restore method? Using a combination of full, diff, and log backups you can get very near to zero downtime for the switchover.
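
    In outline, the restore chain on the Data Center B side would be something like this (file names are placeholders):

    -- restored well ahead of the cutover
    RESTORE DATABASE MyDatabase FROM DISK = N'\\DCB\Backups\MyDatabase_full.bak' WITH NORECOVERY;
    -- restored shortly before the maintenance window
    RESTORE DATABASE MyDatabase FROM DISK = N'\\DCB\Backups\MyDatabase_diff.bak' WITH NORECOVERY;
    -- during the window: take the tail-log backup in DCA, copy it across, then
    RESTORE LOG MyDatabase FROM DISK = N'\\DCB\Backups\MyDatabase_tail.trn' WITH NORECOVERY;
    RESTORE DATABASE MyDatabase WITH RECOVERY;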

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • jpSQLDude (7/13/2011)


    AlexSQLForums (7/13/2011)


    Are you setting up another SharePoint App in DCB?

    We are actually moving to DCB, but we only have a limited maintenance window to do the move. So we have to get the data there ahead of time, copying a TB over our WAN would take way too long.

    We could theoretically do backups, copy them over ahead of the move, then just do the last TLog backup and restore during the window, but there are over 130 (!) databases on this server. Scripting all that up would be non-trivial.

    If you're moving SharePoint databases to a new DB server, you'll need to follow ***Moving Sharepoint Databases***.

    We ended up calling Microsoft because we didn't know that the config databases needed to be rebuilt.

    Alex S
