Minimal Downtime Storage Migration of Large Databases

  • Thanks for the post.

    We're actually doing this right now, and our approach is very similar to yours; however, due to HA restrictions we couldn't afford even that 30-minute downtime.

    In addition, the main DB server has 7 drives, which makes switching the data drive letters while the server is down a long process.

    We decided that we can live with a different drive letter for the DBs.

    /**** This is very much not best practice; don't do it without understanding the implications. ****/

    We created a clone DB that is kept in sync via LOG SHIPPING, as you suggested.

    This DB is on the new SAN.

    At the cut-over moment:

    1. Take one last LS backup / LS restore and disable the LS jobs.

    2. Stop the replication agent jobs.

    3. Put the source DB in SINGLE_USER.

    4. Back up the source DB's tail and restore it to the clone DB, restoring WITH RECOVERY, KEEP_REPLICATION.

    5. Take the clone DB offline (it is now up to date with the source).

    6. Repoint the original DB's file entries to the new paths of the clone DB's files.

    7. Restore the original DB WITH RECOVERY (it has been in RESTORING state since the tail-log backup).

    8. Put the original DB back in MULTI_USER mode.

    9. Start the replication agent jobs.

    Now the source DB is live and pointing at the DB files on the new SAN.

    The replication wasn't impacted.

    In most cases, the downtime was ~1 minute.

    In one high-throughput application, the application grabbed the single-user session and we had to troubleshoot it, which stretched the downtime to ~15 minutes.
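    If that happens to you, something like this finds which session grabbed the slot (just a sketch; DBNAME is a placeholder, and sys.dm_exec_sessions exposes database_id on SQL 2012 and later):

    SELECT s.session_id, s.login_name, s.host_name, s.program_name
    FROM sys.dm_exec_sessions AS s
    WHERE s.database_id = DB_ID(N'DBNAME');
    -- 53 is a placeholder session_id taken from the query above
    -- KILL 53;

    The cut-over script itself: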

    USE [DBNAME]
    GO

    -- take the single-user slot from inside the database
    ALTER DATABASE [DBNAME] SET SINGLE_USER WITH ROLLBACK AFTER 10;
    GO

    USE [master]
    GO

    -- back up the source DB's tail; NORECOVERY leaves the source in RESTORING state
    BACKUP LOG [DBNAME] TO DISK = N'\\Backup\DBNAME_Tail.trn' WITH NO_TRUNCATE, NOFORMAT, INIT, NAME = N'DBNAME tail backup', SKIP, NOREWIND, NOUNLOAD, NORECOVERY, COMPRESSION, STATS = 10;
    GO

    -- bring the clone fully up to date and recover it
    RESTORE LOG [DBNAME_Clone] FROM DISK = N'\\Backup\DBNAME_Tail.trn' WITH FILE = 1, NOUNLOAD, STATS = 10, RECOVERY, KEEP_REPLICATION;
    GO

    -- take the clone offline and repoint the original DB's files at the clone's files on the new SAN
    ALTER DATABASE [DBNAME_Clone] SET OFFLINE
    ALTER DATABASE [DBNAME] MODIFY FILE ( NAME = [DBNAME], FILENAME = '<NEW PATH>\DBNAME.mdf')
    ALTER DATABASE [DBNAME] MODIFY FILE ( NAME = [DBNAME_log], FILENAME = '<NEW PATH>\DBNAME_log.ldf')

    -- recover the original DB on the new files and reopen it to all users
    RESTORE DATABASE [DBNAME] WITH RECOVERY, KEEP_REPLICATION;
    GO

    ALTER DATABASE [DBNAME] SET MULTI_USER WITH NO_WAIT;

    The same can be done for system DBs, but it's much more work (registry/startup parameters, etc.).
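    For the curious, a rough sketch of that variant (the msdb logical names below are the defaults; paths are placeholders):

    -- msdb (model works the same): repoint the files, stop the service, copy the files, restart
    ALTER DATABASE [msdb] MODIFY FILE (NAME = MSDBData, FILENAME = N'<NEW PATH>\MSDBData.mdf');
    ALTER DATABASE [msdb] MODIFY FILE (NAME = MSDBLog, FILENAME = N'<NEW PATH>\MSDBLog.ldf');
    -- tempdb: same MODIFY FILE, but its files are recreated on restart, so no copy is needed
    -- master: its paths live in the service startup parameters (-d data file, -l log file,
    -- -e error log), edited via SQL Server Configuration Manager; that's the registry part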

    Thanks again and have a nice day.

  • Thanks @roni.vered for the comments. Yes, if we are OK with new drive letters, it needs just 1 minute of downtime.

    Dinesh Karunarathna

  • I recently migrated a 1.3 TB database to SQL2019 and also had very little downtime. Instead of log shipping, these were my steps (the backup/restore side is sketched below):

    1. restore a full backup with norecovery
    2. stop the websites
    3. backup/restore a diff backup with recovery
    4. change website settings to point to new server
    5. restart website

    Downtime was less than 10 minutes
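    In case it helps, here is the backup/restore side as a minimal sketch (restores run on the new server; names and paths are placeholders):

    -- step 1: seed the new server ahead of time
    RESTORE DATABASE [DBNAME] FROM DISK = N'\\Backup\DBNAME_Full.bak' WITH NORECOVERY, STATS = 10;
    -- step 3: after the websites are stopped, take a differential on the old server ...
    BACKUP DATABASE [DBNAME] TO DISK = N'\\Backup\DBNAME_Diff.bak' WITH DIFFERENTIAL, INIT, COMPRESSION;
    -- ... and restore it on the new server, bringing the database online
    RESTORE DATABASE [DBNAME] FROM DISK = N'\\Backup\DBNAME_Diff.bak' WITH RECOVERY, STATS = 10;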

    Wilfred
    The best things in life are the simple things

  • I manage a big DWH database (about 12 TB). Of course (at this size) the database is partitioned. The biggest tables have their own set of annual filegroups (e.g. FG_ORDERS_2019, FG_ORDERS_2020 ...), while "smaller" tables (10-50 GB) are usually combined into more general filegroups (e.g. FG_INPUT_2019, FG_INPUT_2020).
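    For context, that layout is built roughly like this (a sketch with placeholder names, not my real schema):

    ALTER DATABASE <my_db> ADD FILEGROUP FG_ORDERS_2020
    ALTER DATABASE <my_db> ADD FILE (NAME='orders_file_2020', FILENAME='e:\SQL\<my_db>\orders_file_2020.ndf') TO FILEGROUP FG_ORDERS_2020
    -- earlier years are created the same way; a partition function/scheme then maps
    -- each year's rows to its filegroup (all listed filegroups must already exist)
    CREATE PARTITION FUNCTION pf_OrderDate (date) AS RANGE RIGHT FOR VALUES ('2019-01-01', '2020-01-01')
    CREATE PARTITION SCHEME ps_OrderDate AS PARTITION pf_OrderDate TO (FG_ORDERS_OLD, FG_ORDERS_2019, FG_ORDERS_2020)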

    When I do the annual "maintenance" (moving old data to the slow disks), or have to move files around for other reasons (e.g. new disks / drives), I usually do the following:

    • set the filegroup read-only (remember: this is usually older data that will usually no longer change in a DWH):
      ALTER DATABASE <db_name> MODIFY FILEGROUP <fg_name> READONLY

    • copy the files of this filegroup to the new disks (can take several hours)
    • execute the following command to tell SQL Server that it will find the file at the new location the next time it brings the DB online:
    ALTER DATABASE <my_db> MODIFY FILE (NAME='input_file_2019', FILENAME='d:\LW_K\SQL\<my_db>\input_file_2019.ndf')

    • take the database offline
    • take it online again -> it now uses the file(s) on the new drive / destination
    • delete the original file (this will fail if SQL Server is still using it, e.g. because you forgot the ALTER DATABASE MODIFY FILE for one file)
    • set the filegroup to READWRITE again (same command as above, just with the keyword replaced)

    This way my downtime is only a few seconds. The drawback, of course, is that the filegroup is read-only for a while; this should be no problem for data warehouses but may not be possible for some OLTP databases (depending on the data / file structure). And of course it will not work for the PRIMARY filegroup, unless you can / want to set the whole database to read-only.
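    Putting it together, the whole sequence for one filegroup (same placeholder names as above; ROLLBACK IMMEDIATE is my addition to kick out open sessions):

    ALTER DATABASE <my_db> MODIFY FILEGROUP FG_INPUT_2019 READONLY
    -- copy input_file_2019.ndf to the new disk outside SQL Server (robocopy etc., can take hours)
    ALTER DATABASE <my_db> MODIFY FILE (NAME='input_file_2019', FILENAME='d:\LW_K\SQL\<my_db>\input_file_2019.ndf')
    ALTER DATABASE <my_db> SET OFFLINE WITH ROLLBACK IMMEDIATE
    ALTER DATABASE <my_db> SET ONLINE  -- now uses the file at the new location
    -- only now delete the old file from the original disk
    ALTER DATABASE <my_db> MODIFY FILEGROUP FG_INPUT_2019 READWRITE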

    God is real, unless declared integer.

  • Setting up / tearing down log shipping just for a one-time move seems like a lot of overhead to me.

    Wilfred van Dijk wrote:

    Instead of log shipping, these were my steps:

    1. restore a full backup with norecovery
    2. stop the websites
    3. backup/restore a diff backup with recovery
    4. change website settings to point to new server
    5. restart website

    Downtime was less than 10 minutes

    That has been my approach too, but I also used the LOG backups. We have a scheduled LOG Backups task which runs every 5 minutes (the tail end of the sequence is sketched after the list):

    • restore a full backup with norecovery
    • backup/restore a diff backup with norecovery
    • restore any LOG BACKUPS which have been made (by scheduled task) since the DIFF (with norecovery)
    • (If that RESTORE takes a while and there are already more LOG BACKUPS then restore those before proceeding)
    • Put up holding page for the websites
    • take a final log backup
    • restore any LOG BACKUPS which have been made (by scheduled task) since the last LOG Restore (with norecovery) - probably none or only one
    • restore the "final log backup" (with norecovery)
    • Assuming everything OK then RESTORE WITH RECOVERY (no actual restore file needed)
    • change website settings to point to new server
    • remove holding page
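    The tail end of that, roughly (a sketch; names and paths are placeholders):

    -- restore each log backup taken by the scheduled task (repeat per file)
    RESTORE LOG [DBNAME] FROM DISK = N'\\Backup\DBNAME_Log.trn' WITH NORECOVERY, STATS = 10;
    -- once the holding page is up: final log backup on the old server ...
    BACKUP LOG [DBNAME] TO DISK = N'\\Backup\DBNAME_Final.trn' WITH INIT, COMPRESSION;
    -- ... restore it, then recover; no restore file is needed for the last step
    RESTORE LOG [DBNAME] FROM DISK = N'\\Backup\DBNAME_Final.trn' WITH NORECOVERY;
    RESTORE DATABASE [DBNAME] WITH RECOVERY;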

    When I have done this, all web sessions retained their Session ID, so anyone who had hung around (e.g. on ecommerce with a basket ready to check out) could just carry on. My holding page explained that the outage would be less than one minute.

  • Could you add to the conclusion the time it would have taken to do the migration without log shipping?

    It took us 30 minutes (versus X minutes)

    412-977-3526 call/text

  • Hi everyone, quick question for my understanding: starting from SQL 2016, couldn't we use Availability Groups, even a basic AG, to do the equivalent automatically?

    Thank you!

  • Robert Sterbal wrote:

    Could you add to the conclusion the time it would take to have done the migration without log shipping?

    It took us 30 minutes (versus X minutes)

    Good point. The reverse would be true for me ... I have only ever had one database with log shipping, and we are not using it any more, so my confidence with log shipping would be low; as such I would proceed with a lot of caution and careful checking.

    Most of the effort (when I have done this sort of live migration in the past) has gone into making sure that the steps between stopping the website and starting it again are all scripted / rehearsed / quick-to-do / never-ever-going-to-fail 🙂, or at least have good error checking/reporting, in order to minimise the downtime.

  • Hi there,

    I can recommend these approaches for migrating databases to new storage:

    https://www.sqlphilosopher.com/wp/?s=filegroup

    and

    https://www.bobpusateri.com/archive/2013/03/moving-a-database-to-new-storage-with-no-downtime/

  • Sure @robert Sterbal.

    Dinesh Karunarathna

  • Comments posted to this topic are about the item Minimal Downtime Storage Migration of Large Databases

    Dinesh Karunarathna
