February 23, 2021 at 9:18 am
Thanks for the post.
We're actually doing this right now, and our approach is very similar to yours; however, due to HA restrictions we couldn't afford even the 30-minute downtime.
In addition, the main DB server has 7 drives, which makes switching the data drive letters a long process while the server is down.
We decided that we can live with a different drive letter for the DBs.
/****This is very much not best practice, and don't do it without understanding the implications.******/
We created a clone DB that is synced with LOG SHIPPING as you suggested.
This DB is on the new SAN.
On the cut-over moment:
1. Making one last LS backup / LS restore and disabling the LS jobs.
2. Stopping the replication agent jobs.
3. Putting the source DB in single_user.
4. Backing up the source DB's tail and restoring it to the clone DB - restoring WITH RECOVERY, KEEP_REPLICATION.
5. Putting the clone DB offline (it is now up to date with the source).
6. Changing the file pointers of the original DB to point to the new path of the clone DB's files.
7. Restoring the original DB WITH RECOVERY (it has been in RESTORING state since the tail-log backup).
8. Putting the original DB back into multi_user mode.
9. Starting the replication agent jobs.
Now the source DB is live and pointing at the DB files on the new SAN.
The replication wasn't impacted.
In most cases, the downtime was ~1 minute.
In one high-throughput application, the application grabbed the single-user session and we had to troubleshoot it, which made the downtime ~15 minutes.
USE [DBNAME]
GO
-- Step 3: take the source DB to single_user (run from inside the DB so this session keeps the single connection)
ALTER DATABASE [DBNAME] SET SINGLE_USER WITH ROLLBACK AFTER 10;
GO
USE [master]
GO
-- Step 4a: tail-log backup of the source DB; NORECOVERY leaves it in RESTORING state
BACKUP LOG [DBNAME] TO DISK = N'\\Backup\DBNAME_Tail.trn'
    WITH NO_TRUNCATE, NOFORMAT, INIT, NAME = N'DBNAME tail backup',
         SKIP, NOREWIND, NOUNLOAD, NORECOVERY, COMPRESSION, STATS = 10;
GO
-- Step 4b: restore the tail to the clone and recover it, keeping the replication settings
RESTORE LOG [DBNAME_Clone] FROM DISK = N'\\Backup\DBNAME_Tail.trn'
    WITH FILE = 1, NOUNLOAD, STATS = 10, RECOVERY, KEEP_REPLICATION;
GO
-- Step 5: take the (now up-to-date) clone offline so its files can be reused
ALTER DATABASE [DBNAME_Clone] SET OFFLINE;
GO
-- Step 6: repoint the original DB's files to the clone's files on the new SAN
ALTER DATABASE [DBNAME] MODIFY FILE (NAME = [DBNAME], FILENAME = '<NEW PATH>\DBNAME.mdf');
ALTER DATABASE [DBNAME] MODIFY FILE (NAME = [DBNAME_log], FILENAME = '<NEW PATH>\DBNAME_log.ldf');
GO
-- Step 7: recover the original DB (recovery-only restore, no FROM clause)
RESTORE DATABASE [DBNAME] WITH RECOVERY, KEEP_REPLICATION;
GO
-- Step 8: open the DB back up
ALTER DATABASE [DBNAME] SET MULTI_USER WITH NO_WAIT;
GO
The same can be done for SYSTEM DBs, but it's much more work (registry parameters, etc).
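For context, a minimal sketch of what that looks like for the movable system databases; the logical names below are the defaults (check sys.master_files for yours), and this is not taken from the post above:
-- msdb / model: repoint the files, stop the instance, move the physical files, restart
ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBData, FILENAME = N'<NEW PATH>\MSDBData.mdf');
ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBLog,  FILENAME = N'<NEW PATH>\MSDBLog.ldf');
ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = N'<NEW PATH>\model.mdf');
ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = N'<NEW PATH>\modellog.ldf');
-- master is different: change the -d (data), -l (log) and -e (error log) startup
-- parameters in SQL Server Configuration Manager (they live in the registry),
-- move the files while the service is stopped, then start the instance again.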
Thanks again and have a nice day.
February 23, 2021 at 9:49 am
Thanks @roni.vered for the comments. Yes, if we are OK with new drive letters, it needs just 1 minute of downtime.
Dinesh Karunarathna
February 23, 2021 at 9:55 am
I recently migrated a 1.3 TB database to SQL2019 and also had very little downtime. Instead of logshipping, these were my steps:
- restore a full backup with norecovery
- stop the websites
- backup/restore a diff backup with recovery
- change website settings to point to new server
- restart website
Downtime was less than 10 minutes
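Not from Wilfred's post, but a rough T-SQL sketch of that full + differential sequence (database name and paths are placeholders):
-- Ahead of time, on the new server (old server still live): restore the full backup, leave it restoring
RESTORE DATABASE [BigDB] FROM DISK = N'\\Backup\BigDB_Full.bak' WITH NORECOVERY, STATS = 10;
-- At cut-over, once the websites are stopped: take a differential on the old server...
BACKUP DATABASE [BigDB] TO DISK = N'\\Backup\BigDB_Diff.bak' WITH DIFFERENTIAL, INIT, COMPRESSION, STATS = 10;
-- ...restore it on the new server and recover, then repoint the websites
RESTORE DATABASE [BigDB] FROM DISK = N'\\Backup\BigDB_Diff.bak' WITH RECOVERY, STATS = 10;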
Wilfred
The best things in life are the simple things
February 23, 2021 at 10:30 am
I manage a big DWH database (about 12 TB). Of course (for this size) the database is partitioned. The biggest tables have their own set of annual filegroups (e.g. FG_ORDERS_2019, FG_ORDERS_2020 ...), "smaller" tables (10-50 GB) are usually combined into more general filegroups (e.g. FG_INPUT_2019, FG_INPUT_2020)
When I do the annual "maintenance" (moving old data to the slow disks) or have to move files around for other reasons (e.g. new disks / drives), I usually do the following:
ALTER DATABASE <db_name> MODIFY FILEGROUP <fg_name> READ_ONLY
ALTER DATABASE <my_db> MODIFY FILE (NAME='input_file_2019', FILENAME='d:\LW_K\SQL\<my_db>\input_file_2019.ndf')
This way my downtime is only a few seconds. The drawback, of course, is that the file is read-only for a while; this should be no problem for data warehouses but may not be possible for some OLTP databases (depending on the data / file structure). And of course it will not work for the PRIMARY filegroup, unless you can / want to set the whole database to read-only.
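My reading of that sequence, filled out with the copy step and the brief offline/online bounce that makes the new path take effect; the filegroup and file names are borrowed from the post, the rest is a sketch rather than the poster's actual script:
-- 1. Make the filegroup read-only so its files stop changing
ALTER DATABASE <my_db> MODIFY FILEGROUP FG_INPUT_2019 READ_ONLY;
-- 2. Copy the file(s) to the new location with the database still online
--    (safe because the filegroup is read-only), e.g. via robocopy.
-- 3. Repoint the file to the copy on the new drive
ALTER DATABASE <my_db> MODIFY FILE (NAME = 'input_file_2019', FILENAME = 'd:\LW_K\SQL\<my_db>\input_file_2019.ndf');
-- 4. Brief offline/online bounce so the new path takes effect (the few seconds of downtime)
ALTER DATABASE <my_db> SET OFFLINE WITH ROLLBACK IMMEDIATE;
ALTER DATABASE <my_db> SET ONLINE;
-- 5. Optionally make the filegroup writable again
ALTER DATABASE <my_db> MODIFY FILEGROUP FG_INPUT_2019 READ_WRITE;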
God is real, unless declared integer.
February 23, 2021 at 11:41 am
Setting up / tearing down log shipping just for a one-time move seems like a lot of overhead to me
Instead of logshipping, these were my steps:
- restore a full backup with norecovery
- stop the websites
- backup/restore a diff backup with recovery
- change website settings to point to new server
- restart website
Downtime was less than 10 minutes
That has been my approach too, but I also used the LOG backups. We have a scheduled LOG Backups task which runs every 5 minutes.
When I have done this, all web sessions retained their Session ID, so anyone who had hung around (e.g. on ecommerce with a basket ready to checkout) could just carry on. My holding page explained that the outage would be less than one minute.
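A sketch of how those 5-minute log backups slot into the restore chain (names and paths are made up for illustration, not from the post):
-- On the new server the database sits in RESTORING state while the old one is still live
RESTORE DATABASE [BigDB] FROM DISK = N'\\Backup\BigDB_Full.bak' WITH NORECOVERY;
-- Keep applying the scheduled 5-minute log backups as they arrive
RESTORE LOG [BigDB] FROM DISK = N'\\Backup\BigDB_202102231200.trn' WITH NORECOVERY;
RESTORE LOG [BigDB] FROM DISK = N'\\Backup\BigDB_202102231205.trn' WITH NORECOVERY;
-- At cut-over: stop the sites, take one last log backup on the old server,
-- restore it on the new server and recover
RESTORE LOG [BigDB] FROM DISK = N'\\Backup\BigDB_Final.trn' WITH RECOVERY;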
February 23, 2021 at 11:48 am
Could you add to the conclusion the time it would take to have done the migration without log shipping?
It took us 30 minutes (versus X minutes)
412-977-3526 call/text
February 23, 2021 at 12:55 pm
Could you add to the conclusion the time it would take to have done the migration without log shipping?
It took us 30 minutes (versus X minutes)
Good point. The reverse would be true for me ... I have only ever had one database with log shipping, and we are not using that any more, so my confidence level with log shipping would be low, and as such I would proceed with a lot of caution and careful checking.
Most of the time (when I have done this sort of live migration in the past) has gone into making sure that the steps between STOPping the website and STARTing it again are all scripted / rehearsed / quick-to-do / never-ever-going-to-fail 🙂 or at least "have good error checking/reporting", in order to minimise the downtime.
February 24, 2021 at 7:25 am
Hi there,
I can recommend these approaches to migrate databases to new storage.
https://www.sqlphilosopher.com/wp/?s=filegroup
and
https://www.bobpusateri.com/archive/2013/03/moving-a-database-to-new-storage-with-no-downtime/
February 24, 2021 at 7:54 pm
Comments posted to this topic are about the item Minimal Downtime Storage Migration of Large Databases
Dinesh Karunarathna