SQL Server 2005 Cluster moving to new data center

  • We have two SQL Server 2005 clusters that we need to physically move to another data center. This will involve a new back-end SAN, of course, and new IP addresses. I have been reading a little on how to do this. Has anyone done this before?

    Thoughts at a high level:

    Back up all databases, and copy the .ldf and .mdf files, the database backups, and all files on the SAN and the Quorum drive.

    Using SQL Server Setup, evict the passive node.

    Shut down the hardware and physically move it to the new location.

    Copy all .ldf/.mdf files and everything else from the SAN to the exact same drive letters, including the Quorum drive.

    Start SQL Server.

    Bring the passive node back into the cluster.

    I'm not sure yet how the IP address change figures into this.
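The backup-and-copy steps above might look something like the following sketch. The instance name, resource names, share names, and drive letters here are all assumptions for illustration; check yours in Cluster Administrator before running anything.

```shell
rem Sketch only -- names and paths are assumptions, not your actual values.

rem 1. Take full backups of every database before the move.
sqlcmd -S SQLCLUST01 -E -Q "BACKUP DATABASE MyDB TO DISK = 'T:\Backups\MyDB_premove.bak' WITH INIT"

rem 2. Take the SQL Server resource offline so the data files are closed.
cluster.exe res "SQL Server (MSSQLSERVER)" /offline

rem 3. Copy everything off the shared drives, preserving attributes and security.
robocopy S:\ \\stagingserver\s_drive /MIR /COPYALL /R:2 /W:5
robocopy Q:\ \\stagingserver\q_drive /MIR /COPYALL /R:2 /W:5

rem After the move, copy the files back to identically lettered LUNs,
rem then bring the SQL Server group online again:
rem   cluster.exe group "SQL Group" /online
```

The point of taking the SQL Server resource offline first is that the .mdf/.ldf files are locked while the service holds them open; a file copy of a running instance is not a consistent copy.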

  • I did this once, but the server names didn't need to stay the same. We migrated an entire data center, including the web servers. Configuration information that referenced the server names had to be changed.

    To complete the switch quickly for the larger databases, we had log shipping running. We had about 1-2 hours of downtime for the databases. We were able to set up replication without snapshots because all of the databases at the new data center were already in sync.
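A manual log shipping cycle like the one described above can be sketched with sqlcmd. Server names, database names, and paths below are assumptions for illustration.

```shell
rem Sketch of manual log shipping for a data-center cutover (all names assumed).

rem On the old server: keep shipping transaction log backups until cutover.
sqlcmd -S OLDCLUST -E -Q "BACKUP LOG MyDB TO DISK = 'T:\Ship\MyDB_001.trn'"

rem On the new server: restore each log WITH NORECOVERY to stay in sync.
sqlcmd -S NEWCLUST -E -Q "RESTORE LOG MyDB FROM DISK = 'T:\Ship\MyDB_001.trn' WITH NORECOVERY"

rem At cutover: a final tail-log backup leaves the old database in a
rem restoring state so no further changes can slip in, then the new
rem copy is recovered and becomes the live database.
sqlcmd -S OLDCLUST -E -Q "BACKUP LOG MyDB TO DISK = 'T:\Ship\MyDB_tail.trn' WITH NORECOVERY"
sqlcmd -S NEWCLUST -E -Q "RESTORE LOG MyDB FROM DISK = 'T:\Ship\MyDB_tail.trn' WITH RECOVERY"
```

Because only the final tail-log backup and restore happen inside the outage window, the downtime is bounded by the size of the last log backup rather than the size of the database.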

  • An IP change may affect your Foundation agents/hubs and your cluster virtual name (you may need to refresh the link). Other than that, it shouldn't affect much. (I think.)

    However, this is a dangerous change to make. Your options are 1) breaking the cluster to keep SQL Server up during the move, or 2) bringing SQL Server down and doing no business during the move.

    In my workplace, even though we have small DBs, we don't like SQL Server to be unavailable at any time. We just create a new cluster at the new site and treat the whole thing like a parallel upgrade. It's a PITA, but it's less of a PITA than what your workplace is doing.

    On the other hand, if you don't run a 24x7 business, you may be fine. Just make sure you have an up-and-running standby server for SQL Server in case the old servers just don't come back up for whatever reason.

    Brandie Tarvin, MCITP Database Administrator. LiveJournal Blog: http://brandietarvin.livejournal.com/ On LinkedIn!, Google+, and Twitter. Freelance Writer: Shadowrun. Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • Yeah, I wish we had the budget for new hardware. I would rather build all-new servers, restore the databases to them, and have a fallback. However, the hardware is only a year old, and they don't want to buy four new servers for the two clusters. I can understand this, but with the size of the databases this is going to cause long downtime.

  • Markus (8/15/2011)


    Yeah, I wish we had the budget for new hardware. I would rather build all-new servers, restore the databases to them, and have a fallback. However, the hardware is only a year old, and they don't want to buy four new servers for the two clusters. I can understand this, but with the size of the databases this is going to cause long downtime.

    Then all you can do is back up multiple times (make sure to have copies on hardware and on tape, etc.), warn the users, and hope everything comes back online.


  • Markus (8/11/2011)


    Using SQL Server Setup, evict the passive node.

    Why do you think this is necessary?

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • I was wondering the same thing, actually. Somewhere else someone suggested that, and I am starting to wonder why I should have to do it.

    If I stop SQL Server and file-copy everything off the drives, make the drive letters the same in the new data center, copy the contents exactly, and edit the IP addresses in Cluster Administrator, then SQL Server should start OK.
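Editing the IP addresses can also be done from the command line with cluster.exe rather than the Cluster Administrator GUI. The resource and network names below are assumptions; list your actual resources first and substitute your own values.

```shell
rem Sketch: updating clustered IP address resources after a move
rem (resource names, network name, and addresses are assumptions).

rem List the cluster resources to find the exact IP resource names.
cluster.exe res

rem Update the cluster IP and the SQL Server virtual IP private properties.
cluster.exe res "Cluster IP Address" /priv Address=10.20.30.40 SubnetMask=255.255.255.0 Network="New Public"
cluster.exe res "SQL IP Address 1 (SQLCLUST01)" /priv Address=10.20.30.41 SubnetMask=255.255.255.0 Network="New Public"

rem Bring the groups online and verify that the network names register.
cluster.exe group "Cluster Group" /online
cluster.exe group "SQL Group" /online
```

The IP address resources must be offline before their private properties can be changed, and the network name resources that depend on them will re-register in DNS when they come back online.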

    Just bring one node up at a time; if you have both up, remove the passive node from the possible-owners list on the cluster resources.

    Are you assigning new LUNs to the servers? You'll need to connect these to all nodes.

    Make sure to restore the NTFS ACLs too!
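Restoring the NTFS ACLs, as mentioned above, can be handled by robocopy itself if the right copy flags are used. Paths here are assumptions for illustration.

```shell
rem Sketch: /COPYALL (equivalent to /COPY:DATSOU) carries NTFS ACLs,
rem owner, and auditing info along with the file data and timestamps.
robocopy S:\SQLData \\newsan\S_drive\SQLData /E /COPYALL /R:2 /W:5

rem /SECFIX re-applies security even to files that were already copied,
rem useful if an earlier pass ran without /COPYALL.
robocopy S:\SQLData \\newsan\S_drive\SQLData /E /SECFIX /COPY:SOU
```

This matters because the SQL Server service account needs its existing permissions on the data and log files; a plain copy that drops the ACLs can leave the instance unable to open its databases at the new site.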


  • Markus (8/15/2011)


    I was wondering the same thing, actually. Somewhere else someone suggested that, and I am starting to wonder why I should have to do it.

    If I stop SQL Server and file-copy everything off the drives, make the drive letters the same in the new data center, copy the contents exactly, and edit the IP addresses in Cluster Administrator, then SQL Server should start OK.

    The more I think about it, the more I wonder whether, if I keep it clustered, I'll be able to copy the files on the Q: quorum drive to the new drive. I bet if you break the cluster you can...

  • I would swap to a Majority Node Set quorum and remove the disk drive dependency to allow the cluster to start.
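On Windows Server 2003, the swap suggested above means creating a Majority Node Set resource and designating it as the quorum, which removes the cluster's dependency on the shared quorum disk. A rough sketch follows; the resource and group names are assumptions, and the exact quorum switch syntax should be verified against the cluster.exe reference for your OS version.

```shell
rem Sketch (Windows Server 2003 era; names are assumptions).

rem Create a Majority Node Set resource in the cluster group.
cluster.exe res "MNS Quorum" /create /group:"Cluster Group" /type:"Majority Node Set"
cluster.exe res "MNS Quorum" /online

rem Designate it as the quorum resource instead of the quorum disk.
cluster.exe /quorum:"MNS Quorum"
```

With an MNS quorum, the cluster can form as long as a majority of nodes are up, so the shared quorum disk no longer has to travel intact, though for a two-node cluster an MNS quorum requires both nodes to be available.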


  • I think you need to review the risks and issues of what you want to do. It is very easy for technical folk to focus on how to solve the technical problem and lose sight of the business risks.

    IMHO you should report back to your manager about the potential risks you are running. You need to quantify the risk of the cluster not working at the new site, and have a plan on how you would deal with this. This needs to range from the server not arriving at the scheduled time or in a workable condition, through to software issues getting the cluster to work.

    You can mitigate the transport risks by insisting each server in the cluster is shipped in a different truck following a different route, so that a problem in one place is unlikely to wipe out the entire cluster. However, you still have to factor in the downtime.

    Your manager should be able to factor in the business risks (that is what he is paid for!), and come to a decision about whether it is safe to try to move the live cluster or build new at the new location. It is very rare that a 1-year-old server could not be repurposed for some other task within its lifetime, so the argument about saving the cost of new servers is fairly weak.

    In your manager's situation, I would look to use new kit to bring up essential services in the new site. This would include the Domain Controllers, DNS, and the critical business servers. If your SQL cluster is essential to the business then it should be part of the new build. For most servers the production levels of disk storage would not be needed; the key point is to ensure the services will work. When the new site is working, then move the rest of the kit, including the disk storage. This maximises the likelihood that essential services will work correctly on the next business day after the move.

    If I took a management decision to move the lot and hope everything worked, I would not be surprised to have a difficult conversation with the company principals about the risk taken, even if everything did work OK.

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008, and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • For my company, when we move data centers, we usually don't have the option to add back in to the existing cluster, and I'm not sure how you're going to do the same unless you have geo-clustering available to you; though I'm not sure how that would be possible if you're moving to a new SAN.

    My thought process would be to rebuild each node and ultimately maintain the SQL Server network name:

    1. Decom one of the nodes in your current data center.

    2. Ship it, rebuild it with a fresh OS.

    3. Install SQL Server as a new failover cluster with all-new disks and, temporarily, a new network name.

    4. Script all of your logins, settings, jobs, etc., as you normally would when moving to a new server.

    5. Log ship your databases for the cutover.

    6. Downtime here while flipping the DBs over:

    a. Bring the databases online on the new location.

    b. Take SQL Server offline on the old cluster in the old data center.

    c. Rename the SQL Server network name in the new cluster.

    7. Once confirmed that all is running well and as expected, decom the second node.

    8. Rebuild the second node with a fresh OS.

    9. Add the node to the cluster.
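The cutover in step 6 above can be sketched from the command line. Every server, cluster, resource, and database name below is an assumption, and note that renaming a SQL Server 2005 virtual server name at the cluster-resource level rather than through Setup should be checked for supportability first; this is only an illustration of the shape of the flip.

```shell
rem Sketch of the step-6 cutover (all names are assumptions).

rem 6a. Recover the log-shipped databases on the new cluster.
sqlcmd -S NEWTEMP -E -Q "RESTORE DATABASE MyDB WITH RECOVERY"

rem 6b. Take SQL Server offline on the old cluster in the old data center.
cluster.exe /cluster:OLDCLUSTER res "SQL Server (MSSQLSERVER)" /offline

rem 6c. Rename the SQL network name resource on the new cluster to the
rem     old production name so clients reconnect without config changes.
cluster.exe res "SQL Network Name (NEWTEMP)" /offline
cluster.exe res "SQL Network Name (NEWTEMP)" /priv Name=OLDNAME
cluster.exe res "SQL Network Name (NEWTEMP)" /online
```

Keeping the original network name is what makes this approach attractive: application connection strings do not have to change, only DNS and the cluster resources do.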

    Unless I misunderstood the OP's initial goal, that would be how I would personally approach the move.

    Best regards,

    Steve
