Convert from stand-alone SQL instance to clustered on new hardware?

  • I have a SQL Server 2008 R2 server (let's call it PR001).

    It is a stand-alone installation using iSCSI SAN data volumes, though the master/model/msdb are located on the C: drive (which is a local physical volume on the server).

    I have just obtained two new servers to build a cluster for higher availability.

    These two cluster nodes currently have bare OS installs; let's call them CL001 and CL002.

    I have several other SQL Servers all over the place... data warehousing, SSIS package runners, SSRS installations, beta and dev environments, etc. Each of them has linked server connections to PR001, SSIS packages have server names programmed into them, and so on. Then there are all the in-house apps that have either config files or hard-coded connection strings looking for PR001. I want to maintain the PR001 host name and SQL instance name.

    I also have an extensive set of logins to migrate.

    I've been looking for a way to migrate a stand-alone SQL instance to a clustered instance. It doesn't seem to be documented by Microsoft anywhere, so I'm trying to come up with a plan that involves minimal downtime.

    So far, I have come up with:

    1. Declare an outage.

    2. Change the IP address of PR001 to a new address.

    3. Rename the Windows host name of PR001 to PROLD.

    --4. Rename the SQL instance name from PR001 to PROLD (never mind, not doing this; see the ETA below)

    5. Reboot

    6. Stop SQL, move the master MDF/LDF files to an iSCSI volume rather than C:.

    7. Use the -d and -l startup parameters to tell the SQL Server service where the master MDF/LDF files are now located.

    8. Move the msdb/model databases with this statement: ALTER DATABASE database_name MODIFY FILE ( NAME = logical_name , FILENAME = 'new_path\os_file_name' ) and restart the SQL services. (A rough sketch of steps 7 and 8 follows this list.)

    9. Stop SQL, dismount all iSCSI volumes associated with the PR001 resource.

    10. Build a failover SQL cluster called PR001 and give the cluster IP the old PR001 IP address. Use the old iSCSI volumes holding the user databases as well as the old master/model/msdb from the old PR001.

    11. Stop SQL Services.

    12. Use the -d and -l startup parameters to tell the SQL Server service where the master MDF/LDF is located, being sure to point at the files that came from the old PR001.

    13. Ensure the paths on the new cluster are identical to those of the previous stand-alone environment.

    14. Bring SQL online and cross fingers that the stand-alone master DB will function in the clustered environment.
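
    For what it's worth, here's roughly what steps 7 and 8 could look like. The paths (S:\SQLData, S:\SQLLogs) are placeholders for the iSCSI volume, and the logical file names are the defaults on a 2008 R2 install (verify yours in sys.master_files first):

    -- Step 7: master is relocated via the service startup parameters
    -- (SQL Server Configuration Manager -> startup parameters), e.g.:
    --   -dS:\SQLData\master.mdf
    --   -lS:\SQLLogs\mastlog.ldf

    -- Step 8: repoint model and msdb in the catalog, stop SQL, move the
    -- physical files, then restart the service
    ALTER DATABASE model MODIFY FILE (NAME = modeldev, FILENAME = 'S:\SQLData\model.mdf');
    ALTER DATABASE model MODIFY FILE (NAME = modellog, FILENAME = 'S:\SQLLogs\modellog.ldf');
    ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBData, FILENAME = 'S:\SQLData\MSDBData.mdf');
    ALTER DATABASE msdb  MODIFY FILE (NAME = MSDBLog,  FILENAME = 'S:\SQLLogs\MSDBLog.ldf');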

    Has anyone done this before? Or found a different way to preserve all SQL Agent jobs, SSIS packages, logins, linked servers, log shipping/mirroring/replication jobs, etc and convert a stand-alone installation to a clustered install?

    I'm going to test this out on a smaller scale in a test lab (obviously) prior to doing this operation in prod. The only step I really worry about is bringing master online after it moves from a stand-alone to a clustered environment.

    ETA: Step 4 is unnecessary and counterproductive because the intent is to use the live MDF/LDF files for master, and renaming the SQL instance (via sp_dropserver 'PR001' followed by sp_addserver 'PROLD', 'local') affects the master database in a way I don't want to deal with in the cluster.
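
    (For reference only, since step 4 was dropped, the instance rename would have been roughly the following, followed by a service restart so @@SERVERNAME picks up the change:)

    EXEC sp_dropserver 'PR001';
    EXEC sp_addserver 'PROLD', 'local';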

  • Don't move the databases; extract all the required objects and import them into your new clustered instance.

    -----------------------------------------------------------------------------------------------------------

    "Ya can't make an omelette without breaking just a few eggs" 😉

  • This is how I would move to a new cluster:

    1) Build New Cluster instance as PR002

    2) Migrate logins, SSIS packages, and agent jobs (script them out of PR001 and execute the scripts on PR002).

    3) Restore all user databases from PR001 to PR002

    4) Test new system thoroughly

    5) Declare outage

    6) Lock users out of PR001 (alter the databases to RESTRICTED_USER or SINGLE_USER)

    7) Back up the databases on PR001 and restore them on PR002 (note: this downtime can be shortened by using mirroring, log shipping, differential and transaction log backups, etc.; a rough sketch of steps 6 and 7 follows this list)

    8) Synchronize logins/users if needed (use sp_help_revlogin to build the logins up front and this step won't be needed; see the second sketch below)

    9) Shut down PR001, or rename the host if the server is still needed

    10) Update DNS to redirect PR001 to PR002

    11) Release system
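
    A rough sketch of steps 6 and 7 for a single database (the database name, logical file names, paths, and backup share are placeholders; ROLLBACK IMMEDIATE kicks out any open connections):

    -- Step 6: lock users out on PR001
    ALTER DATABASE SalesDB SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

    -- Step 7: take a final backup on PR001...
    BACKUP DATABASE SalesDB TO DISK = '\\backupshare\SalesDB_final.bak' WITH INIT;
    -- ...and restore it on PR002 (MOVE is only needed if the file paths differ)
    RESTORE DATABASE SalesDB FROM DISK = '\\backupshare\SalesDB_final.bak'
        WITH MOVE 'SalesDB_Data' TO 'S:\SQLData\SalesDB.mdf',
             MOVE 'SalesDB_Log'  TO 'S:\SQLLogs\SalesDB_log.ldf',
             RECOVERY;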

    This gives you the opportunity to test and evaluate - and gives your users the ability to test and evaluate the new system before switching over.
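
    For steps 2 and 8: sp_help_revlogin is not built in; it's the procedure from Microsoft KB 918992, which you create on PR001 and then run to script the logins out with their original SIDs and password hashes, so the database users don't end up orphaned on PR002:

    -- On PR001, after creating the procedure from KB 918992:
    EXEC sp_help_revlogin;
    -- Copy the generated CREATE LOGIN statements from the output and run them on PR002.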

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Thanks for jotting down the steps; it's easy to rework steps that way. I've made some changes, see if this works for you...

    Pre steps:

    1. Have both nodes configured and the failover clustering feature installed

    2. Take backups of all databases, including the system DBs

    Outage steps:

    3. Declare outage

    4. Take all log backups

    5. Shut down the server

    6. Install the cluster on the new nodes with the original name and IP, using the shared drives, and install SQL with the right virtual name and IP

    7. Stop cluster along with all resources

    8. Start SQL Server in single-user mode and restore the master DB (see the sketch below)

    9. Restore msdb and the other DBs as usual

    THAT'S IT...

    It will work; in the worst-case scenario, you can start the old server and continue as usual.
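
    A rough sketch of steps 8 and 9 (the backup paths are placeholders; on a clustered instance you would likely take the SQL Server resource offline and start the service from the command line with the -m flag for the master restore):

    -- Step 8: with the instance started in single-user mode (-m), restore master;
    -- the instance shuts itself down when the restore completes, then restart it normally
    RESTORE DATABASE master FROM DISK = '\\backupshare\master_full.bak' WITH REPLACE;

    -- Step 9: restore msdb (SQL Agent must be stopped) and model as usual
    RESTORE DATABASE msdb  FROM DISK = '\\backupshare\msdb_full.bak'  WITH REPLACE;
    RESTORE DATABASE model FROM DISK = '\\backupshare\model_full.bak' WITH REPLACE;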

    Prakash Heda
    Lead DBA Team - www.sqlfeatures.com
    Video sessions on Performance Tuning and SQL 2012 HA
