April 15, 2015 at 8:00 am
Thank you for your feedback. If I first create the Windows cluster, how would I then add the existing nodes' storage to the cluster, given that the drives are owned by the two nodes?
Create new LUNs for the cluster that both servers can see. Don't re-use the old LUNs; leave them with the existing instances, then remove them (and uninstall the standalone instances) once your migration is done.
These two nodes are production servers, and we want to make them clustered. Also, how much downtime is involved? Off hours, of course, as you stated. Could you please briefly list the steps for presenting storage to the cluster? We are using a SAN.
It would have been nice to have two new separate servers, right? 😉 It's a pain re-designing into a cluster afterwards. I would hope your SAN admin can help you with presenting the new LUNs to both servers; different strokes for FC, iSCSI, etc.
But basically: present the LUNs to the servers and connect them in the OS. They should show up on both servers in Disk Management, but look kind of weird since the cluster hasn't been set up yet. This should be pretty safe to do during the daytime.
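If you want to sanity-check from PowerShell instead of Disk Management, something like this is a quick way to spot the new LUNs on each node (assuming Windows Server 2012 or later, where the Storage module ships in the box; new, unclustered LUNs typically show up as raw or offline disks):

    # Run on each node to confirm the newly presented LUNs are visible.
    # The filter keeps only raw/offline disks, which is how fresh SAN LUNs appear.
    Get-Disk |
        Where-Object { $_.PartitionStyle -eq 'RAW' -or $_.IsOffline } |
        Select-Object Number, FriendlyName,
            @{n='SizeGB'; e={[math]::Round($_.Size/1GB)}},
            OperationalStatus

If the same disks (same sizes) show up on both nodes, the presentation part is done.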
Turn off any running SQL services and run the validation tests off hours. This could be somewhat problematic if you're unlucky: depending on what technology you use, you could end up with some drive letters swapped around (I've had that problem with iSCSI setups). If so, you just have to sort out which LUNs go where so your production instances can continue to run.
Sizing the new LUNs differently might help you tell them apart, e.g. instance 1 gets disks of 21, 31, and 41 GB, instance 2 gets 22, 32, and 42 GB. Validation checks usually take anywhere from 5 to 20 minutes to run; for sorting things out afterwards, give it a couple of hours to be safe. 🙂
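Roughly, the off-hours part could look like this from an elevated PowerShell prompt. A sketch only: the node names and the instance service name are placeholders for your environment:

    # Stop the standalone SQL services first (named-instance services are MSSQL$NAME).
    Stop-Service -Name 'MSSQL$INSTANCE1' -Force

    # Run the full cluster validation from either node.
    Import-Module FailoverClusters
    Test-Cluster -Node 'SQLNODE1','SQLNODE2'

    # Afterwards, the distinct sizes make it easy to tell which LUN is which.
    Get-Disk | Sort-Object Size |
        Select-Object Number, @{n='SizeGB'; e={[math]::Round($_.Size/1GB)}}

Test-Cluster writes an HTML report you can review before going any further.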
Set up the Windows part of the cluster and, if your production servers are still running safe and sound and you have time, just continue and install the SQL part of the clustered instances. Give this another couple of hours, and then another couple of hours for testing failovers etc. Hope it helps somewhat. Good luck!
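For the Windows part and the failover test, a minimal sketch with the FailoverClusters cmdlets; the cluster name, node names, IP address, and SQL role name are all placeholders (the role name depends on your instance name):

    # Create the Windows failover cluster.
    New-Cluster -Name 'SQLCLUS01' -Node 'SQLNODE1','SQLNODE2' -StaticAddress '10.0.0.50'

    # Add the newly presented LUNs as cluster disks.
    Get-ClusterAvailableDisk | Add-ClusterDisk

    # After the clustered SQL instance is installed, test a failover to the other node.
    Move-ClusterGroup -Name 'SQL Server (INSTANCE1)' -Node 'SQLNODE2'

Move the role back afterwards and check that the databases come online cleanly on both nodes.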
April 15, 2015 at 3:15 pm
I think there's some confusion between clustering at the OS level and a clustered SQL instance.
Yes, you can install standalone instances on a Windows Server that is part of a cluster. But there is no way to upgrade a standalone instance to a cluster instance. The clustering option must be selected (and all clustering prerequisites met) at the time SQL Server is installed.
If you need the benefits of a failover cluster you can install a Clustered SQL instance on your Windows Cluster (how to present network disks to the cluster is probably outside of the scope of this forum - please talk to your SAN/Windows Server people regarding that).
OR
If your standalone instances are SQL 2012 or 2014, you may be able to set up one or two AlwaysOn Availability Groups. These can be set up so that all databases run from a single instance should one of the servers fail. It will, however, require both instances to have copies of all databases, since each instance has its own separate storage.
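For what it's worth, a rough sketch of the AG route using the SQLPS cmdlets that ship with SQL 2012+. All server, instance, and database names are placeholders, and I've left out the seeding step (a full backup of each database restored WITH NORECOVERY on the secondary), which has to happen before the secondary databases can join:

    Import-Module SQLPS -DisableNameChecking

    # Enable the AlwaysOn feature on both instances (this restarts the SQL service).
    Enable-SqlAlwaysOn -ServerInstance 'SQLNODE1\INST1' -Force
    Enable-SqlAlwaysOn -ServerInstance 'SQLNODE2\INST2' -Force

    # Define a replica template for each instance.
    $r1 = New-SqlAvailabilityReplica -Name 'SQLNODE1\INST1' `
            -EndpointUrl 'TCP://SQLNODE1:5022' `
            -AvailabilityMode SynchronousCommit -FailoverMode Automatic `
            -AsTemplate -Version 12
    $r2 = New-SqlAvailabilityReplica -Name 'SQLNODE2\INST2' `
            -EndpointUrl 'TCP://SQLNODE2:5022' `
            -AvailabilityMode SynchronousCommit -FailoverMode Automatic `
            -AsTemplate -Version 12

    # Create the group on the primary, then join the secondary to it.
    New-SqlAvailabilityGroup -Name 'AG1' -Path 'SQLSERVER:\SQL\SQLNODE1\INST1' `
            -AvailabilityReplica $r1, $r2 -Database 'MyDb'
    Join-SqlAvailabilityGroup -Path 'SQLSERVER:\SQL\SQLNODE2\INST2' -Name 'AG1'

The databases must be in FULL recovery model with at least one full backup taken before they can be added to the group.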
April 16, 2015 at 1:51 am
The downside with AlwaysOn AGs = the robbers emptying your license budget account, since it requires Enterprise Edition though...
Not to mention if you have to run it "Active/Active". :crying: