June 30, 2009 at 11:04 pm
i have been asked to recommend a cluster design to host numerous databases that are to be migrated from old servers. i have designed and built a cluster before but it was 3 years ago using sql 2000 and it looks like a lot has changed since then.
there are 30 servers in scope and i have been asked to ensure the solution is scalable to absorb future servers as they are retired. the obvious choice is to recommend a huge box with as many cpus as i can cram into it running sql 2008 enterprise. however, the large one-off cost is likely to get the project scrapped.
i'm thinking that a better solution at present is to suggest 2 nodes with 1 x 4-core cpu in each and one sql 2008 standard license. as its cpu performance capacity is reached, all that will be required to scale it up will be to add 1 x 4-core cpu into each node, more memory, more disk, and another sql 2008 standard license to create another instance.
here are my questions:
(1) is this doable? ie can i install multiple instances on a cluster, and have them use specific cpu cores? eg let's say i have installed 3 instances on a cluster whose nodes have 3 x 4-core cpus giving 12 cores in each node. instance a1 uses cores 1-4, instance b2 uses cores 5-8 and instance c3 uses cores 9-12. this means i only have to purchase 3 sql 2008 standard licenses yea?
(2) is the cluster smart enough to know that when it fails an instance over, it must keep using only the cores allocated to it? ie instance a1 uses cores 1-4, instance b2 uses cores 5-8 and instance c3 uses cores 9-12
(3) i understand that i can have a maximum of 16 instances on the same node. that's fine. however, i must be careful with the way the drives are designated as there are only 25 letters to use. unless i use mount points. um, what's a mount point in this context? my research has only found info that is either too generic or assumes too much knowledge of network storage devices.
(4) let's say i choose not to use mount points and stick with drive letters. with network storage being what it is, is it the case i do not really need to put my data, log & tempdb files on different 'drives' since it's the mirroring & striping of the network storage itself that provides the safety & avoids disk contention?
(5) my solution must be compatible with an as-yet unknown future disaster recovery project. my thinking is that the most direct way to do this will be to physically separate the cluster nodes. to make the most of this, i should also separate the disk mirrors. is this doable? ie is there such a thing as a network storage device that allows you to have one set of its disks in one place and the mirror disks in another place?
any responses much appreciated.
rob
July 1, 2009 at 1:16 am
(1) is this doable? ie can i install multiple instances on a cluster, and have them use specific cpu cores? eg let's say i have installed 3 instances on a cluster whose nodes have 3 x 4-core cpus giving 12 cores in each node. instance a1 uses cores 1-4, instance b2 uses cores 5-8 and instance c3 uses cores 9-12. this means i only have to purchase 3 sql 2008 standard licenses yea?
Yes, this is workable by using affinity masking in SQL Server (this is an option in sp_configure). It will restrict the instance to specific CPUs. For 3 active instances you will purchase 3 licenses, but since there is also a per-CPU licensing option you may end up spending less (for 1 quad-core CPU you need only 1 processor license).
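As a minimal sketch of the affinity mask idea (assuming your instance b2 should be pinned to cores 5-8, which are CPUs 4-7 in SQL Server's zero-based numbering, giving the bitmask 0x00F0 = 240), run this while connected to that instance:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- Bits 4-7 set (decimal 240) pin this instance to the second block of four cores.
EXEC sp_configure 'affinity mask', 240;
RECONFIGURE;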
(2) is the cluster smart enough to know that when it fails an instance over, it must keep using only the cores allocated to it? ie instance a1 uses cores 1-4, instance b2 uses cores 5-8 and instance c3 uses cores 9-12
Yes. The changes made through sp_configure are stored in the master database, and all the databases sit on the shared drives, so the same settings will prevail after a failover.
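If you want to verify this after a failover, you can check the instance's running value (sys.configurations is per instance, so connect to the instance that just moved):

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'affinity mask';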
(3) i understand that i can have a maximum of 16 instances on the same node. that's fine. however, i must be careful with the way the drives are designated as there are only 25 letters to use. unless i use mount points. um, what's a mount point in this context? my research has only found info that is either too generic or assumes too much knowledge of network storage devices.
Well, I don't think you will be reaching the 16-instance limit. Microsoft's NTFS 3 supports volume mount points through the use of NTFS reparse points, which allow volumes to be mounted at arbitrary folders in the file system in addition to the standard drive letters (e.g. C:, E:). Clustering supports mount points, but there may be better options that others can offer.
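From SQL Server's side a mount point is just a folder path, so one clustered drive letter can front many volumes. A hypothetical sketch, assuming separate volumes have already been mounted (via Disk Management) at M:\InstanceB2\Data and M:\InstanceB2\Log, and that those volumes were added as cluster disk resources dependent on the M: drive:

-- Both files live under the single drive letter M:, but on different volumes.
-- SalesDB and the paths are placeholders.
CREATE DATABASE SalesDB
ON PRIMARY (NAME = SalesDB_data, FILENAME = 'M:\InstanceB2\Data\SalesDB.mdf')
LOG ON (NAME = SalesDB_log, FILENAME = 'M:\InstanceB2\Log\SalesDB_log.ldf');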
(4) let's say i choose not to use mount points and stick with drive letters. with network storage being what it is, is it the case i do not really need to put my data, log & tempdb files on different 'drives' since it's the mirroring & striping of the network storage itself that provides the safety & avoids disk contention?
Putting the data, log, backup and tempdb files on different physical drives will still give you better performance. Striping and mirroring on the SAN protect the data and spread I/O within a LUN, but if your data, log and tempdb "drives" are carved from the same physical spindles, the sequential log writes and random data reads still compete with each other. Ask the SAN administrator for separate spindle sets, not just separate letters.
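For example, a sketch of moving tempdb onto its own LUNs (T: and L: are placeholder drive letters; the files actually move the next time the instance restarts):

-- tempdev and templog are the default logical file names for tempdb.
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'L:\TempDB\templog.ldf');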
(5) my solution must be compatible with an as-yet unknown future disaster recovery project. my thinking is that the most direct way to do this will be to physically separate the cluster nodes. to make the most of this, i should also separate the disk mirrors. is this doable? ie is there such a thing as a network storage device that allows you to have one set of its disks in one place and the mirror disks in another place?
For disaster recovery you would have the RAID drives on the SAN, and yes, many SAN vendors offer array-to-array replication, so one set of disks can sit in one site with its mirror in another. But you can also use SQL Server options like log shipping, database mirroring and replication to create a DR site (in case both nodes go down).
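As a rough sketch of the database mirroring option (server names, database name and port are placeholders; the database must first be restored on the mirror WITH NORECOVERY):

-- Run on both partners to create a mirroring endpoint:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);

-- On the mirror server, point at the principal:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://principal.example.local:5022';
-- Then on the principal server, point at the mirror:
ALTER DATABASE SalesDB SET PARTNER = 'TCP://mirror.example.local:5022';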
Abhay Chaudhary
Sr. DBA (MCITP/MCTS: SQL Server 2005/2008, OCP 9i)
July 6, 2009 at 9:19 am
I agree with Abhay.
July 6, 2009 at 11:31 am
(1) is this doable? ie can i install multiple instances on a cluster, and have them use specific cpu cores? eg let's say i have installed 3 instances on a cluster whose nodes have 3 x 4-core cpus giving 12 cores in each node. instance a1 uses cores 1-4, instance b2 uses cores 5-8 and instance c3 uses cores 9-12. this means i only have to purchase 3 sql 2008 standard licenses yea?
Yes, this is workable by using affinity masking in SQL Server (this is an option in sp_configure). It will restrict the instance to specific CPUs. For 3 active instances you will purchase 3 licenses, but since there is also a per-CPU licensing option you may end up spending less (for 1 quad-core CPU you need only 1 processor license).
I am not sure whether you are referring to licensing or to processor allocation. In a cluster I would not consider processor affinity settings at all, since during a failover I could not guarantee those processors would be available on the other node.
And besides, licensing for SQL Server 2008 under the processor model is by machine. You have to license each processor that is available to the OS. You can't have a system with two quad-core processors and purchase only a single processor license, unless you disable the second processor at the OS level.
Under a processor license you can install as many instances as you have the resources to support and the edition allows.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
July 6, 2009 at 4:31 pm
I am not sure whether you are referring to licensing or to processor allocation. In a cluster I would not consider processor affinity settings at all, since during a failover I could not guarantee those processors would be available on the other node.
i am referring to processor allocation. we are happy with the information we have on licencing.
why is it that during a failover the processors might not be available on the other node? eg if instance b2 has affinity settings that limit it to using cpus 5-8 on one node, why would it not use processors 5-8 when it fails over to the other node? isn't that the sort of thing that the affinity settings are for? both nodes will of course have identical physical builds.
July 6, 2009 at 5:11 pm
rob mcnicol (7/6/2009)
i am referring to processor allocation. we are happy with the information we have on licencing. why is it that during a failover the processors might not be available on the other node? eg if instance b2 has affinity settings that limit it to using cpus 5-8 on one node, why would it not use processors 5-8 when it fails over to the other node? isn't that the sort of thing that the affinity settings are for? both nodes will of course have identical physical builds.
You would have to make sure every instance, on every node, has the right processors allocated. The affinity mask does follow the instance when it fails over (it lives in master, on the shared disk), but nothing guarantees the mask still makes sense on the other node: two instances can land on the same node with overlapping masks, and if the nodes ever differ in processor count the mask can reference cores that don't exist. If you want to go through that overhead, that is fine - but it has absolutely no effect on licensing.
If you have two quad-core processors, you would need to license each SQL Server instance either by CAL or for both processors. You can't purchase a single processor license, tell SQL Server to use only one physical processor, and be okay with the licensing.
In a multi-node cluster with multiple instances running on every node, you either purchase the appropriate CAL licensing for every instance in the cluster, or you purchase processor licenses for every processor on all nodes.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs