October 18, 2011 at 9:11 am
We are getting ready to build a new SQL Server 2008 R2 cluster on Windows Server 2008 R2, and I am looking for feedback about using SAN-based mount points in the failover cluster. Unfortunately, the following link has been altered to remove its content; I was hoping it would be a good "how to" article.
http://www.sqlservercentral.com/articles/Clustering/65968/
I've read about the positive aspect of conserving server drive letters by using mount points, and that would be helpful to us because, as a hosting facility, we would like to stretch a given cluster configuration as far as possible before building an additional cluster.
I am looking for real-world experience. Does anyone have a strong opinion, yea or nay, about using SAN-based mount points for failover clustering? Are they worth the trouble of learning how to implement? Are there any gotchas that we would not likely think about until we get bitten?
October 18, 2011 at 9:55 am
Having done this recently...
I like mount points; they help with the problem of running out of drive letters, for sure.
However, there are some things to keep in mind.
You can only have one default instance PER CLUSTER, so I have gone with entirely named instances on clusters; YMMV.
I format ALL drives containing SQL data with 64K allocation units. My understanding is that SQL Server reads an extent (64K, or eight 8K pages) at a time; the default NTFS allocation unit is 4K, so with 64K units it reads what it wants in one allocation unit instead of 16.
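If you want to double-check the allocation unit after formatting, here is a minimal sketch using WMI; Win32_Volume.BlockSize reports the allocation unit in bytes (65536 = 64K):

# Verify the NTFS allocation unit size on every volume (run on the node itself).
Get-WmiObject -Namespace root\CIMV2 -Class Win32_Volume |
    Where-Object { $_.FileSystem -eq 'NTFS' } |
    Select-Object Name, Label, BlockSize |
    Format-Table -AutoSize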
You will need a drive for the shared components. This drive is not shared in the clustering sense; it is a local drive on each node. I like to use D:.
The drive that the mount points are mounted into must be a "shared" drive as well.
Since I don't allow SQL Server to run with elevated rights (such as putting the service login in the admin group), SQL Server may have trouble getting access to the drives. I got around this by initially mounting the drives at discrete drive letters, giving the service login full rights to the drives, then removing the drive letters and mounting them at the correct spots. This is necessary because even though I grant the service login permissions on the mount drive, those permissions do not carry forward into the mounted drives.
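A rough sketch of that dance, using mountvol and icacls from an elevated prompt; the service login and the volume GUID below are placeholders for yours:

# DOMAIN\sqlsvc is a placeholder for the SQL service login; the volume GUID
# comes from running 'mountvol' with no arguments.
$vol = '\\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\'

mountvol X: $vol                              # 1. mount at a spare letter
icacls X:\ /grant 'DOMAIN\sqlsvc:(OI)(CI)F'   # 2. give the login full rights
mountvol X: /D                                # 3. drop the letter (the ACLs stay with the volume)
mountvol E:\VirtualSQLServerName01_InstanceName\SQLData01 $vol   # 4. remount at the folder (must exist and be empty)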
Only after all that is done do I add them as cluster drives.
Here is an example of what I like:
D:\ (local disk on each node, but it will contain the "shared" components of SQL)
E:\ (shared disk; will contain ALL of this instance's data)
E:\VirtualSQLServerName01_InstanceName\Instance (where I install the actual components of SQL)
E:\VirtualSQLServerName01_InstanceName\SQLData01
E:\VirtualSQLServerName01_InstanceName\SQLData02
E:\VirtualSQLServerName01_InstanceName\SQLData03
E:\VirtualSQLServerName01_InstanceName\SQLData04
E:\VirtualSQLServerName01_InstanceName\SQLData05
E:\VirtualSQLServerName01_InstanceName\SQLData06
E:\VirtualSQLServerName01_InstanceName\SQLData07
E:\VirtualSQLServerName01_InstanceName\SQLLogs01
E:\VirtualSQLServerName01_InstanceName\SQLLogs02
E:\VirtualSQLServerName01_InstanceName\SQLLogs03
E:\VirtualSQLServerName01_InstanceName\SQLLogs04
E:\VirtualSQLServerName01_InstanceName\SQLLogs05
E:\VirtualSQLServerName01_InstanceName\SQLLogs06
E:\VirtualSQLServerName01_InstanceName\SQLLogs07
E:\VirtualSQLServerName01_InstanceName\SQLTempDBData
E:\VirtualSQLServerName01_InstanceName\SQLTempDBLogs
When I install SQL I replace the C:\ at the beginning with D:\. This causes most of the shared components to end up on D:\. A little still gets installed with the OS, but I am trying to keep as much as possible off the OS drive.
When I install an instance I choose the install point of E:\VirtualSQLServerName01_InstanceName\Instance.
I then set default data to: E:\VirtualSQLServerName01_InstanceName\SQLData01
I then set default logs to: E:\VirtualSQLServerName01_InstanceName\SQLLogs01
I then set the tempdb data location to: E:\VirtualSQLServerName01_InstanceName\SQLTempDBData
I then set the tempdb log location to: E:\VirtualSQLServerName01_InstanceName\SQLTempDBLogs
I then set the backup location to a network share; I don't do local backups.
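If you ever script the install, the same choices map to the 2008 R2 unattended setup parameters, roughly like the sketch below; the instance name, virtual network name, and backup share are placeholders, and the other required failover-cluster switches (IP addresses, disks, service accounts) are left out, so check the setup documentation for your build:

# A sketch only - several required failover-cluster parameters are omitted.
& .\setup.exe '/ACTION=InstallFailoverCluster' `
    '/INSTANCENAME=InstanceName' `
    '/FAILOVERCLUSTERNETWORKNAME=VirtualSQLServerName01' `
    '/INSTALLSHAREDDIR=D:\Program Files\Microsoft SQL Server' `
    '/INSTANCEDIR=E:\VirtualSQLServerName01_InstanceName\Instance' `
    '/SQLUSERDBDIR=E:\VirtualSQLServerName01_InstanceName\SQLData01' `
    '/SQLUSERDBLOGDIR=E:\VirtualSQLServerName01_InstanceName\SQLLogs01' `
    '/SQLTEMPDBDIR=E:\VirtualSQLServerName01_InstanceName\SQLTempDBData' `
    '/SQLTEMPDBLOGDIR=E:\VirtualSQLServerName01_InstanceName\SQLTempDBLogs' `
    '/SQLBACKUPDIR=\\backupserver\sqlbackups'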
After they are all in the resource group, I make sure all the mounted volumes are dependent on the mount drive, so the cluster always brings that drive online first and then the sub-drives.
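With the FailoverClusters PowerShell module that ships with Windows 2008 R2, setting those dependencies looks something like this; the resource names are placeholders, so list yours with Get-ClusterResource first:

Import-Module FailoverClusters   # installed with the failover clustering feature

# Make each mounted volume depend on the host drive so E: comes online first.
$mounted = 'Cluster Disk SQLData01', 'Cluster Disk SQLLogs01', 'Cluster Disk SQLTempDBData'
foreach ($disk in $mounted) {
    Add-ClusterResourceDependency -Resource $disk -Provider 'Cluster Disk E'
}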
I also use a methodology of more, smaller LUNs for the volumes, trying to spread the load across LUNs. If I have a DB with high utilization, I may move it to its own volumes. This strategy has worked well for me.
I want to make another point: drive letters are not your only potential problem. Your HBAs for the SAN have limits too; if ALL the SQL instances are hammering away at disk, the HBAs can become saturated and performance can suffer. If you intend to run a number of SQL instances, you also need to keep CPU and memory in mind, memory probably being the more important of the two.
I hope all that was clear.
CEWII
October 18, 2011 at 10:05 am
I'm all for mount points, for the main reason that I can conserve drive letters. We're running a couple of 2K8 R2 clusters, each with five or six instances, with separate drives for system DBs, tempdb, user DBs and backups.
The only downside I can think of is that you need an additional drive for each instance to host the mount points, which, if it failed, would in turn take the instance offline. Then again, a failed system DB drive would take your instance offline as well!
They're straightforward to set up (assign the drive a mount point in Disk Management), though I can't point you in the direction of any documentation. The key thing is to make sure the disk hosting the mount points and all the mounted disks are resources for the SQL Server service.
Cheers
Vultar
October 18, 2011 at 10:21 am
Elliott,
Thanks for the quick response. It sounds as if we already configure our SQL Server installations in a fashion similar to yours.
In your post, if I am understanding correctly, all of your database files for the instance reside in various subdirectories (which are actually mount points) of the E: drive.
What happens if a given mount point (say, SQLData03) runs out of space?
Is that a single point of failure, meaning that the offending database that needs to grow will shut down until that mount point has some free space? Or will SQL Server try to expand file space in a different subdirectory?
Will Windows Explorer continue to show the E: drive as having free space when one of its mount points fills up?
If so, how do you chase down which mount point is full?
October 18, 2011 at 10:41 am
The mount point won't "shut down"; it would just run out of space. I try to allocate only to about 50%, meaning I should generally get a lot of notice before this happens. Also, keep in mind that you could add a file for the database in another mount point and set a max size on the file in the full volume (a sketch follows). In many cases your SAN admin can extend the LUN and then the underlying volume. Databases consist of files that are explicitly placed; if one runs out of space and no other files exist (on different volumes), that database effectively becomes read-only. It isn't pretty, and I would recommend that you not allow it to happen.
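A sketch of the add-a-file option; the database name and file names are made up, and Invoke-Sqlcmd assumes the SQL client tools (the sqlps snap-in on 2008 R2) are loaded:

# Add a data file on a roomier mount point and cap the file on the full volume.
# 'SalesDB' and the file names are placeholders.
$fix = @"
ALTER DATABASE SalesDB
ADD FILE ( NAME = SalesDB_Data2,
           FILENAME = 'E:\VirtualSQLServerName01_InstanceName\SQLData02\SalesDB_Data2.ndf',
           SIZE = 1GB, FILEGROWTH = 512MB );

ALTER DATABASE SalesDB
MODIFY FILE ( NAME = SalesDB_Data, MAXSIZE = 50GB );
"@
Invoke-Sqlcmd -ServerInstance 'VirtualSQLServerName01\InstanceName' -Query $fix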
I say "allow" because rarely will a drive just fill up all at once; it most often happens over time. You should have monitoring in place that warns when drive X has less than 10GB free, and most tools are smart enough to handle mount points. The one I am writing is. Don't depend on xp_fixeddrives; it won't give you a true picture.
Yes, E: will still show space available. You will see in the SQL Server errorlog that such-and-such a file ran out of space. You can also go to Computer Management, open Disk Management, and see all the volumes; there is a column there that shows percent free.
Any other questions?
CEWII
October 18, 2011 at 11:01 am
Elliott Whitlow (10/18/2011)
Don't depend on xp_fixeddrives; it won't give you a true picture.
Hmmm... Really? What causes xp_fixeddrives to yield a faulty picture?
We use xp_fixeddrives to monitor all of our drives for all database servers via a PowerShell script that runs as a Scheduled Task. The Windows admins have expensive third-party monitoring installed as well, but the DBAs never get the desired disk space warnings until it is too late to take action. So, we wrote some home grown scripts that appear to do a better job, assuming that we aren't missing something here. From your experience, what do we need to look for regarding xp_fixeddrives?
You can also go to Computer Management, open Disk Management, and see all the volumes; there is a column there that shows percent free.
Is there a way to keep tabs on all of the mount points for all of the servers from a remote machine?
October 18, 2011 at 11:16 am
shew (10/18/2011)
Elliott Whitlow (10/18/2011)
Don't depend on xp_fixeddrives; it won't give you a true picture.
Hmmm... Really? What causes xp_fixeddrives to yield a faulty picture?
We use xp_fixeddrives to monitor all of our drives for all database servers via a PowerShell script that runs as a Scheduled Task. The Windows admins have expensive third-party monitoring installed as well, but the DBAs never get the desired disk space warnings until it is too late to take action. So, we wrote some home grown scripts that appear to do a better job, assuming that we aren't missing something here. From your experience, what do we need to look for regarding xp_fixeddrives?
You can also go to Computer Management, open Disk Management, and see all the volumes; there is a column there that shows percent free.
Is there a way to keep tabs on all of the mount points for all of the servers from a remote machine?
Yes, xp_fixeddrives does not show you anything about mount points. Therefore, if you have an E drive with 50GB and 40GB free, but ten mount points of 50GB each under it, you will only see the 40GB free; the fact that one of the mount points is full will not be shown. So the short answer is that if you use NO mount points, then xp_fixeddrives can work for you. However, I like solutions that work in ALL cases.
You can monitor them remotely; the easiest way, depending on your perspective, is to use WMI. I have been working on exactly this for the last two weeks. With any luck I will release something on CodePlex by the end of November: a framework for collecting and monitoring basic information about SQL instances and the machines they sit on. I have been working on the collection aspects; the reports and alerting are next. At this point, though, the only alerting I am planning is for drive space.
CEWII
October 18, 2011 at 11:26 am
Elliott Whitlow (10/18/2011)
So the short answer is that if you use NO mount points, then xp_fixeddrives can work for you. However, I like solutions that work in ALL cases.
Good to know. Looks like I will need to revisit our monitoring script if we go with mount points.
You can monitor them remotely; the easiest way, depending on your perspective, is to use WMI. I have been working on exactly this for the last two weeks. With any luck I will release something on CodePlex by the end of November: a framework for collecting and monitoring basic information about SQL instances and the machines they sit on. I have been working on the collection aspects; the reports and alerting are next. At this point, though, the only alerting I am planning is for drive space.
Cool! I am very interested--I would love to get a link to your CodePlex project when you are finished.
By the way, since PowerShell has a WMI interface, maybe I can add that functionality to my script fairly simply. Can you post the syntax you are using to monitor the mount points?
October 18, 2011 at 12:01 pm
To look over what is available in WMI, you can look for a tool called WMIExplorer. The namespace is Root\CIMV2 (the most common namespace) and the class is Win32_Volume. I use a couple of other classes, but for space this is basically the place. You'll have to look up the class documentation to translate DriveType and the other coded values.
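For example, something along these lines; a sketch, with 'SQLNODE1' standing in for one of your nodes:

# Win32_Volume sees mounted folders as well as lettered drives, which is
# exactly what xp_fixeddrives misses. DriveType 3 = local disk.
Get-WmiObject -ComputerName 'SQLNODE1' -Namespace root\CIMV2 -Class Win32_Volume |
    Where-Object { $_.DriveType -eq 3 } |
    Select-Object Name,
        @{n='CapacityGB'; e={ [math]::Round($_.Capacity / 1GB, 1) }},
        @{n='FreeGB';     e={ [math]::Round($_.FreeSpace / 1GB, 1) }},
        @{n='PctFree';    e={ [math]::Round(100 * $_.FreeSpace / $_.Capacity, 1) }} |
    Sort-Object PctFree |
    Format-Table -AutoSize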
CEWII