May 31, 2018 at 11:32 am
Hi there,
Is there a disadvantage to having the filegroups of a database on different physical drives? My database is very large, I do a lot of computations, and I often deal with wide tables that I reshape into tall tables. Right now I have all the NDF files on one physical drive and the log files on another, but I am running out of space. I was not sure whether there are any pros/cons to spreading them across different physical drives.
Thank you
MR
May 31, 2018 at 11:43 am
Not at all. Usually databases perform better with more drives (more spindles, really).
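For the immediate space problem, the usual fix is just to add another data file on a new drive. A minimal sketch, with hypothetical database/file names and paths:

```sql
-- A minimal sketch (hypothetical names/paths): add a data file on a new
-- physical drive to an existing filegroup. SQL Server's proportional-fill
-- algorithm will favor the emptier file for new writes.
ALTER DATABASE MyBigDb
ADD FILE
    (NAME = 'MyBigDb_Data2',
     FILENAME = 'E:\SQLData\MyBigDb_Data2.ndf',
     SIZE = 50GB,
     FILEGROWTH = 5GB)
TO FILEGROUP [PRIMARY];
```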
SQL DBA, SQL Server MVP (07, 08, 09). "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies."
May 31, 2018 at 11:48 am
But would multiple logical drives on a single physical drive, versus separate physical drives, have any adverse effect?
May 31, 2018 at 11:58 am
rash3554 - Thursday, May 31, 2018 11:48 AM: But would multiple logical drives on a single physical drive, versus separate physical drives, have any adverse effect?
More logical drives on the same physical drive? That depends. If the drive can sustain the I/O rate, then more logical drives could still help, because of the way SQL Server (really Windows) uses drive letters and queues activity per drive. Separate drive letters would still help I/O, provided the single physical drive can keep up.
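To tell whether a drive is keeping up, per-file stall times are a decent proxy. A sketch (figures are cumulative since the last restart):

```sql
-- Average I/O stall per read/write, per database file. Consistently high
-- stalls suggest the underlying drive can't sustain the I/O rate.
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY avg_write_stall_ms DESC;
```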
May 31, 2018 at 1:14 pm
Is the disk provisioned from a SAN? If so, it's likely all from the same "physical drives" anyway regardless of what the Windows OS sees.
If you have multiple logical drives on the same physical drive, then you are basically forcing all reads and all writes to be random. You won't get much sequential I/O, as the disk heads will keep seeking to different parts of the disk to satisfy I/O for the different logical drives.
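If you want to see which Windows volumes your files actually sit on, something like this works on SQL Server 2008 R2 SP1 and later (it can't see past the volume into the SAN, though):

```sql
-- Map each database file to the volume mount point Windows sees.
SELECT  DB_NAME(mf.database_id) AS database_name,
        mf.name                 AS logical_file_name,
        mf.type_desc            AS file_type,   -- ROWS or LOG
        mf.physical_name,
        vs.volume_mount_point
FROM    sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs
ORDER BY vs.volume_mount_point, database_name;
```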
June 1, 2018 at 7:11 am
Thanks ScottPletcher and Chris. I understand it much better now.
Chris, I checked with IT and it is on a SAN.
June 1, 2018 at 8:14 am
With the size of modern drives, and the number of files on each drive, I don't know that you truly get a stationary head sitting there waiting for the next sequential write anyway, even if the drive is dedicated to, say, just log files. Every DB that has any modification activity going on is writing to its own log file, and those writes must be confirmed committed, i.e., the actual I/O must occur, before the transaction can complete.
The idea of the sequential writes originated a long time back, when drives were much smaller and therefore held far fewer files. I'm not sure it still holds today, although I don't claim to be a disk expert at all.
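Either way, it's easy enough to check whether transactions are actually waiting on those log-write confirmations. A rough sketch (cumulative since the last restart):

```sql
-- High WRITELOG wait time means transactions are stalling on
-- transaction-log flushes, i.e. the log drive may be the bottleneck.
SELECT  wait_type,
        waiting_tasks_count,
        wait_time_ms,
        wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM    sys.dm_os_wait_stats
WHERE   wait_type = 'WRITELOG';
```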
June 1, 2018 at 10:26 am
rash3554 - Friday, June 1, 2018 7:11 AM: Thanks ScottPletcher and Chris. I understand it much better now. Chris, I checked with IT and it is on a SAN.
It also depends on which SAN vendor you are using. We have a 12 TB database with 80% read and 20% write activity. Backend storage is all flash; however, the way the vendor has it designed there is a max throughput cap on each LUN, so we have two filegroups, and each filegroup is split into many files, with each file sitting on a dedicated volume pinned to a dedicated LUN. BTW, I hate this SAN architecture... lol. I know there are many other SAN vendors who do not have such a cap.
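For what it's worth, a minimal sketch of that layout, with hypothetical names, sizes, and drive letters (F:, G:, and H: each assumed to map to its own LUN):

```sql
-- One filegroup split into multiple files, each on its own volume/LUN.
-- SQL Server's proportional fill then spreads writes across the files.
ALTER DATABASE BigDb ADD FILEGROUP FG_Data;

ALTER DATABASE BigDb ADD FILE
    (NAME = 'BigDb_Data1', FILENAME = 'F:\Data\BigDb_Data1.ndf',
     SIZE = 100GB, FILEGROWTH = 10GB),
    (NAME = 'BigDb_Data2', FILENAME = 'G:\Data\BigDb_Data2.ndf',
     SIZE = 100GB, FILEGROWTH = 10GB),
    (NAME = 'BigDb_Data3', FILENAME = 'H:\Data\BigDb_Data3.ndf',
     SIZE = 100GB, FILEGROWTH = 10GB)
TO FILEGROUP FG_Data;
```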
June 4, 2018 at 12:11 pm
curious_sqldba - Friday, June 1, 2018 10:26 AM: ... the way the vendor has it designed there is a max throughput cap on each LUN, so we have two filegroups and each filegroup is split into many files, each on a dedicated volume pinned to a dedicated LUN. BTW, I hate this SAN architecture... I know there are many other SAN vendors who do not have such a cap.
You may hate it, but they are probably just making sure you can't saturate your SAN fabric with I/O. When you do that, you'll have plenty of reason to bug them, and they either don't sell a faster fabric or they charge big bucks for it. Not surprising behavior for a vendor, but at least they gave the SAN fabric some thought. Not the best solution by any means, but better than nothing. Just be aware that SAN fabric links can be as slow as 1 gigabit and are often grossly inadequate for serious disk I/O. After all, a modern SATA interface alone runs at 6 Gb/sec, PER drive. Now put a bunch of those drives into a single piece of SAN hardware and try to funnel all that I/O through a single gigabit connection... yeah, that's a good idea... NOT! That's the primary bottleneck with SANs: the network connections are just not fast enough until you reach at least 10 Gb/sec with multi-pathing, and that's not cheap.
Steve (aka sgmunson) 🙂 🙂 🙂
Rent Servers for Income (picks and shovels strategy)
June 4, 2018 at 5:31 pm
sgmunson - Monday, June 4, 2018 12:11 PM: ... SAN fabric links can be as slow as 1 gigabit and are often grossly inadequate for serious disk I/O ... the network connections are just not fast enough until you reach at least 10 Gb/sec with multi-pathing, and that's not cheap.
Our SAN, from the backplane all the way to the switches, is 8 Gb. I can name at least 3 SAN vendors who do not have such limitations at the LUN level; this is based on my own POC, not just what vendors say.
June 5, 2018 at 1:51 pm
curious_sqldba - Monday, June 4, 2018 5:31 PM: Our SAN, from the backplane all the way to the switches, is 8 Gb. I can name at least 3 SAN vendors who do not have such limitations at the LUN level; this is based on my own POC, not just what vendors say.
Understood, but that doesn't change the fact that, even multi-pathed, and assuming a RAID 5 config on the drives within a LUN, you'd be lucky to have more than 2 LUNs going at full-speed I/O at any one point in time. Start accessing 10 or 15 or 20 LUNs at the same time and your I/O slows to a crawl, not because of RAID 5, but just because of the sheer volume of data being written. I'm not trying to defend your SAN vendor, but I do think they get some credit for recognizing and potentially preventing I/O storms.
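The back-of-envelope math, assuming an 8 Gb/sec fabric and roughly 500 MB/sec sustained per flash LUN (both numbers are assumptions, not measurements):

```sql
-- 8 Gb/s / 8 bits-per-byte ~= 1,000 MB/s of usable bandwidth, ignoring
-- protocol overhead; at ~500 MB/s per LUN, about 2 busy LUNs saturate it.
DECLARE @fabric_gbps decimal(9,2) = 8.0;    -- assumed fabric link speed, Gb/s
DECLARE @lun_mbps    decimal(9,2) = 500.0;  -- assumed sustained MB/s per LUN

SELECT  fabric_mb_per_sec  = @fabric_gbps * 1000 / 8,
        luns_at_full_speed = (@fabric_gbps * 1000 / 8) / @lun_mbps;
```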