October 8, 2007 at 11:19 pm
Hi,
We have a database which is around 120 GB and growing rapidly, by around a GB per week.
Currently we have an mdf file of around 1 GB which never grows, and
2 ndf files under the same filegroup which are growing at a tremendous pace.
It's hosted on an HP DL360 with 16 GB RAM and 6 spindles of 146 GB capacity each,
configured as 3 sets of RAID 1+0.
On the first set we have two logical partitions, C:\ (30 GB) and D:\ (110 GB).
The OS and SQL binaries, along with the mdf file which is not growing, are installed on C:\.
On D:\ we have placed tempdb and the transaction log file, i.e. the ldf.
Each of the remaining two sets contains one of the 2 ndf files, which are around 60 GB each
but under the same filegroup.
So far so good. Now, with the database growing so rapidly, we have decided to port it to a server
with 16 GB RAM which has 4 spindles of 146 GB capacity each, configured as 2 sets of RAID 1+0,
and which is connected to a SAN box that has 10 spindles of 500 GB each with fibre connectivity
and a controller cache of 1 GB. It's a Sun storage box.
Now the question.
How do I configure this? I am a bit confused: do I take all 10 disks and configure them as RAID 1+0?
Or do I create 2 sets of RAID 1+0 like I did above and continue placing data files on different
sets and creating some more empty data files?
Which is better?
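(Creating the extra empty files themselves is the easy part; something like the sketch below, where the database name, paths and sizes are all made up. My question is purely about the disk layout underneath.)

-- Sketch only: add an empty ndf to the existing filegroup on a new LUN
ALTER DATABASE MyBigDB
ADD FILE (
    NAME = N'MyBigDB_Data3',                     -- hypothetical logical name
    FILENAME = N'E:\SQLData\MyBigDB_Data3.ndf',  -- drive letter depends on the layout chosen
    SIZE = 10GB,
    FILEGROWTH = 1GB
) TO FILEGROUP [PRIMARY]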
According to the vendors, if I configure the whole 10 disks as one array the data would be scattered
across all of them, and both written and read very fast in terms of speed and performance. I am confused.
What I want to achieve is fast performance in terms of I/Os.
Guys, your inputs would be appreciated.
Prashant
October 9, 2007 at 6:24 am
With a small database I'd go for all disks in RAID 10 and put all the data files on this array. You need the tlog on a separate RAID 1.
Do not create logical drives on your SAN array.
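If the tlog currently shares a drive with data, moving it once the RAID 1 is presented is quick; a sketch (database name, logical file name and paths are made up, and the database must be offline while you copy the ldf):

ALTER DATABASE MyBigDB SET OFFLINE
GO
-- copy MyBigDB_log.ldf to the RAID 1 drive (say L:\SQLLogs) at the OS level, then:
ALTER DATABASE MyBigDB MODIFY FILE
    (NAME = N'MyBigDB_log', FILENAME = N'L:\SQLLogs\MyBigDB_log.ldf')
GO
ALTER DATABASE MyBigDB SET ONLINE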
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
October 9, 2007 at 8:09 am
Go with R10 if you can afford it. Separate logs and data.
October 9, 2007 at 9:58 pm
Rule #1: not all SANs are created equal. Different SANs treat disks very differently (e.g. EMC vs. 3PAR).
What brand & model of SAN are you connecting to?
October 9, 2007 at 11:49 pm
It's a Sun StorageTek 6140 storage box.
October 10, 2007 at 3:03 am
Sun SANs typically inherit the way StorageTek did things before Sun bought STK.
This means you typically get dynamic provisioning for your data. With dp, if you define a LUN attached to a Windows server with a capacity of 100 GB and have 90 GB free space, only 10 GB of space is used in the SAN. As your files grow, more space is allocated in the SAN. The free space is not allocated anywhere - it is only used to show the upper limit of how big the LUN (i.e. drive letter) can grow to.
Also with dp, a disk track in the SAN is never updated in place. If your application issues an UPDATE, a new track of space is allocated in the SAN from the free space pool, your data is written to it, and the old track is put into the free space pool. (This is a simplified explanation...)
This approach means the concepts of RAID 5, RAID 10, etc all get virtualised. You can define a volume as RAID 5 or RAID 10, but the physical implementation with dp is vastly different to what it would be with directly attached storage.
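One practical upshot: with dp only the used portion of your files consumes real SAN capacity, so it is worth knowing how full each file actually is. A quick sketch from inside the database (SQL 2005; size is counted in 8 KB pages, hence the /128 to get MB):

SELECT name,
       size / 128                            AS allocated_mb,  -- space reserved by the file
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS used_mb        -- space actually in use
FROM sys.database_files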
You should ask Sun for advice on how best to exploit the SAN, and follow it.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 10, 2007 at 6:26 am
Don't you just love SANs?
The GrumpyOldDBA
www.grumpyolddba.co.uk
http://sqlblogcasts.com/blogs/grumpyolddba/
October 10, 2007 at 8:03 am
EdVassie, thanks for that... I will explore the same with my Sun vendors.
October 10, 2007 at 9:38 am
And I hate SANs
Take the script I provide and execute it several times on any database. Take the best result (it returns the elapsed time in ms).
On my old laptop I get about 600 ms. On the best PROD server I had 430 ms - but that was with standard RAID drives.
On any EMC SAN system it was a real nightmare: 800, 900, in some cases 1800 ms!
Yes, I know why: because EMC and the other SAN companies like it have very good marketing. They play with throughput numbers, but for SQL Server, especially for OLTP systems, responsiveness (latency) is much more important.
They claim that 'our IO systems are so smart that you can put LOG and FILES on the same drive'. And usually there is no choice.
People who buy servers and hosting listen to that marketing blah-blah-blah, and we DBAs have to deal with high Disk Queue values much later.
If you can, provide the execution times of my script here. Maybe you have a good IO system and I was just unlucky?
---------------------
set nocount on
GO
-- scratch table for the timing test
create table _Ptest (n int identity, k int not null, v varchar(128))
GO
insert into _Ptest (k,v) select 1, 'this is a test'
GO
declare @t datetime, @n int
set @t = getdate()
set @n = 16
-- double the row count 16 times (65,536 rows in total)
while @n > 0 begin
    insert into _Ptest (k,v) select k+n, v from _Ptest
    set @n = @n - 1
end
checkpoint  -- force the dirty pages to disk so write speed is included
select datediff(ms, @t, getdate())  -- elapsed time in ms
GO
drop table _Ptest
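As a less intrusive companion to the script, SQL 2005 also keeps cumulative per-file I/O stall times, so you can get the average latency per read and per write since startup; a rough sketch:

SELECT DB_NAME(database_id) AS db,
       file_id,
       io_stall_read_ms  / NULLIF(num_of_reads, 0)  AS avg_ms_per_read,
       io_stall_write_ms / NULLIF(num_of_writes, 0) AS avg_ms_per_write
FROM sys.dm_io_virtual_file_stats(NULL, NULL)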
October 10, 2007 at 9:57 am
One of the biggest problems with SANs is that the DBA and Storage Admin people hardly ever talk to each other, and often company politics prevents them from doing so.
The next biggest problem is that the Storage Admin folks often do not have the tools they need to identify SAN capacity problems, especially data access hotspots.
The DBA and Storage Admin people NEED to talk regularly to each other. They should report to the same management structure. This would help in getting the databases in the optimal place in the SAN, and help build the case for good tools for the Storage Admin folks. It is too easy to blame the SAN for bad performance, when the real culprit is politics.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 10, 2007 at 10:54 am
Ed's comments on the specific behaviors of a Sun SAN are pretty much what I was after/hinting at.
All SANs are NOT created equal, and the specific SAN (and fabric) in use will dictate the best way to utilize the SAN in question. In the old days, particularly with EMC SANs, it was important to map specific drives within the SAN to specific uses. However, many newer SANs, including Sun (really LSI), 3PAR, etc., use aggregate volumes in which the RAID level is really nothing more than a QoS setting (e.g. treat this bunch of data as if it were on a higher-performance RAID set than that other set of data). All of the data is really spread across all of the available spindles in the array anyway, which is usually great for performance, as long as the number of IOPS for the SAN as a whole is within the performance envelope of the "head" and attached disks.
One thing to watch for is an impedance mismatch between the capabilities of the "head" and the number of drives in the array - you can buy a higher-end head that is capable of hundreds of thousands of IOPS and then cripple it by not putting in enough disks to support that level of performance.
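If you want a rough number for what your databases currently demand (to compare against the head's envelope), you can sample SQL 2005's cumulative file stats twice; a sketch:

DECLARE @r1 bigint, @w1 bigint, @r2 bigint, @w2 bigint
SELECT @r1 = SUM(num_of_reads), @w1 = SUM(num_of_writes)
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
WAITFOR DELAY '00:01:00'   -- sample over one minute of representative load
SELECT @r2 = SUM(num_of_reads), @w2 = SUM(num_of_writes)
FROM sys.dm_io_virtual_file_stats(NULL, NULL)
SELECT (@r2 - @r1) / 60 AS read_iops,
       (@w2 - @w1) / 60 AS write_iops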
Joe
October 10, 2007 at 11:50 am
EdVassie is right again. I have always been lucky in a number of ways: usually Hitachi or EMC SANs to work with and, more importantly, reasonable and open SAN Administrators. Two additional suggestions for you on this subject that have not been mentioned.
First, SQL Server reads extents (8 pages, or 64 KB) and writes pages (8 KB). This is something your SAN Administrator can match with the 'blocking' (block/stripe size, not SPIDs) configured on the disks to help make things more efficient.
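The same sys.dm_io_virtual_file_stats view mentioned earlier in the thread can also show the average I/O size SQL Server is actually issuing per file (a SQL 2005 sketch; scan-heavy reads will tend towards the 64 KB extent size or larger):

SELECT DB_NAME(database_id) AS db,
       file_id,
       num_of_bytes_read    / NULLIF(num_of_reads, 0)  AS avg_bytes_per_read,
       num_of_bytes_written / NULLIF(num_of_writes, 0) AS avg_bytes_per_write
FROM sys.dm_io_virtual_file_stats(NULL, NULL)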
Second, within Windows Disk Administrator you have the choice, when your SAN disks are attached and made available to Windows, to make them 'Basic' disks or 'Dynamic' disks.
'Basic' disks cannot be expanded on the fly. In other words, if you have a 100 GB LUN and you want to make it 200 GB on the SAN, you can, but Windows will only know about the first 100 GB. So to expand it in Windows you need to present a different drive with your 200 GB and then copy the 100 GB drive's data to the 200 GB drive. Yes, it is a pain.
'Dynamic' disks can be grown on the fly. Sounds like the way to go, doesn't it?
Here is the gotcha, though: if you are ever going to cluster the database server then you must have 'Basic' disks.
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
October 11, 2007 at 4:56 am
Just to add a bit to Rudy's reply... You need to avoid Dynamic disks in any situation where the disks are put offline and online. This is because all Dynamic disks are registered in the Dynamic Disk Database (a component of Windows, probably hosted using Jet), and the logic around this easily gets upset and can prevent your disk coming back online until you reboot.
This obviously applies to clusters, but equally applies if you use the Scalable Shared Database feature (see BOL) or any other technique where disks go offline and online.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
October 11, 2007 at 1:01 pm
Thanks Ed! Not a day goes by when I do not learn something new from this forum!
Regards
Rudy Komacsar
Senior Database Administrator
"Ave Caesar! - Morituri te salutamus."
October 12, 2007 at 3:15 am
And a bit more, slightly off topic... If you use Dynamic disks, then you must also have the 'Bypass traverse checking' Windows right. This is because the root point for authority checking with Dynamic disks is the Dynamic Disk database, NOT the drive letter. Therefore, if you do not have this Windows right and have no explicit authorities at the drive letter level (having user authorities at drive letter level is normally bad practice), you cannot access any folders on the drive even if you have full rights to them!
If you have the btc right, then authority checking is able to skip the drive letter level and let you access your folders. Typically the btc right should be given to the \\servername Users group, so that anyone with rights to access the machine can access the data to which they have permissions. This issue does not occur with Basic disks.
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara