January 7, 2008 at 2:35 pm
Capacity planning.
I have a database of over 100 GB. Can anybody help and advise on secondary filegroups?
How many GB should I assign to the second or third filegroup?
Enclosed is my T-SQL script:
Create database SQLDev01
ON PRIMARY
(Name= 'SQLDev01',
Filename = 'D:\SQL_Data\SQLDev01.mdf',
Size= 120000MB,
Maxsize= 200000MB,
Filegrowth= 50MB)
LOG ON
(Name= 'SQLDev01_log',
Filename= 'E:\SQL_Log\SQLDev01_log.ldf',
Maxsize= 30000MB,
Filegrowth= 20MB)
GO
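Something like this is what I have in mind for the second filegroup (the F: drive, the file name and the sizes are only placeholders; the sizes are what I am not sure about):
-- Example only: a second filegroup and file; drive letter and sizes are placeholders
ALTER DATABASE SQLDev01 ADD FILEGROUP FG2
GO
ALTER DATABASE SQLDev01
ADD FILE
(Name= 'SQLDev01_FG2',
Filename= 'F:\SQL_Data\SQLDev01_FG2.ndf',
Size= 20000MB,
Maxsize= 50000MB,
Filegrowth= 50MB)
TO FILEGROUP FG2
GO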
January 8, 2008 at 1:23 am
Why do you want to add another filegroup? Do you experience any problems? Do you want to create a read-only filegroup to shorten maintenance times?
Just because your database is 100 GB is not really a reason to add extra filegroups. Maybe it's better to add one or more files to your existing filegroup.
While it might be useful to add one or more extra filegroups, it only makes sense when it's combined with correct placement of tables and indexes.
So tell us more about what you hope to achieve.
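For example, adding a second file to your existing PRIMARY filegroup would look something like this (the file name, path and sizes here are just an example):
-- Example only: a second data file in the existing PRIMARY filegroup
ALTER DATABASE SQLDev01
ADD FILE
(Name= 'SQLDev01_2',
Filename= 'D:\SQL_Data\SQLDev01_2.ndf',
Size= 20000MB,
Filegrowth= 50MB)
TO FILEGROUP [PRIMARY]
GO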
Markus Bohse
January 8, 2008 at 9:08 am
Markus,
I reviewed Kimberly Tripp's webcast.
She suggested creating filegroups for databases > 100 GB.
What is your advice on best practice?
Also, this database is going to be used for replication.
Thank you for your help and advice.
January 8, 2008 at 9:22 pm
With a large database (>100GB) you probably need more than one filegroup but need to think through what you're trying to accomplish when creating your filegroups. There are several things to consider:
(1) Are you fully utilizing all of the available I/O paths on the machine? One common strategy is to separate data and indexes (not including clustered indexes) onto separate filegroups on separate I/O paths, which allows SQL Server to use parallelism for most/many queries (see the sketch after this list).
(2) Another strategy is to separate "static" and "volatile" data into different filegroups for ease of backup and restore.
(3) There is some argument that even with a limited number of I/O paths (e.g. 1) there should be at least one filegroup per CPU/core, which strikes me as somewhat suspect in an I/O-bound system, but... I'll let others comment on this one. There is, however, no argument that in a perfect world where you have multiple I/O paths available there is a substantial benefit to having as many filegroups as you have CPUs/cores and/or I/O paths (e.g. with 4 cores and 4 I/O paths, you would put a filegroup on each I/O path).
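As a rough sketch of point (1), assuming a filegroup named INDEXES_FG has already been created on a separate I/O path, rebuilding a nonclustered index onto it would look roughly like this (the table, column and index names are made up):
-- Rebuild an existing nonclustered index onto a separate filegroup (names are examples only)
CREATE NONCLUSTERED INDEX IX_TableA_Col1
ON dbo.TableA (Col1)
WITH (DROP_EXISTING = ON)
ON INDEXES_FG
GO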
J
January 9, 2008 at 1:35 am
I totally agree with Joe that you first need to determine what you're trying to achieve by using more than one filegroup. The placement of the files / filegroups on the I/O subsystem is also important. If you use a SAN, are the filegroups placed on different LUNs? With local storage, how many physical drives do you have?
About point 3 in Joe's reply, I think that recommendation means you should have one file per CPU, not one filegroup. At least that's the recommendation for tempdb.
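For tempdb on a 4-core box that would mean adding data files roughly like this (the sizes and paths are only an example):
-- Example only: extra tempdb data files, one per core on a 4-core box
ALTER DATABASE tempdb ADD FILE (Name= 'tempdev2', Filename= 'D:\SQL_Data\tempdev2.ndf', Size= 1024MB, Filegrowth= 256MB);
ALTER DATABASE tempdb ADD FILE (Name= 'tempdev3', Filename= 'D:\SQL_Data\tempdev3.ndf', Size= 1024MB, Filegrowth= 256MB);
ALTER DATABASE tempdb ADD FILE (Name= 'tempdev4', Filename= 'D:\SQL_Data\tempdev4.ndf', Size= 1024MB, Filegrowth= 256MB);
GO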
Markus Bohse
January 9, 2008 at 8:46 am
I am not sure if that is the practice, but the 3rd comment usually applies to tempdb.
I would look at your table sizes, find out which tables are the largest, and try to move them to a different file or filegroup, depending on the needs the poster specified earlier.
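Something like this will list the largest tables (just one way to do it in SQL 2005):
-- List tables by reserved space, biggest first
SELECT OBJECT_NAME(object_id) AS table_name,
SUM(reserved_page_count) * 8 / 1024 AS reserved_mb
FROM sys.dm_db_partition_stats
GROUP BY object_id
ORDER BY reserved_mb DESC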
sopheap
January 10, 2008 at 3:01 pm
Joe Clifford (1/8/2008)
With a large database (>100GB) you probably need more than one filegroup but need to think through what you're trying to accomplish when creating your filegroups. There are several things to consider: (1) Are you fully utilizing all of the available I/O paths on the machine? One common strategy is to separate data and indexes (not including clustered indexes) onto separate filegroups on separate I/O paths, which allows SQL Server to use parallelism for most/many queries.
(2) Another strategy is to separate "static" and "volatile" data into different filegroups for ease of backup and restore.
(3) There is some argument that even with a limited number of I/O paths (e.g. 1) there should be at least one filegroup per CPU/core, which strikes me as somewhat suspect in an I/O-bound system, but... I'll let others comment on this one. There is, however, no argument that in a perfect world where you have multiple I/O paths available there is a substantial benefit to having as many filegroups as you have CPUs/cores and/or I/O paths (e.g. with 4 cores and 4 I/O paths, you would put a filegroup on each I/O path).
J
I believe in number 3 it should be data files, not filegroups.
January 15, 2008 at 8:55 am
Many thanks and appreciation to Joe, Markus, Sopheap, and Edogg for their valuable advice and input.
My SQL Server 2005 instance is running on a Windows Server 2003 box that has an 8-port SATA RAID controller.
There are 4 x 500 GB drives. The O/S is configured as:
C = 50 GB
D = 681 GB
E = 200 GB
The SQLDev02 database is over 100 GB in size. Some tables are over 10 GB:
Table A = 14 GB ++
Table B = 13 GB ++
Table C = 12 GB ++
so on...
What I'm trying to accomplish:
1. Performance
2. Easier management of the huge tables
January 15, 2008 at 12:02 pm
Given that you have only two drives, I doubt that separating tables into different filegroups is going to buy you anything in terms of "performance". Recovery time is probably a different issue, but you have not mentioned it as part of your goals.
* Noel
January 15, 2008 at 12:10 pm
Hmm...
Any idea of how your 4 drives are configured under the hood? Based on the numbers I suspect that you've got RAID 10 (or 1+0, depending on the vendor) going (e.g. 2 striped drives, mirrored), which would give you roughly a TB that has then been sliced up by creating multiple partitions on top of the mirror?
How many physical controllers are in the machine? I would suspect you've got only one, unless you're using O/S mirroring?
Joe
January 15, 2008 at 12:40 pm
It is tough that you only have 2 disks. I assume that one of the disks holds data and backups and the other one holds the log. Because you probably have only one controller, it doesn't help much, even though I understand your need to place tables into different files or filegroups. It probably still makes sense to move the big tables into a different file or filegroup so that they don't mix with the other tables... maybe this will give you a few percent more query performance, but I doubt that you will reap the benefit you were looking for in the beginning.
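If you do try it, moving a table generally means rebuilding its clustered index on the target filegroup, roughly like this (the filegroup, table, index and column names are only examples):
-- Rebuild the clustered index on another filegroup to move the table's data (names are examples)
-- (this also works for an index that backs a primary key, as long as the keys and uniqueness stay the same)
CREATE CLUSTERED INDEX CIX_TableA
ON dbo.TableA (TableAID)
WITH (DROP_EXISTING = ON)
ON FG2
GO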
sopheap
January 15, 2008 at 1:18 pm
Joe,
This server is hosted outside our company domain. The server configuration was proposed by our Senior DBA and IT hosting teams.
I'm not sure how many physical controllers are in this machine.
Since separating the tables won't do much good, I will drop this issue.
Thank you for your kind advice; it was a valuable lesson.
January 15, 2008 at 2:56 pm
Just one additional comment: if you're responsible for that database server, be sure to back up everything, including system and user databases, religiously. If my assumptions about the RAID 10 configuration are correct and something goes wrong (e.g. you lose more than one drive, or the RAID controller), you're going to lose the entire machine, including all of the partitions (O/S, etc.), and have to rebuild from bare metal, which is never fun or fast.
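At its simplest that's just scheduled full backups of everything to somewhere off that RAID set, something like this (the UNC path is only a placeholder):
-- Example only: full backups to a location off the local RAID set (path is a placeholder)
BACKUP DATABASE master TO DISK = '\\BackupServer\SQLBackups\master.bak' WITH INIT
BACKUP DATABASE msdb TO DISK = '\\BackupServer\SQLBackups\msdb.bak' WITH INIT
BACKUP DATABASE SQLDev02 TO DISK = '\\BackupServer\SQLBackups\SQLDev02.bak' WITH INIT
GO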
Joe