January 25, 2009 at 4:01 pm
Hi,
Could you please tell me how to create secondary data files in SQL Server 2005? We have databases around 30GB in size and we are planning to split them into multiple files; we would also like to create multiple secondary log files.
Can we take individual filegroup backups? And can we also restore individual filegroups?
Could you provide the steps to create multiple secondary data files and log files, and the steps to take and restore individual filegroup backups?
Thanks for your help
January 26, 2009 at 2:34 am
>>how to create secondary data files in SQL Server 2005?
-- Create a new file group
ALTER DATABASE test_filegroup
ADD FILEGROUP NonClustIndexes
go
-- Add a file to the filegroup; we can now use the filegroup to store data
ALTER DATABASE test_filegroup
ADD FILE (
NAME = NonClustIndexes,
FILENAME = 'E:\MSSQL\DEV\NonClustIndexes.ndf',
SIZE = 5MB,
MAXSIZE = 100MB,
FILEGROWTH = 5MB
)
TO FILEGROUP NonClustIndexes
go
Code snippet from Tony R's post.
>>Can we take individual filegroup backups? And can we also restore individual filegroups?
Yes & Yes. You need to read this post. http://support.microsoft.com/kb/281122
--Back up specific files by logical file name.
BACKUP DATABASE Sales
FILE = 'SGrp1Fi2',
FILE = 'SGrp2Fi2'
TO DISK = 'G:\SQL Server Backups\Sales\SalesGroup1.bck'
GO
--Back up entire filegroups by name (SalesGroup1 and SalesGroup2).
BACKUP DATABASE Sales
FILEGROUP = 'SalesGroup1',
FILEGROUP = 'SalesGroup2'
TO DISK = 'C:\MySQLServer\Backups\Sales\SalesFiles.bck'
GO
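--Restore specific files and a filegroup from backup set 9 on the MyDatabaseBackups backup device, leaving the database in the RESTORING state.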
RESTORE DATABASE MyDatabase
FILE = 'MyDatabase_data_1',
FILE = 'MyDatabase_data_2',
FILEGROUP = 'new_customers'
FROM MyDatabaseBackups
WITH
FILE = 9,
NORECOVERY;
GO
-- Restore the log backups.
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 10,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 11,
NORECOVERY;
GO
RESTORE LOG MyDatabase
FROM MyDatabaseBackups
WITH FILE = 12,
NORECOVERY;
GO
--Recover the database:
RESTORE DATABASE MyDatabase WITH RECOVERY;
GO
These might help.
http://technet.microsoft.com/en-us/library/ms186858.aspx#restoring_using_FILE_n_FG
http://technet.microsoft.com/en-us/library/ms179401.aspx
[font="Courier New"]Sankar Reddy | http://SankarReddy.com/[/url][/font]
January 26, 2009 at 3:07 am
There are three things to consider with this. You should read what Books Online (BOL) has to say about all of these, as well as the posts in this thread.
1) Use of multiple file groups
2) Use of multiple data files within a file group
3) Use of multiple log files
Use of multiple file groups
The main use of multiple file groups is to physically segment your database so that portions of it can be managed differently. Often, files belonging to a given file group are placed on different disks to files in other filegroups. Typical segmentation strategies are:
a) Fact tables in filegroupA, dimension tables in filegroupB, aggregate tables in filegroupC.
b) Business-critical tables in filegroupZ, less important tables in filegroupY, archive tables in filegroupX.
Multiple filegroups are also used with partitioned tables, which lets you manage the filegroups associated with different partitions in different ways.
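As a small sketch of strategy (b) above (the table, column and filegroup names are made up for illustration), a table can be placed on a specific filegroup when it is created:
-- Hypothetical names: place a business-critical table on filegroupZ.
-- The filegroup must already exist and contain at least one file.
CREATE TABLE dbo.CriticalOrders
(
    OrderID   int           NOT NULL PRIMARY KEY,
    OrderDate datetime      NOT NULL,
    Amount    decimal(18,2) NOT NULL
) ON filegroupZ;
GO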
You can back up and recover each filegroup separately, or they can all be included in a normal database backup. For a large database, using filegroup backups allows you to run multiple backup jobs at the same time, decreasing the overall time needed to run the backup.
However, before moving to filegroup backups you should look at using multiple files in a normal database backup. At my old place, a Litespeed backup of a ~300GB database using 1 file took over 12 hours to run, but when I told Litespeed to use 4 threads to 4 backup files it took under 90 minutes to run.
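Litespeed has its own syntax for that, but the same idea is available in native T-SQL by striping a single backup across several files; a rough sketch with placeholder paths:
-- Placeholder paths: one backup striped across four files, written in parallel.
BACKUP DATABASE Sales
TO DISK = 'G:\Backups\Sales_1.bak',
   DISK = 'H:\Backups\Sales_2.bak',
   DISK = 'I:\Backups\Sales_3.bak',
   DISK = 'J:\Backups\Sales_4.bak'
WITH INIT;
GO
Note that all four files are needed to restore a backup striped like this.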
Use of multiple files in a filegroup
This is done for performance reasons only. You need to have a disk subsystem that can handle the increased I-O load that will result from multiple files in a filegroup. If you have multiple files in a filegroup, they should all be the same size and should all allow zero growth. This allows SQL Server to use a 'proportional fill' algorithm to balance I-O load across the files. If you allow growth on these files, a table will effectively be pinned to that file, and if that file cannot grow you will get an error, regardless of whether there is space in the other files.
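A minimal sketch of that layout, with made-up database, filegroup and path names (two equally sized files with growth disabled):
-- Hypothetical names and paths: two equally sized files, growth disabled,
-- so proportional fill stays balanced across them.
ALTER DATABASE Sales ADD FILEGROUP DataFG;
GO
ALTER DATABASE Sales
ADD FILE
    (NAME = DataFG_1, FILENAME = 'E:\MSSQL\Data\DataFG_1.ndf', SIZE = 10GB, FILEGROWTH = 0),
    (NAME = DataFG_2, FILENAME = 'F:\MSSQL\Data\DataFG_2.ndf', SIZE = 10GB, FILEGROWTH = 0)
TO FILEGROUP DataFG;
GO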
Use of multiple log files
There is no benefit to be gained from using multiple log files. The main use of multiple log files is if you do not have enough space on the original disk drive to cope with your log file size. In this case a second file can be created on another disk.
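If you do end up in that situation, adding a second log file on another drive is a one-statement change; a sketch with placeholder names and paths:
-- Hypothetical names and paths: add a second transaction log file on another drive.
ALTER DATABASE Sales
ADD LOG FILE
    (NAME = Sales_log2,
     FILENAME = 'G:\SQLLogs\Sales_log2.ldf',
     SIZE = 1GB,
     FILEGROWTH = 512MB);
GO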
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara
January 26, 2009 at 11:22 am
Thank You,
We have a database wss_content; its primary data file is located at E:\Microsoft SQL Server\MSSQL.1\MSSQL\Data\wss_content.mdf and its log file at F:\SQLLogs\wss_content_log.ldf.
The database size is 30GB. Now, how can I create a secondary data file?
If I create a secondary file, how is the data from the first .mdf file distributed to the secondary data file?
The reason we are considering splitting the large .mdf file into multiple files is that if the large .mdf file is corrupted, we will lose all the data. If we have multiple files and one of the files is corrupted, then we have a chance of losing less data. This is what I understood.
Correct me if I am wrong.
January 26, 2009 at 11:49 am
madhu.arda (1/26/2009)
If the large .mdf file is corrupted, we will lose all the data. If we have multiple files and one of the files is corrupted, then we have a chance of losing less data. This is what I understood.
Um, no.
Data loss in the case of corruption is not a valid reason to consider splitting the filegroup up. The way to ensure that you do not lose data when the DB gets corrupted is to have regular backups, a tested backup and restore strategy and regular database integrity checks.
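As a minimal sketch of the backup and integrity-check parts (the restore side still has to be tested separately, and the backup path here is just a placeholder):
-- Regular integrity check for the database mentioned earlier in the thread.
DBCC CHECKDB (wss_content) WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO
-- Regular full backup; the path is a placeholder.
BACKUP DATABASE wss_content
TO DISK = 'G:\Backups\wss_content_full.bak'
WITH INIT, CHECKSUM;
GO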
If you have 4 files in the primary filegroup and one of those files is damaged to the point where it's completely unreadable, the entire database will still be offline, because it's part of the primary filegroup and if any part of the primary filegroup is lost, the entire database will be offline.
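If you want to see which files belong to which filegroup in a database, a quick query run in that database will show you; a sketch:
-- Lists each file of the current database and the filegroup it belongs to
-- (the log file shows a NULL filegroup).
SELECT fg.name AS filegroup_name,
       df.name AS logical_file_name,
       df.physical_name,
       df.size * 8 / 1024 AS size_mb
FROM sys.database_files AS df
LEFT JOIN sys.filegroups AS fg
    ON df.data_space_id = fg.data_space_id;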
I don't know if WSS can use multiple filegroups. Have you checked the SharePoint documentation?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 26, 2009 at 12:30 pm
Thanks Gail,
I just checked the SharePoint documentation at http://support.microsoft.com/kb/932744/.
I have a question about rebuilding an index.
From that link, I did not understand whether I can rebuild the indexes for SharePoint databases.
I am using the following script to reorganize/rebuild indexes for SharePoint databases weekly.
Could you please advise me if I am doing the wrong thing? I have been doing this for the past month. How will this affect SharePoint databases? If anything happens, will Microsoft support us? They also said the rebuild index task is corrected in SP2; does that mean that if we have SP2 or higher we can run the rebuild index task?
Should I only reorganize indexes instead?
From yesterday onward we have SP3 on all production boxes.
Please clarify....
-- Ensure a USE statement has been executed first.
SET NOCOUNT ON;
DECLARE @objectid int;
DECLARE @indexid int;
DECLARE @partitioncount bigint;
DECLARE @schemaname nvarchar(130);
DECLARE @objectname nvarchar(130);
DECLARE @indexname nvarchar(130);
DECLARE @partitionnum bigint;
DECLARE @partitions bigint;
DECLARE @frag float;
DECLARE @command nvarchar(4000);
-- Conditionally select tables and indexes from the sys.dm_db_index_physical_stats function
-- and convert object and index IDs to names.
SELECT
object_id AS objectid,
index_id AS indexid,
partition_number AS partitionnum,
avg_fragmentation_in_percent AS frag
INTO #work_to_do
FROM sys.dm_db_index_physical_stats (DB_ID(), NULL, NULL , NULL, 'LIMITED')
WHERE avg_fragmentation_in_percent > 10.0 AND index_id > 0;
-- Declare the cursor for the list of partitions to be processed.
DECLARE partitions CURSOR FOR SELECT * FROM #work_to_do;
-- Open the cursor.
OPEN partitions;
-- Loop through the partitions.
WHILE (1=1)
BEGIN;
    FETCH NEXT
    FROM partitions
    INTO @objectid, @indexid, @partitionnum, @frag;
    IF @@FETCH_STATUS < 0 BREAK;
    SELECT @objectname = QUOTENAME(o.name), @schemaname = QUOTENAME(s.name)
    FROM sys.objects AS o
    JOIN sys.schemas AS s ON s.schema_id = o.schema_id
    WHERE o.object_id = @objectid;
    SELECT @indexname = QUOTENAME(name)
    FROM sys.indexes
    WHERE object_id = @objectid AND index_id = @indexid;
    SELECT @partitioncount = COUNT(*)
    FROM sys.partitions
    WHERE object_id = @objectid AND index_id = @indexid;
    -- 30 is an arbitrary decision point at which to switch between reorganizing and rebuilding.
    IF @frag < 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REORGANIZE';
    IF @frag >= 30.0
        SET @command = N'ALTER INDEX ' + @indexname + N' ON ' + @schemaname + N'.' + @objectname + N' REBUILD';
    IF @partitioncount > 1
        SET @command = @command + N' PARTITION=' + CAST(@partitionnum AS nvarchar(10));
    EXEC (@command);
    PRINT N'Executed: ' + @command;
END;
-- Close and deallocate the cursor.
CLOSE partitions;
DEALLOCATE partitions;
-- Drop the temporary table.
DROP TABLE #work_to_do;
GO
January 26, 2009 at 12:37 pm
madhu.arda (1/26/2009)
How will this affect SharePoint databases? If anything happens, will Microsoft support us? They also said the rebuild index task is corrected in SP2; does that mean that if we have SP2 or higher we can run the rebuild index task?
No idea.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 26, 2009 at 5:03 pm
Hi,
If anybody is experienced with maintaining SharePoint databases in SQL Server 2005, please advise me on rebuilding indexes, as well as the other important things to consider.
Thank you
January 27, 2009 at 2:52 am
A SharePoint database is just a database. You need to do all the normal maintenance that is needed on any database.
The KB article mentioned above confirms that normal DB maintenance is required, and that if you are on SQL 2005 SP2 or above there are no problems in doing index rebuilds.
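If you are not sure what level a given instance is on, something like this will tell you:
-- Returns the version, service pack level (e.g. 'SP2', 'SP3') and edition
-- of the current instance.
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;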
I sometimes see advice that SharePoint databases should be shrunk. Please ignore this advice unless you want to hurt your DB performance. A database should never be shrunk unless you expect a permanent reduction in size. Repeated shrinking and growing of a DB will give you bad NTFS fragmentation that can only be cured by an NTFS defrag. If you expect your database to grow again within 3 months then do not shrink it.
Adding multiple files to improve I-O performance is a key tool in the DBA's toolbox. Microsoft has recently done some benchmarks and found that having about one file for every 1.5 processor cores, up to a maximum of 8 files, can give a useful performance benefit. If you add multiple files to a small database (under 50GB), you will only see performance decline as the database grows. If you wait until the database is over 100 GB then you can add additional files over time to keep performance stable.
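As a rough illustration of that arithmetic only (this is not an official formula), the core count is exposed in sys.dm_os_sys_info:
-- Rough sketch: suggested data-file count = cores / 1.5, capped at 8.
SELECT cpu_count,
       CAST(CASE WHEN CEILING(cpu_count / 1.5) > 8 THEN 8
                 ELSE CEILING(cpu_count / 1.5)
            END AS int) AS suggested_file_count
FROM sys.dm_os_sys_info;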
If your users are happy with performance and your DB size is 30GB then there is no need to add extra files. If they are not happy, the problem is almost certainly something that will not be cured by adding extra files. You need to find the cause of the problem instead of spending time implementing a solution that does not address the problem.
If you are not doing index rebuilds, then running these will have much more impact than using multiple files. If you regularly shrink your database (make sure autoshrink is turned off), then doing an NTFS defrag will have much more impact than adding multiple files. Beware that DB performance is likely to be bad while the NTFS defrag is running, but it will improve afterwards.
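Turning autoshrink off, if it happens to be on, is a one-line change (shown here against the database discussed earlier in the thread):
-- Disables automatic shrinking for the database.
ALTER DATABASE wss_content SET AUTO_SHRINK OFF;
GO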
Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.
When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara