February 18, 2009 at 8:19 am
Hi,
I am supporting a 1 TB database with 4 data files, and autogrowth is set to 100 MB. If I increase the autogrowth to 500 MB on each file, how does it impact my performance and my drive space?
What is the best practice for making changes to improve performance?
Thanks
ichbinraj
February 18, 2009 at 8:27 am
My recommendation would be not to let it autogrow. It is better to do it manually. Keep a monitor in place to find out if it is nearing the threshold, and then grow it manually. That way it can be done at a time when there is not much activity.
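Something along these lines, run on a schedule against the database, is the kind of monitor I have in mind (just a sketch; the 90% threshold is only an example):
-- Sketch: list data files in the current database that are more than 90% full,
-- so you can grow them manually before autogrow has to kick in.
SELECT name AS logical_name,
size / 128.0 AS allocated_mb,
FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
(size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files
WHERE type_desc = 'ROWS'
AND FILEPROPERTY(name, 'SpaceUsed') > size * 0.9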
Just my .0002 cents 😉
-Roy
February 18, 2009 at 8:34 am
Can you give more details on this?
Shall I increase it to 500 MB, or shall I keep it at 100 MB?
Thanks
ichbinraj
February 18, 2009 at 8:41 am
That depends on the rate of growth and the number of transactions it handles. If it is a high-transaction DB, then 500 MB would be better, since it will not have to do another growth event again so soon.
-Roy
February 18, 2009 at 9:09 am
You should know what to do here. What is the growth over the last 3 months? You want to track the size on a regular basis for two reasons. The first is to know how much space you want to add every month or every few months to keep things running smoothly.
The second is so you know if something changes, such as an unusual event or the database suddenly being used more. Tracking space helps you notice this.
The only way to do this is to add xx space and then monitor it. See how long it takes for that space to be used up, and then you'll have an idea of how often and how much space you need to add.
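For example, a small scheduled job that snapshots file sizes into a table (the table below is just an illustration, not a standard) gives you the history to base those decisions on:
-- Illustrative tracking table, e.g. in a DBA/maintenance database
CREATE TABLE dbo.FileSizeHistory
(
capture_date datetime NOT NULL DEFAULT GETDATE(),
database_name sysname NOT NULL,
logical_name sysname NOT NULL,
size_mb decimal(12,2) NOT NULL
)
-- Snapshot the allocated size of every file in every database
INSERT INTO dbo.FileSizeHistory (database_name, logical_name, size_mb)
SELECT DB_NAME(dbid), name, size / 128.0
FROM master.dbo.sysaltfiles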
February 18, 2009 at 9:10 am
500 MB is not a lot for a 1 TB database.
You should have an idea of how much time a growth of that size takes on your hardware.
Hopefully you are taking advantage of instant file initialization in SQL Server 2005.
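If you are not sure whether it is on, one quick way to check (a sketch to run on a test instance, not something to leave enabled) is trace flags 3004 and 3605: create a throw-away database and read the error log, and if you see "Zeroing" messages for the data file then instant file initialization is not being used.
DBCC TRACEON(3004, 3605, -1) -- write file zeroing messages to the error log
CREATE DATABASE IFI_Check    -- throw-away database
EXEC sp_readerrorlog         -- look for 'Zeroing ...' entries for the .mdf file
DROP DATABASE IFI_Check
DBCC TRACEOFF(3004, 3605, -1)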
* Noel
February 18, 2009 at 9:23 am
I made the changes in my database, but I am getting the following alert:
DATE/TIME: 2/18/2009 10:51:11 AM
DESCRIPTION: The SQL Server performance counter 'Data File(s) Size (KB)' (instance 'PEP') of object 'SQLServer:Databases' is now above the threshold of 1310720000.00 (the current value is 1549455360.00).
Any suggestions on how to resolve this?
Thanks
ichbinraj
February 18, 2009 at 9:45 am
It appears that the aggregate amount of space used by all four data files is now greater than 1.5 TB. Which particular file(s) are growing and how full are they?
Below is a script that will give you the space utilized by file. I cannot take credit for this script; I received it second-hand, and I believe the ultimate source is SQLServerCentral.
Ken
SET NOCOUNT ON
declare @dbname1 VARCHAR(256)
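-- Set @dbname1 below to the database you want to report on, or to NULL to report on all databases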
SET @dbname1 = 'KenDEV'
IF OBJECT_ID('tempdb..#DBSTATS') IS NOT NULL
BEGIN
DROP TABLE #DBSTATS
END
CREATE TABLE #DBSTATS (
dbname sysname,
lname sysname,
usage varchar(20),
[size] decimal(9, 2) NULL ,
[used] decimal(9, 2) NULL
)
IF OBJECT_ID('tempdb..#temp_log') IS NOT NULL
BEGIN
DROP TABLE #temp_log
END
CREATE TABLE #temp_log
(
DBName sysname,
LogSize real,
LogSpaceUsed real,
Status int
)
IF OBJECT_ID('tempdb..#temp_sfs') IS NOT NULL
BEGIN
DROP TABLE #temp_sfs
END
CREATE TABLE #temp_sfs
(
fileid int,
filegroup int,
totalextents int,
usedextents int,
name varchar(1024),
filename varchar(1024)
)
DECLARE @dbname sysname
,@sql varchar(8000)
IF OBJECT_ID('tempdb..#temp_db') IS NOT NULL
BEGIN
DROP TABLE #temp_db
END
SELECT name INTO #temp_db
FROM master.dbo.sysdatabases
WHERE DATABASEPROPERTY(name,'IsOffline') = 0
AND has_dbaccess(name) = 1
AND Name = ISNULL(@dbname1, name)
ORDER BY name
WHILE (1 = 1)
BEGIN
SET @dbname = NULL
SELECT TOP 1 @dbname = name
FROM #temp_db
ORDER BY name
IF @dbname IS NULL
GOTO _NEXT
SET @sql = ' USE ' + @dbname + '
TRUNCATE TABLE #temp_sfs
INSERT INTO #temp_sfs
EXECUTE(''DBCC SHOWFILESTATS'')
INSERT INTO #DBSTATS (DBNAME, LNAME, USAGE, [SIZE], [USED])
SELECT db_name(), NAME, ''Data''
, totalextents * 64.0 / 1024.0
, usedextents * 64.0 / 1024.0
FROM #temp_sfs
INSERT INTO #DBSTATS (DBNAME, LNAME, USAGE, [SIZE], [USED])
SELECT db_name(), name, ''Log'', null, null
FROM sysfiles
WHERE status & 0x40 = 0x40'
EXEC(@sql)
DELETE FROM #temp_db WHERE name = @dbname
END
_NEXT:
INSERT INTO #Temp_Log
EXECUTE ('DBCC SQLPERF(LOGSPACE)')
UPDATE #DBSTATS
SET SIZE = B.LogSize
, USED = LogSize * LogSpaceUsed / 100
FROM #DBSTATS A
INNER JOIN #Temp_Log B
ON (A.DBNAME = B.DBNAME)AND(A.Usage = 'LOG')
SELECT dbname AS [database name],
lname AS [logical data name],
usage,
[size] AS [space allocated (MB)],
used AS [space used (MB)],
[size] - used AS [free space (MB)],
cast(used / [size] * 100 AS numeric(9,2)) AS [space used %],
cast(100 - (used / [size] * 100) AS numeric(9,2)) AS [free space %]
FROM #DBSTATS
ORDER BY dbname, usage
DROP TABLE #DBSTATS
DROP TABLE #temp_db
DROP TABLE #temp_sfs
DROP TABLE #temp_log
SET NOCOUNT OFF
February 18, 2009 at 9:51 am
I guess I should have included that script as an attachment :D.
February 18, 2009 at 10:00 am
Hi
Thanks for your information.
My production database information:
Total size of the 4 data files: 867 GB (931,638,804,408 bytes)
on the I: drive
Data1 file = 257 GB
Data2 file = 255 GB
Data3 file = 256 GB
Data4 file = 97.6 GB
Primary = DATA1.mdf
Currently Allocated Space: 263820.00 MB
Available Free Space: 8613.88 MB (3%)
Primary = DATA2.ndf
Currently Allocated Space: 262000.00 MB
Available Free Space: 8368.50 MB (3%)
Primary = DATA3.ndf
Currently Allocated Space: 262660.00 MB
Available Free Space: 8083.38 MB (3%)
Primary = DATA4
Currently Allocated Space: 100000.00 MB
Available Free Space: 69157.38 MB (69%)
This morning's backup size = 781 GB
Please let me know how to configure my database to get the best results from it.
Appreciate your information
February 18, 2009 at 10:19 am
I think you need to get the free space % on files 1-3 up to at least 10% to give yourself some breathing room. If you have the storage capacity (and I hope you do), you should probably get the free space on those three data files >= 30%. Again, this will give you some breathing room, and you can monitor space consumption trends by your database and adjust your sizing strategy accordingly.
That means adding at least 2.5 GB per file, and preferably 5-10 GB per file, on data files 1-3.
I agree with Roy's earlier post stating that manual file growth is preferable. Growing these files in 500 MB increments may lead to serious file fragmentation depending on your storage configuration.
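As a sketch of the manual growth itself (the database and logical file names below are placeholders, so substitute your own; note that SIZE is the new total size of the file, not the amount being added):
-- Placeholder names: grows the first data file from 263820 MB to 273820 MB,
-- i.e. adds about 10 GB in one manual step.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'DATA1', SIZE = 273820MB)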
You may want to use the space usage script and output those results to a Maintenance database to track your space consumption over time.
Also, are you using a compressed backup tool? A 781 GB backup seems very large if you are using a tool like Quest LiteSpeed or Red Gate SQL Backup.
Ken
February 18, 2009 at 12:03 pm
Hi,
How do I increase the space on the files? Is it
DB Properties > Files > select Datafile1 > Initial Size, and then increase it by 5 GB for each data file?
I am planning to increase the autogrowth to 300 MB.
The script I ran is showing me 0 results:
DBCC execution completed. If DBCC printed error messages, contact your system administrator.
database name logical data name usage space allocated (mb) space used (mb) free space (mb) space used % free space %
-------------------------------------------------------------------------------------------------------------------------------- ------------------------------------------------------------------
Please let me know if this is the right way to do it.
Thanks
ichbinraj
February 18, 2009 at 12:14 pm
See BOL topic ALTER DATABASE. Make sure that you specify a maximum size if you enable autogrowth.
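Something along these lines (the names are placeholders):
-- Placeholder names: caps the file at 400 GB so runaway growth cannot fill
-- the drive, and uses a fixed 500 MB autogrowth increment.
ALTER DATABASE [YourDatabase]
MODIFY FILE (NAME = N'DATA1', FILEGROWTH = 500MB, MAXSIZE = 400GB)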
I do not mean to be rude, but how is it that you are administering a 1TB database and have to ask questions about ALTER DATABASE?
Are you in over your head? If so, you may want to come out and ask for help.
Ken
February 18, 2009 at 12:29 pm
You definitely should start using backup compression!
I agree with Ken, a 1 TB DB is a serious deal; get some help.
A sample:
USE master;
GO
ALTER DATABASE YourDatabaseName  -- substitute your database name
MODIFY FILE
(
NAME = Datafile1,  -- logical name of the data file to change
FILEGROWTH = 1GB
);
GO
* Noel
February 18, 2009 at 1:19 pm
Thanks guys,
I am planning to implement LiteSpeed.
Can you suggest any other tools that can help find performance issues at the database and CPU level, with detailed information?
-Ichbinraj