May 9, 2011 at 1:27 pm
I've done some searching on SQL Server Central forums but haven't come across an answer to my question, so here goes....
SQL 2005 Server SP3
Database is compatibility level 80
We monitor disk space and manually grow the database and log files, as we don't want the performance interruptions that are possible with autogrow. Currently the initial size for the db is 100GB and it is at 75% capacity. Autogrow is set; however, we don't let the files fill, so it never kicks in.
Question(s): is there a best practice for when to grow the files? At 90% capacity, for example? Or, since it could autogrow anyway, could you technically wait until 99%? Is there any performance impact in letting the file reach 99% capacity?
Although I've worked as a SQL Operational Admin for a few years now, I still feel like a newbie as there's so much to it!
May 9, 2011 at 1:42 pm
When is not so important (though it can affect index fragmentation). What's more important is how frequently you are growing the data files and by how much. Growing too frequently, even in a controlled manner, can cause issues.
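If you want a quick look at how often (and by how much) the files have actually been growing, something along these lines reads the auto-grow events out of the default trace. Just a sketch - it assumes the default trace is still enabled (it is by default on 2005) and it only covers whatever the trace files still hold.
-- Read auto-grow events from the default trace (assumes the default trace is enabled).
DECLARE @TracePath NVARCHAR(260)

SELECT @TracePath = [path]
FROM sys.traces
WHERE is_default = 1

SELECT te.name                        AS EventName   -- Data File Auto Grow / Log File Auto Grow
     , t.DatabaseName
     , t.[FileName]
     , t.StartTime
     , t.Duration / 1000              AS DurationMS  -- Duration is reported in microseconds
     , ( t.IntegerData * 8 ) / 1024   AS GrowthMB    -- IntegerData = number of 8KB pages added
FROM sys.fn_trace_gettable(@TracePath, DEFAULT) t
INNER JOIN sys.trace_events te
    ON t.EventClass = te.trace_event_id
WHERE te.name IN ('Data File Auto Grow', 'Log File Auto Grow')
ORDER BY t.StartTime DESC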
May 9, 2011 at 2:21 pm
My take on this is to have enough space in the data file for at least 6 months' growth, plus enough to rebuild the largest index in the database. Generally, having at least 6 months' growth will cover the largest index - but not always.
I also recommend using a fixed growth size, and when manually growing the file, growing it in increments of that autogrow size. For example, on my TB+ database I have set the autogrow to 20000MB and will manually grow the file, when needed, in 20000MB increments.
You should also make sure you have instant file initialization turned on, as this will reduce the impact on the system when growing the file.
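To put numbers behind the "largest index" part, a minimal sketch along these lines (run it in the database in question; nothing here is tuned) shows the biggest index by reserved space plus each file's current size, free space, and growth setting:
-- Largest index in the current database, by reserved space.
SELECT TOP (1)
       OBJECT_NAME(ps.[object_id])              AS TableName
     , i.name                                   AS IndexName
     , SUM(ps.reserved_page_count) * 8 / 1024   AS ReservedMB
FROM sys.dm_db_partition_stats ps
INNER JOIN sys.indexes i
    ON i.[object_id] = ps.[object_id]
   AND i.index_id    = ps.index_id
GROUP BY ps.[object_id], i.name
ORDER BY ReservedMB DESC

-- Current size, free space, and growth setting for each file in the database.
SELECT name
     , size / 128                                       AS SizeMB
     , ( size - FILEPROPERTY(name, 'SpaceUsed') ) / 128 AS FreeMB
     , CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS VARCHAR(10)) + ' percent'
            ELSE CAST(growth / 128 AS VARCHAR(10)) + ' MB'
       END                                              AS GrowthSetting
FROM sys.database_files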
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 9, 2011 at 2:24 pm
Thanks for the reply. We have resized the file twice in 7 years.
May 9, 2011 at 2:26 pm
Thanks - I have grown the file twice, once by 20000MB and once by 50000MB. I'll take a look at table sizes and review the initialization setting. I should be able to plan ahead from there, as you suggest.
Thanks again!
May 9, 2011 at 3:35 pm
Kathy Cowens (5/9/2011)
Thanks - I have grown the file twice, once by 20000MB and once by 50000MB. I'll take a look at table sizes and review the initialization setting. I should be able to plan ahead from there, as you suggest. Thanks again!
I would also suggest taking a look at:
http://msdn.microsoft.com/en-us/library/ms175935(v=sql.90).aspx
In SQL Server 2005, data files can be initialized instantaneously. This allows for fast execution of the previously mentioned file operations. Instant file initialization reclaims used disk space without filling that space with zeros. Instead, disk content is overwritten as new data is written to the files. Log files cannot be initialized instantaneously.
Note:
Instant file initialization is available only on Microsoft Windows XP Professional or Windows Server 2003 or later.
Instant file initialization is only available if the SQL Server (MSSQLSERVER) service account has been granted SE_MANAGE_VOLUME_NAME. Members of the Windows Administrator group have this right and can grant it to other users by adding them to the Perform Volume Maintenance Tasks security policy. For more information about assigning user rights, see the Windows documentation.
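If you want to confirm instant file initialization is actually working, one common check (just a sketch - it writes to the error log, so do it on a test instance or during a quiet period; the database name is made up) is to turn on trace flags 3004 and 3605, create a throwaway database, and read the error log: with instant initialization, only the log file shows "Zeroing" messages.
-- Trace flag 3004 reports file zeroing; 3605 sends that output to the error log.
DBCC TRACEON (3004, 3605, -1)

CREATE DATABASE IFI_Check   -- throwaway database, hypothetical name

-- With instant file initialization, only the .ldf should show "Zeroing" messages here.
EXEC xp_readerrorlog

DROP DATABASE IFI_Check

DBCC TRACEOFF (3004, 3605, -1)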
May 9, 2011 at 5:36 pm
It really depends on the size of the database and the rate of growth. I wouldn't get hung up on specific numbers. If a database is growing very slowly, it can sit at 95% with no issues, while another database should probably be grown out when it hits 70%. There's just no sure formula, beyond the need to avoid letting it auto-grow in really small increments or in the middle of a heavy production period, when it can lead to blocking.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 9, 2011 at 7:50 pm
Grant Fritchey (5/9/2011)
It really depends on the size of the database and the rate of growth. I wouldn't get hung up on specific numbers. If a database is growing very slowly, it can sit at 95% with no issues, while another database should probably be grown out when it hits 70%. There's just no sure formula, beyond the need to avoid letting it auto-grow in really small increments or in the middle of a heavy production period, when it can lead to blocking.
Yes, but if you are monitoring the growth rate, you can trend it. This is why I suggested keeping 6 months of growth available: it helps with your capacity planning, so you can grow the LUN, purchase new drives, or plan for new equipment if needed.
If you know your growth rate is 10GB per month and you only have 50GB of space available in the data file and 50GB of additional space available on the drive, you really need to plan on addressing the storage issue, either by implementing an archive/purge process or by purchasing additional storage.
In the above scenario, I know that within 5 months I will need to grow the file. Since I only have 50GB of space available on the drive, and I want to make sure I don't fill the drive completely, I need to purchase additional storage and have it available in no more than 5 months. That might not happen if I don't get it into the budget early enough - and if that happens, we have a serious problem.
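Put another way, the lead time falls straight out of the numbers; a trivial sketch with the figures above (all illustrative):
-- Illustrative figures from the scenario above.
DECLARE @FreeInFileGB INT, @FreeOnDriveGB INT, @GrowthPerMonthGB INT

SELECT @FreeInFileGB = 50, @FreeOnDriveGB = 50, @GrowthPerMonthGB = 10

SELECT @FreeInFileGB / @GrowthPerMonthGB                      AS MonthsUntilFileGrowth  -- 5
     , ( @FreeInFileGB + @FreeOnDriveGB ) / @GrowthPerMonthGB AS MonthsUntilDriveFull   -- 10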
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
May 10, 2011 at 5:43 am
This is for data files, but it would be an easy tweak for logs.
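-- Overview: the script estimates a daily growth rate from the first and last full backup recorded
-- in msdb, joins that to each database's current data-file free space (#dbs) and the drive free
-- space reported by xp_fixeddrives (#Drives), and projects how many days remain before the data
-- file will need to grow. Note that it relies on sp_MSforeachdb and xp_fixeddrives, which are
-- undocumented procedures.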
IF OBJECT_ID('tempdb..#dbs') > 0
DROP TABLE #dbs
IF OBJECT_ID('tempdb..#Drives') > 0
DROP TABLE #Drives
IF OBJECT_ID('tempdb..#Results') > 0
DROP TABLE #Results
CREATE TABLE #dbs
(
DBNAME sysname
, DBID INT
, [Total Size in MB] INT
, [Available Space In MB] INT
, DriveLetter CHAR(1)
)
INSERT INTO
#dbs
(
DBNAME
, DBID
, [Total Size in MB]
, [Available Space In MB]
, DriveLetter
)
EXEC sp_MSforeachdb '
USE [?];
SELECT
DB_NAME() As DBNAME
, DB_ID() AS DBID
, SUM(size / 128) AS ''Total Size in MB''
, SUM(size / 128 - CAST(FILEPROPERTY(name , ''SpaceUsed'') AS int) / 128) AS ''Available Space In MB''
, LEFT(physical_name, 1) AS DriveLetter
FROM
[?].sys.database_files
WHERE
type_desc = ''ROWS''
GROUP BY LEFT(physical_name, 1) '
CREATE TABLE #Drives
(
DriverLetter CHAR(1) PRIMARY KEY CLUSTERED
, FreeMBs INT NOT NULL
, FreeGBs AS CONVERT(DECIMAL(18 , 2) , FreeMBs / 1024.0)
)
INSERT INTO
#Drives ( DriverLetter , FreeMBs )
EXEC xp_fixeddrives
--
--SELECT
-- DB_NAME() As DBNAME
-- , DB_ID() AS DBID
-- , SUM(size / 128) AS 'Total Size in MB'
-- , SUM(size / 128 - CAST(FILEPROPERTY(name , 'SpaceUsed') AS int) / 128) AS 'Available Space In MB'
--FROM
-- sys.database_files
--WHERE
-- type_desc = 'ROWS'
--Rémi : I deleted 4 logging tables I had built on March 25th, hence the ±350 MB drop.
;
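-- CTE_Backups: first and last full backup per database, used to derive a daily growth rate
-- from the change in backup size over the period.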
WITH CTE_Backups ( database_name, BackupDate, MinutesForBackup, GB_backup_size, seqFirst, seqLast )
AS (
SELECT
bs.database_name
, DATEADD(D , 0 , DATEDIFF(D , 0 , bs.backup_start_date)) AS BackupDate
, CONVERT(DECIMAL(18 , 1) , DATEDIFF(s , bs.backup_start_date ,
bs.backup_finish_date)
/ 60.0) AS MinutesForBackup
, CONVERT(DECIMAL(18 , 3) , bs.backup_size / 1024 / 1024 / 1024) AS GB_backup_size
, ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date ) AS seqFirst
, ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date DESC ) AS seqLast
FROM
msdb.dbo.backupset bs
WHERE
name IS NULL
AND bs.[type] = 'D'
)
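-- Project the days remaining until the data file's free space is consumed at the estimated
-- daily growth rate, along with the supporting figures.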
SELECT
CONVERT(INT , dtBackups.[Available Space In GB]
/ CASE WHEN dtBackups.GB_ExpectedDailyGrowth <> 0
THEN dtBackups.GB_ExpectedDailyGrowth
ELSE 0.0001
END) AS DaysUntillDBGrowth
, *
-- INTO
-- #Results
FROM
(
SELECT
a.database_name
, dbs.DriveLetter
, drv.FreeGBs AS FreeGBs_Drive
, CONVERT(DECIMAL(18 , 1) , ( drv.FreeGBs * 0.85 )
/ CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
/ DATEDIFF(dd , a.BackupDate , b.BackupDate) * 30.468)) AS FreeGBs_Drive_InMonths_WithExpected_DB_Growth
, a.BackupDate AS BackupDate_First
, b.BackupDate AS BackupDate_Last
, DATEDIFF(dd , a.BackupDate , b.BackupDate) AS DaysPeriod
, a.MinutesForBackup AS MinutesForBackup_First
, b.MinutesForBackup AS MinutesForBackup_Last
, b.MinutesForBackup - a.MinutesForBackup AS MinutesForBackup_Delta
, a.GB_backup_size AS GB_backup_size_First
, b.GB_backup_size AS GB_backup_size_Last
, b.GB_backup_size - a.GB_backup_size AS GB_BackupGrowth
--, a.seqLast - a.seqFirst AS QtyofBackups
, CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
/ DATEDIFF(dd , a.BackupDate , b.BackupDate)) AS GB_ExpectedDailyGrowth
, CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
/ DATEDIFF(dd , a.BackupDate , b.BackupDate) * 365.256) AS GB_ExpectedAnnualGrowth
, CONVERT(DECIMAL(18 , 3) , dbs.[Total Size in MB] / 1024.0) AS [Total Size in GB]
, CONVERT(DECIMAL(18 , 3) , dbs.[Available Space In MB] / 1024.0) AS [Available Space In GB]
FROM
CTE_Backups a
INNER JOIN CTE_Backups b
ON a.seqFirst = b.seqLast
AND a.seqLast = b.seqFirst
AND a.database_name = b.database_name
INNER JOIN #dbs dbs
ON b.database_name = dbs.DBNAME
INNER JOIN #Drives drv
ON dbs.DriveLetter = drv.DriverLetter
WHERE
a.seqFirst = 1
AND a.seqFirst < a.seqLast
) dtBackups
ORDER BY
database_name
--
--IF EXISTS ( SELECT
-- *
-- FROM
-- #Results R
-- WHERE
-- R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0 AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1
-- OR R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30 )
-- BEGIN
-- INSERT INTO
-- dbo.RPT_Space_Warnings
-- (
-- [DaysUntillDBGrowth]
-- , [Warning_Description]
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- )
-- SELECT
-- [DaysUntillDBGrowth]
-- , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1 THEN 'HD IS FULL' ELSE 'AUTOGROWTH WARNING' END AS Warning_Description
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- FROM
-- #Results R
-- WHERE
-- R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30
--UNION ALL -- I want to see 2 warnings in the same day when it's the case... those are the really critical events.
-- SELECT
-- [DaysUntillDBGrowth]
-- , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1 THEN 'HD IS FULL_' ELSE 'AUTOGROWTH WARNING_' END AS Warning_Description
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- FROM
-- #Results R
-- WHERE
-- R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0 AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth <= 1
-- END
--
--IF OBJECT_ID('tempdb..#dbs') > 0
-- DROP TABLE #dbs
--
--IF OBJECT_ID('tempdb..#Drives') > 0
-- DROP TABLE #Drives
--
--IF OBJECT_ID('tempdb..#Results') > 0
-- DROP TABLE #Results
--
May 10, 2011 at 7:06 am
All excellent replies! Thank you all for taking the time and lending your expertise. I use Idera's SQL Diagnostics Manager which makes it very easy to trend and I have been watching it diligently. I hate to take things for granted so checking with the experts proved to be well worth it!
Thanks again:-)