July 5, 2011 at 6:21 am
The data files for a particular database are growing fast.
Shrinking of files - done
Distributing the files to different locations - done
Anything else I can look at? Can I do something with the options present in SSMS 2008 like Policy Management, Data Collection, Resource Governor, etc.? Please advise.
Thanks in advance
July 5, 2011 at 6:25 am
Stop shrinking your dbs; it'll only hurt you.
The only way to stop a db from growing is to stop inserting data.
The equivalent would be to start purging data, or archiving it somewhere else.
If you're on SQL 2008 Enterprise or 2008 R2 Enterprise you have access to data compression, which can make a heck of a big difference (I've seen up to 70% compression).
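As a rough illustration only (the table name below is a placeholder, not something from your system), you can ask SQL Server to estimate the savings per table before turning anything on:

-- Hedged sketch: estimate PAGE compression savings for one table before applying it.
-- 'dbo.YourBigTable' is a hypothetical name - substitute one of your own large tables.
EXEC sp_estimate_data_compression_savings
      @schema_name      = 'dbo'
    , @object_name      = 'YourBigTable'
    , @index_id         = NULL        -- all indexes
    , @partition_number = NULL        -- all partitions
    , @data_compression = 'PAGE';

-- If the estimate looks worthwhile, applying it would look something like:
-- ALTER TABLE dbo.YourBigTable REBUILD WITH (DATA_COMPRESSION = PAGE);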
July 5, 2011 at 6:30 am
So, are you advising me to go for data compression? If yes, steps please 🙂
P.S. - I am using Enterprise Edition.
July 5, 2011 at 6:34 am
I'd consider it.
http://www.microsoft.com/sqlserver/2008/en/us/compression.aspx
But you also need to plan for the next 6 months to 3 years to make sure you have enough room and budget for expansion!
Keep in mind that compression comes with increased CPU workload and reduced disk workload. Same thing for backup compression.
I have a script to estimate data growth over time if you're interested.
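On the backup compression side, a minimal sketch (database name and path below are placeholders) of taking a compressed full backup and then comparing raw vs. on-disk sizes from msdb:

-- Hedged sketch: compressed full backup; name and path are placeholders.
BACKUP DATABASE [YourDatabase]
TO DISK = N'X:\Backups\YourDatabase_full.bak'
WITH COMPRESSION, CHECKSUM, STATS = 10;

-- Compare raw vs. on-disk size for recent full backups.
SELECT TOP (10)
      bs.database_name
    , bs.backup_finish_date
    , CONVERT(DECIMAL(18,2), bs.backup_size            / 1024 / 1024) AS MB_uncompressed
    , CONVERT(DECIMAL(18,2), bs.compressed_backup_size / 1024 / 1024) AS MB_on_disk
FROM msdb.dbo.backupset bs
WHERE bs.[type] = 'D'
ORDER BY bs.backup_finish_date DESC;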
July 5, 2011 at 9:10 am
Hi Ninja - please share those scripts. TIA 🙂
@All - I have some free space available on other drives, which contain data files with the same names as my database's files, but when I run sp_helpfile or right-click the db I can see that I am no longer using those files.
Now that I am sure I don't need those unnecessary data files, I want to delete them. But when I try to delete those files, it throws an error - the file is in use / make sure the disk is not full or write-protected.
Please advise .
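One way to confirm whether SQL Server still references those physical files is to check sys.master_files across the whole instance (a minimal sketch; the drive letter in the filter is a placeholder):

-- Hedged sketch: list every physical file registered on the instance.
-- Adjust the LIKE filter to the drive/folder that holds the files in question.
SELECT
      DB_NAME(mf.database_id) AS database_name
    , mf.name                 AS logical_name
    , mf.physical_name
    , mf.type_desc
FROM sys.master_files mf
WHERE mf.physical_name LIKE 'E:\%'   -- hypothetical drive to check
ORDER BY database_name, mf.physical_name;

-- If a file does not appear here, SQL Server is not using it; a "file in use" error
-- is then more likely an OS-level lock (another process, antivirus, etc.).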
July 5, 2011 at 9:16 am
There's a little bug here. The size estimates are right, but the delay before running out of HD / data file space will be wrong.
I don't have time to fix it at the moment but at least you'll have an estimate of the yearly growth.
IF OBJECT_ID('tempdb..#dbs') IS NOT NULL
DROP TABLE #dbs
IF OBJECT_ID('tempdb..#Drives') IS NOT NULL
DROP TABLE #Drives
IF OBJECT_ID('tempdb..#Results') IS NOT NULL
DROP TABLE #Results
CREATE TABLE #dbs
(
DBNAME sysname
, DBID INT
, [Total Size in MB] INT
, [Available Space In MB] INT
, DriveLetter CHAR(1)
)
INSERT INTO
#dbs
(
DBNAME
, DBID
, [Total Size in MB]
, [Available Space In MB]
, DriveLetter
)
EXEC sp_MSforeachdb '
USE [?];
SELECT
DB_NAME() As DBNAME
, DB_ID() AS DBID
, SUM(size / 128) AS ''Total Size in MB''
, SUM(size / 128 - CAST(FILEPROPERTY(name , ''SpaceUsed'') AS int) / 128) AS ''Available Space In MB''
, LEFT(physical_name, 1) AS DriveLetter
FROM
[?].sys.database_files
WHERE
type_desc = ''ROWS''
GROUP BY LEFT(physical_name, 1) '
CREATE TABLE #Drives
(
DriverLetter CHAR(1) PRIMARY KEY CLUSTERED
, FreeMBs INT NOT NULL
, FreeGBs AS CONVERT(DECIMAL(18 , 2) , FreeMBs / 1024.0)
)
INSERT INTO
#Drives ( DriverLetter , FreeMBs )
EXEC xp_fixeddrives
--
--SELECT
-- DB_NAME() As DBNAME
-- , DB_ID() AS DBID
-- , SUM(size / 128) AS 'Total Size in MB'
-- , SUM(size / 128 - CAST(FILEPROPERTY(name , 'SpaceUsed') AS int) / 128) AS 'Available Space In MB'
--FROM
-- sys.database_files
--WHERE
-- type_desc = 'ROWS'
--Rémi : I deleted 4 logging tables I had built on March 25th, hence the ±350 MB drop.
;
WITH CTE_Backups ( database_name, BackupDate, MinutesForBackup, GB_backup_size, seqFirst, seqLast )
AS (
SELECT
bs.database_name
, DATEADD(D , 0 , DATEDIFF(D , 0 , bs.backup_start_date)) AS BackupDate
, CONVERT(DECIMAL(18 , 1) , DATEDIFF(s , bs.backup_start_date ,
bs.backup_finish_date)
/ 60.0) AS MinutesForBackup
, CONVERT(DECIMAL(18 , 3) , bs.backup_size / 1024 / 1024 / 1024) AS GB_backup_size
, ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date ) AS seqFirst
, ROW_NUMBER() OVER ( PARTITION BY bs.database_name ORDER BY bs.backup_start_date DESC ) AS seqLast
FROM
msdb.dbo.backupset bs
WHERE
bs.[type] = 'D'
AND bs.is_snapshot = 0 --exclude the Veeam agent that backs up the whole server
-- AND bs.database_name NOT LIKE '%ate%'
-- AND name IS NULL
)
SELECT
CONVERT(INT , dtBackups.[Available Space In GB]
/ CASE WHEN dtBackups.GB_ExpectedDailyGrowth <> 0
THEN dtBackups.GB_ExpectedDailyGrowth
ELSE 0.0001
END) AS DaysUntillDBGrowth
, *
-- INTO
-- #Results
FROM
(
SELECT
a.database_name
, dbs.DriveLetter
, drv.FreeGBs AS FreeGBs_Drive
, CONVERT(DECIMAL(18 , 3) , drv.FreeGBs * 0.85
/ NULLIF(b.GB_backup_size - a.GB_backup_size, 0)
/ NULLIF(DATEDIFF(dd , a.BackupDate , b.BackupDate), 0) * 30.468) AS FreeGBs_Drive_InMonths_WithExpected_DB_Growth
, a.BackupDate AS BackupDate_First
, b.BackupDate AS BackupDate_Last
, DATEDIFF(dd , a.BackupDate , b.BackupDate) AS DaysPeriod
, a.MinutesForBackup AS MinutesForBackup_First
, b.MinutesForBackup AS MinutesForBackup_Last
, b.MinutesForBackup - a.MinutesForBackup AS MinutesForBackup_Delta
, a.GB_backup_size AS GB_backup_size_First
, b.GB_backup_size AS GB_backup_size_Last
, b.GB_backup_size - a.GB_backup_size AS GB_BackupGrowth
--, a.seqLast - a.seqFirst AS QtyofBackups
, CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
/ NULLIF(DATEDIFF(dd , a.BackupDate , b.BackupDate), 0)) AS GB_ExpectedDailyGrowth
, CONVERT(DECIMAL(18 , 3) , ( b.GB_backup_size - a.GB_backup_size )
/ NULLIF(DATEDIFF(dd , a.BackupDate , b.BackupDate), 0) * 365.256) AS GB_ExpectedAnnualGrowth
, CONVERT(DECIMAL(18 , 3) , dbs.[Total Size in MB] / 1024.0) AS [Total Size in GB]
, CONVERT(DECIMAL(18 , 3) , dbs.[Available Space In MB] / 1024.0) AS [Available Space In GB]
FROM
CTE_Backups a
INNER JOIN CTE_Backups b
ON a.seqFirst = b.seqLast
AND a.seqLast = b.seqFirst
AND a.database_name = b.database_name
LEFT OUTER JOIN #dbs dbs
ON b.database_name = dbs.DBNAME
LEFT OUTER JOIN #Drives drv
ON dbs.DriveLetter = drv.DriverLetter
WHERE
a.seqFirst = 1
AND a.seqFirst + a.seqLast > 2 --would always warn on the first day
) dtBackups
ORDER BY
database_name
--IF EXISTS ( SELECT
-- *
-- FROM
-- #Results R
-- WHERE
-- R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0 AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1
-- OR R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30 )
-- BEGIN
-- INSERT INTO
-- dbo.RPT_Space_Warnings
-- (
-- [DaysUntillDBGrowth]
-- , [Warning_Description]
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- )
-- SELECT
-- [DaysUntillDBGrowth]
-- , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1 THEN 'HD IS FULL' ELSE 'AUTOGROWTH WARNING' END AS Warning_Description
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- FROM
-- #Results R
-- WHERE
-- R.DaysUntillDBGrowth >= 0 AND R.DaysUntillDBGrowth <= 30
--UNION ALL -- I want to see 2 warnings in the same day when it's the case... those are the really critical events.
-- SELECT
-- [DaysUntillDBGrowth]
-- , CASE WHEN R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth < 1 THEN 'HD IS FULL_' ELSE 'AUTOGROWTH WARNING_' END AS Warning_Description
-- , [database_name]
-- , [DriveLetter]
-- , [FreeGBs_Drive]
-- , [FreeGBs_Drive_InMonths_WithExpected_DB_Growth]
-- , [BackupDate_First]
-- , [BackupDate_Last]
-- , [DaysPeriod]
-- , [MinutesForBackup_First]
-- , [MinutesForBackup_Last]
-- , [MinutesForBackup_Delta]
-- , [GB_backup_size_First]
-- , [GB_backup_size_Last]
-- , [GB_BackupGrowth]
-- , [GB_ExpectedDailyGrowth]
-- , [GB_ExpectedAnnualGrowth]
-- , [Total Size in GB]
-- , [Available Space In GB]
-- FROM
-- #Results R
-- WHERE
-- R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth >= 0 AND R.FreeGBs_Drive_InMonths_WithExpected_DB_Growth <= 1
-- END
--IF OBJECT_ID('tempdb..#dbs') > 0
-- DROP TABLE #dbs
--
--IF OBJECT_ID('tempdb..#Drives') > 0
-- DROP TABLE #Drives
--
--IF OBJECT_ID('tempdb..#Results') > 0
-- DROP TABLE #Results
--ROLLBACK -- no explicit transaction is opened above; leave this commented out unless you wrap the script in BEGIN TRAN
July 5, 2011 at 9:29 am
You should buy some new, bigger disks; surely the database growing is a good thing?
July 5, 2011 at 9:34 am
steveb. (7/5/2011)
You should buy some new, bigger disks; surely the database growing is a good thing?
Not the only option... but my script will tell it to you just the way it is.
Archiving comes at a high cost as well, in time and resources. Bigger disks are usually a very small job in comparison.
July 5, 2011 at 1:06 pm
Compression can be nice, but results will vary depending on what kind of data is stored in the DB. The question you have to answer is: why is the DB growing this fast? What data is going into the DB to make it grow this way? Once you know that, you can begin to evaluate how to handle it.
I had a DB once that grew at a rate of 1-2 GB per day. It was geographical data, including pictures, and with bitmap data you get almost no compression. We got about 15% compression across the entire DB because of the amount of image data, and it all had to be online. All of the image data got loaded and we ended up at over 4.5 TB. The solution we came up with, though, was allocating more disk space on the SAN because the business said they needed the data online.
In your case, same kind of thing. Figure out why the DB is growing, what the data is, and how much, then talk to management and find out if this growth is normal, from their point of view, and if they require it to be online, enough that they'll buy more disk space to handle it, or if they want to archive/delete some of the data.
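If it helps, here is a minimal sketch for the "what data is it, and how much" part - listing the largest tables by reserved space (standard catalog views; no assumptions about your schema beyond the current database):

-- Hedged sketch: top tables by reserved space in the current database.
SELECT TOP (20)
      s.name + '.' + t.name                                                 AS table_name
    , SUM(CASE WHEN ps.index_id IN (0, 1) THEN ps.row_count ELSE 0 END)     AS [rows]
    , CONVERT(DECIMAL(18,2), SUM(ps.reserved_page_count) * 8 / 1024.0)      AS reserved_MB
    , CONVERT(DECIMAL(18,2), SUM(ps.used_page_count)     * 8 / 1024.0)      AS used_MB
FROM sys.dm_db_partition_stats ps
JOIN sys.tables  t ON t.object_id = ps.object_id
JOIN sys.schemas s ON s.schema_id = t.schema_id
GROUP BY s.name, t.name
ORDER BY reserved_MB DESC;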
Chris
Learning something new on every visit to SSC. Hoping to pass it on to someone else.
July 6, 2011 at 6:49 am
@Ninja - Thanks for the query.
I need to estimate the figure (capturing the growth of the data files of the db) so that the client can mount additional disks and make more free space available.
Can I go ahead and tell them that the projected growth of the db over the next year is 24 GB (as the GB_ExpectedAnnualGrowth column is showing me 24 GB)? Please suggest.
TIA 🙂
July 6, 2011 at 6:52 am
abhishek_dwivedi03 (7/6/2011)
@Ninja - Thanks for the query. I need to estimate the figure (capturing the growth of the data files of the db) so that the client can mount additional disks and make more free space available.
Can I go ahead and tell them that the projected growth of the db over the next year is 24 GB (as the GB_ExpectedAnnualGrowth column is showing me 24 GB)? Please suggest.
TIA 🙂
That's what the script tells you, so yes. I would personally make sure that the db hasn't undergone a recent mass import or mass dump and that the backup history has a lot of entries (months if possible). To do that you can just run the query for the CTE and see the growth from one day to the next for massive jumps (see the sketch below).
Any estimate based on bad data is not going to help. On that end I can't do much in my script to help you.
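A minimal sketch of that sanity check - just the backup history with the size jump from one full backup to the next, so any mass import/dump shows up as an outlier (same filters as the CTE in the script above):

;WITH CTE_Backups AS (
    SELECT
          bs.database_name
        , DATEADD(D, 0, DATEDIFF(D, 0, bs.backup_start_date))             AS BackupDate
        , CONVERT(DECIMAL(18,3), bs.backup_size / 1024 / 1024 / 1024)     AS GB_backup_size
        , ROW_NUMBER() OVER (PARTITION BY bs.database_name
                             ORDER BY bs.backup_start_date)               AS seq
    FROM msdb.dbo.backupset bs
    WHERE bs.[type] = 'D'
      AND bs.is_snapshot = 0
)
SELECT
      cur.database_name
    , cur.BackupDate
    , cur.GB_backup_size
    , cur.GB_backup_size - prev.GB_backup_size AS GB_growth_since_previous
FROM CTE_Backups cur
LEFT JOIN CTE_Backups prev
       ON prev.database_name = cur.database_name
      AND prev.seq = cur.seq - 1
ORDER BY cur.database_name, cur.BackupDate;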
July 6, 2011 at 7:00 am
PS You need to consider a lot more than 24 GB for your estimates.
How many full backups do you keep?
How much space for normal daily log backups, and how many days of those do you keep?
Do you back up the backup folder to a 3rd location?
Do you do test restores?
Do you plan only 6 months or 24 months ahead of time? 24 months means 24 GB * 2 + 15% for growth of growth (or whatever % makes sense for the data; you can also estimate this using the full list of backups and looking at the normal growth over different periods of time).
By how much will you grow the datafile(s)? You need to consider that for all restores and all online dbs.
What about Dev, Test, and QA environments? Do you back up those?
Do you have mass imports planned, maybe a merger with another company? That needs to be taken into account as well.
As you can see, a "little" 24 GB annual growth can quickly turn into a 1 TB allocation on a SAN... which then sits on its own RAID set. And this is only for 2 years of planning (see the rough sketch below).
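To make the arithmetic concrete, a rough back-of-the-envelope sketch - every value below is an assumption for illustration only, so plug in your own numbers:

-- Hedged sketch of the 24-month estimate; all values below are assumed examples.
DECLARE @CurrentSizeGB     DECIMAL(18,2) = 100;   -- assumed current data size
DECLARE @AnnualGrowthGB    DECIMAL(18,2) = 24;    -- from GB_ExpectedAnnualGrowth
DECLARE @PlanningYears     DECIMAL(18,2) = 2;
DECLARE @GrowthOfGrowth    DECIMAL(18,2) = 0.15;  -- the 15% "growth of growth" buffer
DECLARE @FullBackupsKept   INT           = 4;     -- assumed retention
DECLARE @LogBackupDaysKept INT           = 7;     -- assumed retention
DECLARE @DailyLogGB        DECIMAL(18,2) = 1;     -- assumed daily log backup volume

DECLARE @DataGB DECIMAL(18,2) =
    (@CurrentSizeGB + @AnnualGrowthGB * @PlanningYears) * (1 + @GrowthOfGrowth);

SELECT
      @DataGB                                        AS ProjectedDataGB
    , @DataGB * @FullBackupsKept                     AS FullBackupsGB     -- worst case, uncompressed
    , @DailyLogGB * @LogBackupDaysKept               AS LogBackupsGB
    , @DataGB                                        AS TestRestoreCopyGB
    , @DataGB * (2 + @FullBackupsKept)
        + @DailyLogGB * @LogBackupDaysKept           AS TotalGB_Estimate;  -- data + full backups + restore copy + logs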
July 7, 2011 at 12:15 pm
I think it is inappropriate to recommend compression without a LOT more knowledge of the user's database.
In any case, I am curious to know how much data we are talking about here, and what the growth rate is. Why is the growth a problem?
Oh, and like someone else said - stop shrinking! :hehe:
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service