March 12, 2008 at 11:50 am
hi guys, is there a script to get the average percent growth of a database, say for the last two years? I would like to know so I can see how much more disk space to get.
March 12, 2008 at 11:58 am
I do not think there is any way to figure out growth for any past period unless you already have some sort of script or tool configured to capture it.
--www.sqlvillage.com
March 12, 2008 at 12:16 pm
The only way you could calculate this is if you were querying the system tables for that data and storing it in a table for later use. SQL does not store historical file size information - just the current state.
Many DBAs will perform a process of querying their systems' metadata for items such as this to trend later though. If you've done that let us know and we may be able to assist further.
For an example see my article on the subject here: http://www.mssqltips.com/tip.asp?tip=1426
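If you have not set anything up yet, here is a minimal sketch of the kind of collection described above. The table and column names are my own invention for illustration, not from the article; you could schedule the INSERT daily via a SQL Agent job.

```sql
-- Hypothetical sketch: a history table plus a daily collection query.
-- All object names here are illustrative, not an established standard.
CREATE TABLE dbo.FileSizeHistory (
    CaptureDate  datetime      NOT NULL DEFAULT (GETDATE()),
    DatabaseName sysname       NOT NULL,
    LogicalName  sysname       NOT NULL,
    FileType     nvarchar(60)  NOT NULL,  -- ROWS or LOG
    SizeMB       numeric(18,2) NOT NULL
);

-- Capture the current size of every file on the instance.
INSERT INTO dbo.FileSizeHistory (DatabaseName, LogicalName, FileType, SizeMB)
SELECT DB_NAME(database_id),
       name,
       type_desc,
       size * 8.0 / 1024.0   -- size is reported in 8 KB pages
FROM sys.master_files;
```

Once this has run for a few months you can query the table to trend growth per file or per database.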
- Tim Ford, SQL Server MVP
http://www.sqlcruise.com
http://www.thesqlagentman.com
http://www.linkedin.com/in/timothyford
March 12, 2008 at 12:19 pm
thank you! No, I haven't. I guess it's never too late to start! 😀
March 12, 2008 at 12:59 pm
so for now, since I don't have an existing script, I will create one to use in the future. In the meantime, what would be the ideal way to know how much extra space I should get if my disks are getting full?
March 12, 2008 at 1:15 pm
I'll answer that question by explaining how we handle db/log sizing where I work. Most of our databases are vendor-created. I have the Analysts and the vendors determine an estimate of what the database size will be at the end of 3 years (the typical lifespan for servers in our environment). I then pre-size the databases to that size, with the understanding that it is only an estimate based upon projected record counts and estimated usage levels; therefore I do leave autogrow on, but don't rely upon it. I take this approach to reduce external fragmentation more than for space considerations.
Logs are a trickier matter. Obviously, the more frequently you back up your logs, the smaller the log file can be. You get a feel for log sizing after working directly with the DB for a while.
My suggestion is to work with your Analysts and the vendor (if one exists) to determine future projected usage, while capturing the metadata in the new table you're rushing right out to create so you don't have to go through this again.
How big is the database now? Are we talking MB, GB, or TB?
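As a purely illustrative example of the pre-sizing approach described above (the database name, file name, and sizes are made up; adjust to your own 3-year estimate):

```sql
-- Hypothetical example: pre-size a data file to its projected 3-year size,
-- keeping a modest autogrow increment as a safety net only.
ALTER DATABASE SalesDB
MODIFY FILE (NAME = SalesDB_Data, SIZE = 150GB, FILEGROWTH = 1GB);
```

Growing the file once, up front, avoids the repeated small autogrow events that cause the external fragmentation mentioned above.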
- Tim Ford, SQL Server MVP
http://www.sqlcruise.com
http://www.thesqlagentman.com
http://www.linkedin.com/in/timothyford
March 12, 2008 at 1:42 pm
See if this helps:
DECLARE @dbname sysname;
SET @dbname = DB_NAME();

SELECT  CONVERT(char(10), backup_start_date, 111) AS [Date],  -- yyyy/mm/dd format
        CONVERT(char(8),  backup_start_date, 108) AS [Time],
        @dbname AS [Database Name],
        [filegroup_name] AS [Filegroup Name],
        logical_name AS [Logical Filename],
        physical_name AS [Physical Filename],
        CONVERT(numeric(9,2), (backed_up_page_count * 8192) / 1048576) AS [File Size (MB)],
        Growth AS [Growth Percentage (%)]
FROM
(
    SELECT  b.backup_start_date,
            a.backup_set_id,
            a.backed_up_page_count,
            a.logical_name,
            a.[filegroup_name],
            a.physical_name,
            (   -- percentage change versus the previous full backup of this file
                SELECT CONVERT(numeric(5,2),
                       (((a.backed_up_page_count * 8192) * 100.00)
                        / (i1.backed_up_page_count * 8192)) - 100)
                FROM msdb.dbo.backupfile i1
                WHERE i1.backup_set_id =
                (   -- most recent earlier full backup containing the same logical file
                    SELECT MAX(i2.backup_set_id)
                    FROM msdb.dbo.backupfile i2
                         JOIN msdb.dbo.backupset i3
                           ON i2.backup_set_id = i3.backup_set_id
                    WHERE i2.backup_set_id < a.backup_set_id
                      AND i2.file_type = 'D'        -- data files only
                      AND i3.database_name = @dbname
                      AND i2.logical_name = a.logical_name
                      AND i2.logical_name = i1.logical_name
                      AND i3.type = 'D'             -- full backups only
                )
                AND i1.file_type = 'D'
            ) AS Growth
    FROM msdb.dbo.backupfile a
         JOIN msdb.dbo.backupset b
           ON a.backup_set_id = b.backup_set_id
    WHERE b.database_name = @dbname
      AND a.file_type = 'D'
      AND b.type = 'D'
) AS Derived
WHERE (Growth <> 0.0) OR (Growth IS NULL)
ORDER BY logical_name, [Date];
SQL DBA.
March 12, 2008 at 2:48 pm
thank you!!! 100 GB
March 13, 2008 at 10:23 am
hi, thanks for your script. Does your script do the same thing, i.e. create a table with all the metadata information from the system tables?
March 13, 2008 at 11:02 am
Nice script, sanjayattray 🙂
hi, thanks for your script. Does your script do the same thing, i.e. create a table with all the metadata information from the system tables?
No, it is basing growth on how many pages were backed up. This method is more accurate than basing growth on the size of the backup files because it depicts a realistic measure of what actually happened. I would not call this a perfect growth analysis, but it is a good start.
You should still begin the practice of capturing database metrics so that capacity planning will not be such an issue.
This site has plenty of growth-monitoring scripts. You can find them in the script section.
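Once you have been capturing sizes for a while, a rough average-growth query is easy to write. The sketch below assumes a home-grown per-file history table (dbo.FileSizeHistory is an invented name) and that the database only grows over time, so MIN/MAX map to the earliest/latest totals:

```sql
-- Hypothetical sketch: average growth per sample from a self-maintained
-- history table. dbo.FileSizeHistory and its columns are assumed names.
WITH Daily AS (
    SELECT DatabaseName,
           CONVERT(char(10), CaptureDate, 111) AS CaptureDay,  -- yyyy/mm/dd
           SUM(SizeMB) AS SizeMB        -- total across all files that day
    FROM dbo.FileSizeHistory
    GROUP BY DatabaseName, CONVERT(char(10), CaptureDate, 111)
)
SELECT DatabaseName,
       MIN(CaptureDay) AS FirstSample,
       MAX(CaptureDay) AS LastSample,
       (MAX(SizeMB) - MIN(SizeMB))
           / NULLIF(COUNT(*) - 1, 0) AS AvgGrowthPerSampleMB
FROM Daily
GROUP BY DatabaseName;
```

Multiply the average growth per sample by the number of samples in your purchasing horizon to get a rough figure for how much extra disk to buy.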
March 13, 2008 at 11:03 am
hi, thanks for your script. Does your script do the same thing, i.e. create a table with all the metadata information from the system tables?
In summary, it is basing growth off the backup history.
March 13, 2008 at 11:54 am
thank you all!!!:)
March 14, 2008 at 4:32 am
I would advise reading this article:
http://www.sqlservercentral.com/articles/Integration+Services/61774/
It has been very useful.
March 14, 2008 at 6:52 am
thank you! I did read the article. I think at this point I will go for a much simpler solution just to get the disk space usage for reports, but I will certainly upgrade to that one in the near future. For now I am using the code in this article: http://www.sqlservercentral.com/scripts/Maintenance+and+Management/31929
I am doing some testing 😀
again thank you!
March 18, 2008 at 3:02 pm
hi, I am using the script from your article and it works great. Now, is there a way to run that script against different servers in an SSIS package? If so, can you give me the steps, please?