
Capacity Planning – Monitor and Extrapolate


I wrote a post recently on capacity planning, and then thought of one more thing. In addition to planning for the future for new systems, you also need to regularly monitor your existing ones.

Planning for the future, IMHO, isn't something you want to do when you have a crisis, like running out of disk space. A good DBA is proactive, monitoring resource usage and extrapolating out regularly to get an idea of:

  1. when you'll overwhelm your current hardware
  2. how big to go on the next upgrade.

There's nothing worse than your boss complaining about performance, you recommending an upgrade, and then having no idea what to ask for. You also don't want to get caught in an emergency when nothing has changed in your workload, other than you weren't paying attention.

I used to do this in a few ways, and there are numerous options today. You can use a product like SQL Monitor, use the Management Data Warehouse (MDW) from Microsoft if you have Enterprise Edition, use other products, or roll your own. I have done all of these except MDW, and they have all worked well.

In terms of memory, I haven't done this in some time, so I'll let you get the details from another source on how to monitor usage over time on modern versions. Here's a good thread to get you started.

In terms of CPU, I tend to monitor the raw CPU percentage over time, using a broad average across a whole day, watching the peaks. I would trend this out for 3-4 months, and see if it is increasing. Since CPUs can be hard to upgrade, you want to know this well in advance.
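If you want a quick way to pull those numbers from SQL Server itself, one option is the well-known scheduler monitor ring buffer query, sketched below. The DMV only holds the last few hours of once-a-minute samples, so you still need to capture the output somewhere to build the trend.

-- One CPU sample per minute from the scheduler monitor ring buffer, newest first
DECLARE @ts_now BIGINT;

SELECT @ts_now = cpu_ticks / (cpu_ticks / ms_ticks)
FROM sys.dm_os_sys_info;

SELECT DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS sample_time,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
       record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS idle_pct
FROM (SELECT [timestamp], CONVERT(XML, record) AS record
      FROM sys.dm_os_ring_buffers
      WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
        AND record LIKE N'%<SystemHealth>%') AS rb
ORDER BY sample_time DESC;

Store a daily average of that in an administrative database and the 3-4 month trend is a simple GROUP BY away.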

Disk Space

This is the big one, and quite embarrassing when you can’t capacity plan here. There are numerous ways to do this, but here’s what I’ve done.

I monitor backup size every day and store it in a small administrative database. If I move the database, I move the backup information to the new instance as well. You can query the msdb tables for the sizes.

SELECT DATEPART(yyyy, backup_finish_date) AS 'backup year',
       DATEPART(mm, backup_finish_date) AS 'backup month',
       DATEPART(dd, backup_finish_date) AS 'backup day',
       database_name,
       SUM(backup_size) AS 'backup_size',
       type
FROM msdb.dbo.backupset
GROUP BY database_name,
         type,
         DATEPART(yyyy, backup_finish_date),
         DATEPART(mm, backup_finish_date),
         DATEPART(dd, backup_finish_date)
ORDER BY DATEPART(yyyy, backup_finish_date),
         DATEPART(mm, backup_finish_date),
         DATEPART(dd, backup_finish_date),
         database_name

That's a quick query, and from there you can build summaries of sizes by type of backup, convert to GB, and so on. I used to run a similar query every day and do three things:

  • store the results
  • compare the last backup to the one before it
  • extrapolate the growth of backups to estimate when they would reach the disk size

Obviously I wanted to track the results, and each month I would roll the previous year's sizes up into averages so I wasn't storing a crazy amount of data.
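To make that concrete, here's a minimal sketch of the storage side. The table dbo.BackupSizeHistory and its columns are just my illustration, not a standard; the INSERT converts msdb's byte counts to GB and captures yesterday's backups.

CREATE TABLE dbo.BackupSizeHistory
(
    capture_date   DATE          NOT NULL,
    database_name  SYSNAME       NOT NULL,
    backup_type    CHAR(1)       NOT NULL,  -- msdb's codes: D = full, I = differential, L = log
    backup_size_gb DECIMAL(18,3) NOT NULL,
    CONSTRAINT PK_BackupSizeHistory PRIMARY KEY (capture_date, database_name, backup_type)
);

-- Daily capture: yesterday's backups, bytes converted to GB
INSERT dbo.BackupSizeHistory (capture_date, database_name, backup_type, backup_size_gb)
SELECT CAST(backup_finish_date AS DATE),
       database_name,
       type,
       SUM(backup_size) / 1073741824.0
FROM msdb.dbo.backupset
WHERE backup_finish_date >= CAST(GETDATE() - 1 AS DATE)
  AND backup_finish_date <  CAST(GETDATE() AS DATE)
GROUP BY CAST(backup_finish_date AS DATE), database_name, type;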

I compared each day's backup with the previous one to catch weird things. If the backups differed in size by more than 20%, I raised a flag. This was to catch changes in process, or potential errors from data loads or imports. Anything that changes 20% in a day can be a problem, so this allowed me to find problems quickly.
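Here's one way to write that check, assuming the hypothetical dbo.BackupSizeHistory table sketched above (LAG needs SQL Server 2012 or later):

-- Flag backups that changed size by more than 20% versus the previous capture
WITH sized AS
(
    SELECT capture_date,
           database_name,
           backup_type,
           backup_size_gb,
           LAG(backup_size_gb) OVER (PARTITION BY database_name, backup_type
                                     ORDER BY capture_date) AS prev_size_gb
    FROM dbo.BackupSizeHistory
)
SELECT capture_date, database_name, backup_type, backup_size_gb, prev_size_gb
FROM sized
WHERE prev_size_gb > 0
  AND ABS(backup_size_gb - prev_size_gb) / prev_size_gb > 0.20;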

Lastly, I aggregated all backups on the instance, extrapolated the growth across weeks, and estimated when I'd fill the disk. When this was inside of two months, I knew I'd better start asking for more space.
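The arithmetic is a crude linear fit, but for the question of whether to ask for disk this quarter, crude is usually enough. A sketch against the same hypothetical table, with @free_gb standing in for however you obtain free space on the backup drive:

DECLARE @free_gb DECIMAL(18,3) = 500;  -- illustrative: free GB on the backup drive

WITH daily AS
(
    -- total full-backup footprint per day across the instance
    SELECT capture_date, SUM(backup_size_gb) AS total_gb
    FROM dbo.BackupSizeHistory
    WHERE backup_type = 'D'
    GROUP BY capture_date
)
-- crude: treats the smallest and largest daily totals as endpoints of steady growth
SELECT (MAX(total_gb) - MIN(total_gb))
           / NULLIF(DATEDIFF(week, MIN(capture_date), MAX(capture_date)), 0) AS weekly_growth_gb,
       @free_gb * DATEDIFF(week, MIN(capture_date), MAX(capture_date))
           / NULLIF(MAX(total_gb) - MIN(total_gb), 0) AS weeks_until_full
FROM daily;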
