November 18, 2008 at 1:33 pm
I have a database I'm monitoring from a third party. Generally the way it works is that it polls data from different locations and archives it in tables that hold minute, hourly, daily, monthly, and yearly data. Each night, data from a lower-level table is summarized and placed in the next one up (e.g. hourly data summarized and placed in the daily table). The vendor has allowed us to customize the database (essentially a data warehouse) and have anything older than, say, one year purged from the system.
In essence, this database will keep growing and then level off once the one-year purging kicks in. I've been monitoring this data warehouse for the past sixty days and the average growth is about 250-300 MB each day. It levelled off, but started growing again once we told the database to monitor more locations and services.
As the database is set to grow in 10% increments, would it be better to fix the growth at, say, an absolute 300 MB? Assuming a reasonable level of confidence in the growth rate, there shouldn't be anything wrong performance-wise with a fixed growth increment, should there?
Thanks.
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein
November 18, 2008 at 2:56 pm
Yes, it is better to grow in fixed increments rather than by a percentage. Percentage growth may be fine for a small db (10% of a 5 GB database is only a 500 MB increase), but it's much different when that db reaches 100 GB+ and every growth event is 10 GB or more. Also, if the log grows by percent while it's small you will end up with a large number of virtual log files (see http://sqlblogcasts.com/blogs/tonyrogerson/archive/2007/07/25/sql-2000-yes-lots-of-vlf-s-are-bad-improve-the-performance-of-your-triggers-and-log-backups-on-2000.aspx).
It would be best to grow the database out to a size large enough that it essentially eliminates the need to grow again for 6 months to a year (you state 250-300 MB/day, so that's roughly 10 GB/month). That said, you must continue to monitor the size and manually grow the data file again as it fills up.
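Something along these lines, for example (a rough sketch only - "ArchiveDW" and its logical file names are placeholders, so check sys.master_files for the real names and current sizes first):

-- Sketch only: ArchiveDW and the logical file names below are placeholders.
-- Find the real logical names and current sizes first:
SELECT name, size / 128 AS size_mb
FROM sys.master_files
WHERE database_id = DB_ID('ArchiveDW');

-- Pre-size the data file so it should not need to grow for roughly a year,
-- and switch autogrowth from 10% to a fixed MB increment as a safety net.
-- (SIZE must be larger than the file's current size.)
ALTER DATABASE ArchiveDW
MODIFY FILE (NAME = ArchiveDW_Data, SIZE = 120000MB, FILEGROWTH = 500MB);

-- Same idea for the log file.
ALTER DATABASE ArchiveDW
MODIFY FILE (NAME = ArchiveDW_Log, SIZE = 8192MB, FILEGROWTH = 512MB);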
DAB
November 18, 2008 at 7:11 pm
I agree... do an estimate to figure out what the total size should be with a year of data and add 20 to 25% to that number. Set the database to that size.
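To put rough numbers on that, using the 250-300 MB/day figure from earlier in the thread (illustrative only):

300 MB/day x 365 days = 109,500 MB, or roughly 107 GB in a year
107 GB + 25% headroom ≈ 134 GB

So pre-sizing the data file somewhere in the 130-135 GB range would be in the right ballpark for this case.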
--Jeff Moden
Change is inevitable... Change for the better is not.
November 18, 2008 at 7:33 pm
Yes, it is always better to specify file growth in MB than in %. Estimate how much the file needs to grow and specify that amount in MB.
November 18, 2008 at 8:08 pm
No... I'm not talking about "growth" rates.... I'm talking about setting the database size correctly so it never has to grow. Database growth should never take you by surprise because it is the nature of automatic growth to occur at the time you can least afford the wait.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 19, 2008 at 8:00 am
Exactly Jeff! I've had a db start to grow in the middle of the day during peak web traffic, and it brought everything to a crawl until the growth completed. After that I created a daily report showing the current size of each database, the size from the previous day, and a delta (% change and MB change). Not only did that give an estimate of when the db would be full, it also turned up a few surprises: after 2 weeks I caught an abnormal change where the db went from growing about 150 MB/day (0.8%) to over 1 GB in a day. And THAT uncovered an entirely different bad design.
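A rough sketch of the kind of collection involved (not the exact code - dbo.DatabaseSizeHistory is just a name invented for the example, and sys.master_files reports size in 8 KB pages, hence the divide by 128):

-- History table to hold one row per database per day.
CREATE TABLE dbo.DatabaseSizeHistory
(
    SampleDate   datetime      NOT NULL,
    DatabaseName sysname       NOT NULL,
    SizeMB       decimal(18,2) NOT NULL
);

-- Run daily (e.g. from a SQL Agent job): snapshot every database's total file size.
INSERT dbo.DatabaseSizeHistory (SampleDate, DatabaseName, SizeMB)
SELECT DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0),   -- today's date with the time stripped
       DB_NAME(mf.database_id),
       SUM(mf.size) / 128.0                          -- size is in 8 KB pages; 128 pages = 1 MB
FROM sys.master_files AS mf
GROUP BY mf.database_id;

-- Compare today with yesterday to get the MB and % deltas for the report.
SELECT t.DatabaseName,
       t.SizeMB,
       t.SizeMB - y.SizeMB                      AS DeltaMB,
       100.0 * (t.SizeMB - y.SizeMB) / y.SizeMB AS DeltaPct
FROM dbo.DatabaseSizeHistory AS t
JOIN dbo.DatabaseSizeHistory AS y
  ON y.DatabaseName = t.DatabaseName
 AND y.SampleDate = DATEADD(dd, -1, t.SampleDate)
WHERE t.SampleDate = DATEADD(dd, DATEDIFF(dd, 0, GETDATE()), 0);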
DAB
November 19, 2008 at 8:20 am
Thanks everybody, you've given me food for thought. Unfortunately this DB was set up before I joined my company, but at least in the future, if I get any projections, the easiest approach is to size the database accordingly up front, which means fewer problems down the road.
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein
November 19, 2008 at 9:02 pm
Gaby A. (11/19/2008)
Thanks everybody, you've given me food for thought. Unfortunately this DB was set up before I joined my company, but at least in the future, if I get any projections, the easiest approach is to size the database accordingly up front, which means fewer problems down the road.
So, write a bit of code to track the size weekly... in a month, you'll have enough data to make the correct estimate.
--Jeff Moden
Change is inevitable... Change for the better is not.
November 20, 2008 at 7:02 am
Jeff Moden (11/19/2008)
Gaby A. (11/19/2008)
Thanks everybody, you've given me food for thought. Unfortunately this DB was set up before I joined my company, but at least in the future, if I get any projections, the easiest approach is to size the database accordingly up front, which means fewer problems down the road.
So, write a bit of code to track the size weekly... in a month, you'll have enough data to make the correct estimate.
I did a while back: http://www.sqlservercentral.com/scripts/sp_spaceused/64271/ is a table-friendly version of sp_spaceused. That's what got me thinking. I had always heard that with a fixed growth increment there was a danger of fragmenting the indexes, but it's not much of an issue here since defragmentation and optimization jobs run regularly on this particular DB.
🙂
Gaby
"In theory, theory and practice are the same. In practice, they are not." - Albert Einstein