August 7, 2018 at 7:44 am
Hi Experts,
Which option is best: filegrowth by 10%, or growth in megabytes, for a large database that grows by an average of 40GB a week?
Thanks
TY
August 7, 2018 at 7:47 am
tmmutsetse - Tuesday, August 7, 2018 7:44 AM
Hi Experts,
Which option is best: filegrowth by 10%, or growth in megabytes, for a large database that grows by an average of 40GB a week?
Thanks
TY
Ideally you are managing the growth of the database instead of relying on auto growth to add space when needed. Auto growth should be for unexpected growth needs. When it is used, you should use a fixed size for the growth. Using 10% will grow the file in ever-increasing increments, since the growth is based on the current size of the database, not its original size.
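For example, a minimal sketch of setting a fixed growth increment ([MyBigDB] and the logical file name N'MyBigDB_Data' are placeholders, not anything from this thread):

ALTER DATABASE [MyBigDB]
    MODIFY FILE (NAME = N'MyBigDB_Data', FILEGROWTH = 10240MB);  -- fixed 10GB growth, not 10%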
August 7, 2018 at 11:46 am
Percentage growth is always a terrible idea; it usually leads to a surprise growth coupled with a shortage of disk space and that overwhelming feeling that you really are going to have to do a shrinkdb now.
Set it to a sensible growth in MB, not too big but also not too small. And then set up processes to monitor your database sizes so you never have surprising increases anyway.
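Something along these lines works for the monitoring piece; run it in the database you care about (sizes come back in 8KB pages, so dividing by 128.0 converts to MB):

SELECT  name AS logical_name,
        size / 128.0 AS size_mb,
        size / 128.0 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128.0 AS free_mb
FROM    sys.database_files;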
August 8, 2018 at 2:24 am
Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?
Thanks in Advance
August 8, 2018 at 6:44 am
Again, as others have said, you want to avoid auto-growing at all costs due to the performance hit. Without knowing your environment or application, I would set the initial size of your database to account for all of the expected growth, based on the total amount of space you have on the data drive, without filling it or triggering any monitoring alarms. For example, if you have a 500GB drive which only houses the data file for that database, you could go ahead and set the initial size to 400GB (409600MB), which would give you 10 weeks of growth.
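If it helps, that one-time sizing would look something like this (again, the database and logical file names are placeholders):

ALTER DATABASE [MyBigDB]
    MODIFY FILE (NAME = N'MyBigDB_Data', SIZE = 409600MB);  -- 400GB up front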
August 8, 2018 at 6:44 am
In addition to the other advice, also ensure that Instant File Initialization is enabled (unless your environment prevents this). Google it if you're not sure what it is. (The service account will need the Perform Volume Maintenance Tasks right.)
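On SQL Server 2016 SP1 and later you can check it from T-SQL (the column doesn't exist on older builds, where you'd check the service account's rights instead):

SELECT  servicename,
        instant_file_initialization_enabled
FROM    sys.dm_server_services
WHERE   servicename LIKE 'SQL Server (%';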
August 8, 2018 at 6:57 am
tmmutsetse - Wednesday, August 8, 2018 2:24 AM
Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?
Thanks in Advance
40GB?
But seriously, manage the growth yourself instead of relying on the autogrow, as Lynn already stated. Personally, I'd add, let's say, 200GB to the database. That should let it grow for five weeks. Set the autogrow to something reasonable like 20GB to give yourself a little padding in case something bad goes on. Then monitor the database space and add another 200GB as needed. Better to do one big chunk occasionally than lots and lots of tiny chunks; that leads to serious problems too.
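A rough sketch of that "monitor and add a big chunk" routine; every name and threshold here is illustrative, so adjust to taste:

DECLARE @size_mb int, @free_mb int, @sql nvarchar(400);

-- Current size and free space of the data file, in MB.
SELECT @size_mb = size / 128,
       @free_mb = size / 128 - CAST(FILEPROPERTY(name, 'SpaceUsed') AS int) / 128
FROM   sys.database_files
WHERE  name = N'MyBigDB_Data';

IF @free_mb < 51200  -- under ~50GB of headroom left: add one 200GB chunk
BEGIN
    SET @sql = N'ALTER DATABASE [MyBigDB] MODIFY FILE '
             + N'(NAME = N''MyBigDB_Data'', SIZE = '
             + CAST(@size_mb + 204800 AS nvarchar(20)) + N'MB);';
    EXEC sys.sp_executesql @sql;
END;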
40GB/week of growth, every single week, is quite a lot. You had better be investing in a very big SAN or lots of drives right now. You need a terabyte every 6 months; you'll be at 4TB in two years, and that assumes you don't grow faster. That's a big database. You have a number of interesting problems coming up, not just autogrowth: backups, restores, index maintenance, possibly a need for partitioning for data management... I'd start studying how people manage large databases.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 8, 2018 at 8:02 am
Thanks Grant,
Yes, 40GB is a lot, but I am creating some partitions. I always archive the data, then delete the partitions. I am still new to the organization and I am trying to find a better way to manage this.
Many thanks
T.
August 8, 2018 at 12:28 pm
tmmutsetse - Wednesday, August 8, 2018 2:24 AM
Thank you. If my DB grows by 40GB a week, what is a sensible growth in MB that I should set?
Thanks in Advance
Add 2 terabytes to an existing file over the weekend and you will be good for a year.
August 8, 2018 at 2:58 pm
NewBornDBA2017 - Wednesday, August 8, 2018 12:28 PM
Add 2 terabytes to an existing file over the weekend and you will be good for a year.
Not a good idea if you ever need to do a restore to a lesser system.
As for all the worries about auto-growth causing a performance hit: if you have "Instant File Initialization" enabled, auto-growth of data files just doesn't cause a hit large enough to be called a "performance issue". Heh... and if you rebuild just one large index on the system in the full recovery model, you won't need to manage auto-growth, because the rebuild will add about 120% of the size of the index to the MDF if that space isn't already available.
Log file? Different issue.
--Jeff Moden
Change is inevitable... Change for the better is not.
August 8, 2018 at 3:07 pm
Is this a warehousing database that grows at the 40GB a week, an OLTP database, or maybe a database strictly for reports? The way the DB is used should also be part of the equation as to how you grow it. Ideally you grow the DB as few times as possible to decrease fragmentation and VLFs. You can grow it monthly by the likes of 200GB; that gives you control, you know exactly how much you are growing by (versus percentages), and it's just 12 growth spurts a year... LOL. Just my .02 🙂
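If you want to eyeball the VLF count, sys.dm_db_log_info is available from SQL Server 2016 SP2 onward (DBCC LOGINFO gives a row per VLF on older versions); the database name here is a placeholder:

SELECT COUNT(*) AS vlf_count
FROM   sys.dm_db_log_info(DB_ID(N'MyBigDB'));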
Good Luck
D
DHeath
August 8, 2018 at 3:34 pm
Yeah, can't say I've ever had an issue with auto-growth for data files, as long as you don't set it to something silly like 1MB increments, where every actual growth fires off a whole slew of small file increases. Do make sure things are set up to allow IFI, though, because that removes pretty much all the potential overhead of data file growth. And even if you have auto-growth on, you should still be monitoring your disk usage, both so you know how long you've got before you'll need more capacity, and so that you can identify sudden spikes where a change in process etc. has made significant changes in your storage requirements that you may need to get ahead of.
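For the disk side of that monitoring, something like this shows free space on every volume holding a database file (one sketch among many ways to do it):

SELECT DISTINCT
       vs.volume_mount_point,
       vs.total_bytes     / 1073741824.0 AS total_gb,
       vs.available_bytes / 1073741824.0 AS free_gb
FROM   sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;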
August 8, 2018 at 4:50 pm
The idea, really, is that you should control the growth of the database instead of letting it grow whenever it needs to grow. And I would stay away from % growth for data and log files.
August 9, 2018 at 9:09 am
Jeff Moden - Wednesday, August 8, 2018 2:58 PM
Not a good idea if you ever need to do a restore to a lesser system.
As for all the worries about auto-growth causing a performance hit: if you have "Instant File Initialization" enabled, auto-growth of data files just doesn't cause a hit large enough to be called a "performance issue". Heh... and if you rebuild just one large index on the system in the full recovery model, you won't need to manage auto-growth, because the rebuild will add about 120% of the size of the index to the MDF if that space isn't already available.
Log file? Different issue.
It's not a good idea because of a restore to a lesser system? Yes sir, I agree with you on that, but it also depends on how frequently he refreshes the Dev/Test environment from prod. I guess changing the autogrowth setting to 40GB or 80GB or whatever, instead of using 10%, should help, which everyone has already talked about. I wonder if he has a job which inserts millions upon millions of records into a table; if so, changing the job timing to run off-hours should help and shouldn't impact users. Just saying.
August 9, 2018 at 12:08 pm
NewBornDBA2017 - Thursday, August 9, 2018 9:09 AM
It's not a good idea because of a restore to a lesser system? Yes sir, I agree with you on that, but it also depends on how frequently he refreshes the Dev/Test environment from prod. I guess changing the autogrowth setting to 40GB or 80GB or whatever, instead of using 10%, should help, which everyone has already talked about. I wonder if he has a job which inserts millions upon millions of records into a table; if so, changing the job timing to run off-hours should help and shouldn't impact users. Just saying.
No, the frequency of restores and the autogrowth settings don't matter at all during restores. If you have, for example, a terabyte of unused or unallocated space in your production database, the restore will build that same amount of free space in the lesser environment. If there isn't enough room for that, as discovered by the prechecks that SQL Server does, it will simply not allow the restore. And don't forget that if the poo hits the fan in a DR situation and you have to sacrifice some other lesser machine to act as your production box, it may not fit, and then the fan also turns to poo. 😀
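You can see what those prechecks will be up against before you attempt the restore; the backup path here is hypothetical:

-- Size includes unallocated space inside each file, so this is what the
-- target server must actually be able to hold.
RESTORE FILELISTONLY
FROM DISK = N'\\backupshare\MyBigDB_full.bak';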
--Jeff Moden
Change is inevitable... Change for the better is not.