September 7, 2007 at 1:42 pm
I've been trying to determine the best value for file growth on both the log and data files. I've set up a job that grabs the size of the files to help me see how often the files need to grow. From what I understand, if I set it too low, the files will have to grow often, causing short performance hits and a lot of fragmentation -- if I set it too high, each growth will take longer and really affect performance.
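For anyone curious, a minimal sketch of that kind of size-grabbing query (assuming SQL 2005's sys.database_files catalog view; FILEPROPERTY reports 8KB pages, so dividing by 128 converts to MB):

SELECT name,
size / 128.0 AS size_mb, -- space allocated to the file, in MB
FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb, -- pages actually in use, in MB
100.0 * FILEPROPERTY(name, 'SpaceUsed') / size AS pct_full
FROM sys.database_files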
I'm planning to set the values to a static number...just not sure what yet. The database I am interested in updating is currently about 11GB and highly transactional.
Does anyone have any advice or warnings on calculating this value?
I would also like to set up an alert that will force the file to grow during off-peak hours if it is over 85% full. There are commands for shrinking a file, but I have not found any for forcing a file to grow. Does anyone know how I would go about doing this? Is there a dbcc growfile or something similar?
Thanks in advance!
September 7, 2007 at 2:28 pm
I found the way to manually force a file to grow...
ALTER DATABASE <database name>
MODIFY FILE
(
NAME = <logical file name>, -- the logical name, not the physical path
SIZE = <new size>MB -- must be larger than the file's current size
)
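For instance (database and file names here are made up for illustration; the statement fails if SIZE isn't larger than the current size):

ALTER DATABASE Sales
MODIFY FILE
(
NAME = Sales_Data, -- hypothetical logical data file name
SIZE = 15360MB -- grow the data file to 15 GB
)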
I'm still curious as to how long it will take a file to grow and if there are other factors that affect this.
September 7, 2007 at 2:35 pm
You should grow your files "manually" at an appropriate time, never relying on autogrow (which should only be there as a fail-safe feature).
You need to monitor how "full" your files are, and create an alert for when you are approaching the limit so that you can plan the "manual" grow at your next maintenance window.
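For the log file there is a ready-made counter you can hang a SQL Agent performance-condition alert on. A sketch, assuming a default instance and reusing the 85% figure from the question ('MyDb' is a placeholder; for data files you would schedule a FILEPROPERTY check instead):

EXEC msdb.dbo.sp_add_alert
@name = N'MyDb log nearing full', -- hypothetical alert name
@performance_condition = N'SQLServer:Databases|Percent Log Used|MyDb|>|85'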
Cheers,
* Noel
September 7, 2007 at 3:02 pm
And - don't forget to defragment your DB and log drives. Ideally, do it during offline windows if your org has them. If not - DisKeeper can keep your disk fragmentation to a minimum so it doesn't interfere with performance, AND it will operate while the server is online.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
September 7, 2007 at 3:10 pm
Matt,
Thanks. I perform a dbcc dbreindex / dbcc indexdefrag weekly - during off peak hours. We don't defragment our drives because they are on the SAN. From what I understand, defragmentation does not do much in our situation.
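For anyone searching later, a sketch of what that weekly maintenance can look like (the table name and index id are placeholders):

DBCC DBREINDEX ('dbo.Orders') -- offline rebuild of all indexes on the table
DBCC INDEXDEFRAG (0, 'dbo.Orders', 1) -- online defrag of index id 1 in the current database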
DisKeeper - that must be a 3rd party tool?
September 7, 2007 at 4:49 pm
Disk defragmentation is effective no matter what architecture you're on. Disk striping and all RAID configs work better when files are in fewer, bigger chunks. Minor fragmentation won't affect you much, but enough disk fragmentation will make your SAN drives perform like old IDE drives (trust me - it's happened to me). I've heard that nonsense before from my network admin - so I had him run the test, and he's now a convert. Try turning autogrow on with a really small increment, say 50MB, and a shrink mechanism on, and in about a month your SAN will be hurting... Not recommended for the faint of heart.
Of course - if you right-size the database to begin with, and/or you manage to grow it in as few increments as possible - you won't have the issue I ran into. Same thing with logs - try to find their comfortable size (the one that doesn't make them grow anymore), defrag the drive at that point (once), and don't shrink them again. If you plan it right - they won't grow out of control (because you're committing, and running BACKUP LOG - or BACKUP LOG ... WITH TRUNCATE_ONLY - often enough, etc...), and they won't be fragmented to all hell.
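A sketch of the two flavors mentioned (database name and path are made up; TRUNCATE_ONLY discards the log without backing it up, breaking point-in-time recovery, and is deprecated as of SQL 2005):

BACKUP LOG Sales TO DISK = N'E:\Backups\Sales_log.trn' -- regular log backup keeps the log in check
BACKUP LOG Sales WITH TRUNCATE_ONLY -- empties the log without a backup; use with care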
And yes - DisKeeper is a disk defragmenter. It's the "for pay" equivalent of Windows Defrag. Much more effective, much less taxing on the server, and doesn't seem to get in anyone's way while they're working. Worth every penny in my mind.
One way to try to size things is to take the physical architecture into account. Most people would recommend RAID 10 (mirror+stripe) for database drives. If you size all of your databases to be exact increments of the SAN stripe size, and similarly make the growth increment some multiple of that stripe size, you will more or less ensure that each disk read from the SAN (one stripe at a time read into the SAN buffer) is "all relevant" to that specific database. Now - some may grow at 1x stripe, some at 10x, but at least you're getting the logical storage to work with the physical storage.
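For example, to pin the growth increment to a fixed multiple of the stripe (names and the 512MB figure are purely illustrative):

ALTER DATABASE Sales
MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 512MB) -- fixed increment instead of a percentage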
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?