April 14, 2010 at 6:23 pm
Hello Experts
I need some expert suggestions from you.
Nowadays I am getting many alerts from the production servers saying the data or log files can't grow by the autogrow size specified. Initially the alerts were only generated for database files where autogrowth is set to 10%, but now alerts are coming from different servers where autogrowth is set to 40 MB. This is strange.
Can't the OS allocate 40 MB on the disk? There is no issue with the disk - it has plenty of free space for the files to grow.
How can I prevent these errors?
Please suggest.
Thanks.
April 14, 2010 at 8:34 pm
Well, first of all you shouldn't be allowing your databases to autogrow. You should manually grow them out to at least 6 months of space and monitor the usage. Schedule a slow time for growing the files and add enough space to keep a comfortable amount of free space available.
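For example, a minimal sketch of a manual grow run during a quiet window - the database name, logical file name and target size below are placeholders you would replace with your own:

-- Grow the data file out manually during a slow period
-- (YourDB, YourDB_Data and the 50 GB target are placeholders)
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_Data, SIZE = 50GB);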
Second, if your log files are growing - then somebody must have incorporated a shrink operation and now they are autogrowing. Or, you have databases in full or bulk-logged recovery model and you are not backing up the log on a regular basis (every hour, at least).
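As a rough sketch, a regular log backup (normally scheduled through a SQL Agent job) looks like this - the database name and backup path are placeholders:

-- Back up the log frequently so the space inside it can be reused
-- instead of the file having to grow
BACKUP LOG YourDB
TO DISK = N'X:\SQLBackups\YourDB_log.trn';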
Third, since growing the files can be an expensive operation - your applications are timing out during the growth. When the process that initiated the autogrowth times out - the autogrowth is rolled back and the next process to initiate it will then time out, repeating the cycle.
Now, for data files you can make sure you have instant file initialization enabled (look it up in BOL). With that enabled, growing the files will not take nearly as long. But make sure you read up on the topic before you decide to enable it.
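If you want to check whether instant file initialization is actually in effect, one common approach (a sketch, not the only way) is to turn on trace flags 3004 and 3605, grow a data file or create a throwaway database, and then read the error log - if you see zeroing messages for the .mdf, instant file initialization is not enabled:

-- Write file-zeroing messages to the SQL Server error log
DBCC TRACEON (3004, 3605, -1);
GO
-- ...grow a data file or create a test database here, then inspect the log
EXEC xp_readerrorlog;   -- look for "Zeroing" entries against the data file
GO
DBCC TRACEOFF (3004, 3605, -1);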
For log files - you cannot use instant file initialization, so it is important that you size the log appropriately and leave it alone. Don't shrink it and then let it grow back - that causes quite a lot of performance issues, creates file-level fragmentation and leaves you with too many VLFs. Look up Kimberly Tripp and VLFs on Google and read her articles on improving log performance and how the number of VLFs affects performance.
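To see how many VLFs a log currently has, DBCC LOGINFO (undocumented, but widely used) returns one row per VLF:

USE YourDB;     -- placeholder name
GO
DBCC LOGINFO;   -- one row per VLF; hundreds or thousands of rows is a red flag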
Finally, why 40 MB? That seems quite small to me for growing a data file - not to mention the log file. If your database is of any significant size and processing even a small to moderate amount of transactions, 40 MB will be used up in no time, causing the autogrow to happen a lot.
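If you do leave autogrow on as a safety net, set a more sensible fixed increment than 40 MB - for example (placeholder names, and a size you would tune to your own workload):

-- Autogrow should only be a safety net, not the sizing strategy
ALTER DATABASE YourDB
MODIFY FILE (NAME = YourDB_Data, FILEGROWTH = 512MB);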
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
April 15, 2010 at 8:45 am
You have already had the lecture on why not to autogrow, so I will skip that.
If a 40 MB autogrow is causing a timeout, you may well have the space for it, but you are probably overwhelming your disk.
I would start monitoring the disks and see if you have an I/O bottleneck.
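One way to start (a sketch - the thresholds are up to you): look at cumulative I/O stalls per file with sys.dm_io_virtual_file_stats, alongside the usual PerfMon counters such as Avg. Disk sec/Read and Avg. Disk sec/Write:

-- Rough per-file latency since the last restart; high ms/IO suggests an I/O bottleneck
SELECT  DB_NAME(vfs.database_id) AS database_name,
        mf.physical_name,
        vfs.num_of_reads,
        vfs.num_of_writes,
        vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
        vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
    ON  mf.database_id = vfs.database_id
    AND mf.file_id = vfs.file_id
ORDER BY avg_write_ms DESC;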
April 15, 2010 at 8:56 am
Henry, I too have started thinking this could be due to an I/O bottleneck, since the database log file growth is not very important - it doesn't contain any user-related data.
Thanks.
April 15, 2010 at 8:58 am
Until you need to roll something back or recover to a point in time...
T-log data is just as important in many ways as 'data' data...
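For instance, a point-in-time restore only works if those log backups exist. A sketch, with placeholder names, paths and time:

-- Restore the last full backup without recovering, then roll the log forward
RESTORE DATABASE YourDB
    FROM DISK = N'X:\SQLBackups\YourDB_full.bak'
    WITH NORECOVERY;
RESTORE LOG YourDB
    FROM DISK = N'X:\SQLBackups\YourDB_log.trn'
    WITH STOPAT = '2010-04-15 08:30:00', RECOVERY;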