May 6, 2011 at 11:53 pm
Hi,
What is the best practice for setting Autogrowth on the transaction log (in % or in MB)? Is there any rule of thumb for setting the Autogrowth option?
Regards,
Nithin
May 7, 2011 at 3:00 am
Your autogrow increment must be large enough to avoid performance penalties. The exact value to use in your configuration and the choice between a percentage growth and a fixed MB growth depend on many factors in your environment. A general rule of thumb you can use for testing is to set your autogrow increment to about one-eighth the size of the file.
For further information, you can visit
http://support.microsoft.com/kb/315512
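As a rough sketch, a fixed-MB growth increment can be set with ALTER DATABASE (MyDB and MyDB_log are hypothetical names; substitute your database and logical log file name):

-- Set a fixed 512 MB autogrow increment on the log file
-- (MyDB / MyDB_log are placeholder names)
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILEGROWTH = 512MB);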
Regards,
Syed Jahanzaib Bin Hassan
BSCS | MCTS | MCITP | OCA | OCP | OCE | SCJP | IBMCDBA
My Blog
www.aureus-salah.com
May 7, 2011 at 3:45 am
The general recommendation is to choose a fixed MB growth, as a percentage growth can give you nasty surprises with larger files.
Autogrow should not be relied on; it should be enabled just in case something unexpected happens, but the DBAs should be monitoring the file size and pre-growing it as necessary. Choosing a sensible initial size and pre-allocating the log (and data files too) means that you get a controlled, chosen number of VLFs and you don't get unexpected performance degradation if the files grow during busy periods.
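For example, pre-growing the log to a chosen size is a single ALTER DATABASE (a sketch; MyDB and MyDB_log are hypothetical names, and the size should suit your workload):

-- Pre-grow the log to 8 GB in one controlled operation
-- instead of relying on many small autogrows
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, SIZE = 8192MB);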
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 7, 2011 at 4:01 am
Thanks a lot for the valuable suggestion! :-)
May 10, 2011 at 1:06 pm
Curious about this topic. Say you have a 40 GB DB file. Would it ever be realistic to have, say, a 20 GB log file? I was always under the impression that your log file shouldn't grow to more than, say, 20 to 25% of the size of your database file. In fact, I've heard 10% should be a good goal. Is that completely incorrect?
May 10, 2011 at 2:14 pm
JoeS 3024 (5/10/2011)
Curious about this topic. Say you have a 40GB DB file. Would it ever be realistic to have say a 20GB log file?
Sure. It depends on what you're doing in that DB. There's no single optimal log size based on the data file size. A TB-sized database could have a log under 10 GB if it only ran small transactions and had frequent log backups.
A 10GB db that does index rebuilds in full recovery in one log backup interval will likely need a log file larger than the data file.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 10, 2011 at 3:27 pm
you really should read this:
http://sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
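On 2008-era builds you can check your VLF count with DBCC LOGINFO, which returns one row per VLF (a sketch; MyDB is a placeholder database name):

USE MyDB;
DBCC LOGINFO;  -- one row per VLF, so the row count is the VLF count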
---------------------------------------------------------------------
May 10, 2011 at 3:39 pm
Kimberly's advice on the topic is to go for 8GB growths to keep the VLFs down if you have a large log.
What is your log size now? And is it growing (look at log backups)?
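A quick way to see the current log size and usage for every database on the instance (a sketch):

-- Reports log size (MB) and log space used (%) per database
DBCC SQLPERF(LOGSPACE);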
May 11, 2011 at 1:28 am
On the 7th, the data file size was 12 GB and the log size rocketed from 4 MB to 2.7 GB. I detached the database, moved the log file, and attached the database with only the mdf file. Right now the size of the log file is 100 MB.
May 11, 2011 at 2:02 am
kr.nithin (5/11/2011)
On the 7th, the data file size was 12 GB and the log size rocketed from 4 MB to 2.7 GB. I detached the database, moved the log file, and attached the database with only the mdf file.
You're lucky. That could easily have resulted in a recovery_pending, inaccessible database. The log is not an optional file; you can't just delete it and assume SQL will always recreate it without problems. It won't.
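For what it's worth, a sketch of the supported way to relocate a log file, without detaching or deleting anything (the names and path are hypothetical; the database is briefly offline):

-- Point SQL Server at the log's new location (metadata change only)
ALTER DATABASE MyDB
MODIFY FILE (NAME = MyDB_log, FILENAME = 'E:\SQLLogs\MyDB_log.ldf');
ALTER DATABASE MyDB SET OFFLINE;
-- move the physical .ldf file to E:\SQLLogs\ at the OS level, then:
ALTER DATABASE MyDB SET ONLINE;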
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 11, 2011 at 4:54 am
are you running transaction log backups?
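If not, a minimal log backup sketch (assuming the database is in full recovery; MyDB and the path are placeholders, and in practice this runs on a schedule):

-- Back up the log so the inactive portion can be reused
BACKUP LOG MyDB
TO DISK = 'E:\Backups\MyDB_log.trn';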
---------------------------------------------------------------------
May 11, 2011 at 5:27 am
I have another issue.
We wanted to copy the database from our production server to our test server for testing purposes. The issue is that the test server runs SQL Server 2008 R2 Express Edition and the database is 12 GB, so we can't restore it there: the maximum database size Express supports is 10 GB.
I took a backup of the database and then updated the columns of a particular table with '0'. That table alone was 7.5 GB, since it stores the message bodies from mails, so I updated the message body column with '0'. When I checked the size of the table after the update, it had come down from 7.5 GB to 266 MB. But when I checked the size of the database, it was still 12 GB. What could be the reason for the database size remaining the same?
May 11, 2011 at 5:41 am
If you want a database file's size to reduce, you need to run a shrink operation. Shrinking is generally not recommended, so don't do it on the production server; keep it for cases like this one.
You'll likely first need to rebuild the clustered index of the table that you updated.
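A sketch of that rebuild, with hypothetical table names (rebuilding the clustered index rewrites the rows compactly, so the space freed by the update can actually be released):

-- Rebuild the indexes on the table whose column was emptied
ALTER INDEX ALL ON dbo.Messages REBUILD;  -- dbo.Messages is a placeholder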
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
May 11, 2011 at 5:44 am
You need to run DBCC SHRINKFILE. Try the TRUNCATEONLY option first. If that gives you no space back, specify a target size less than 10 GB, but one that still leaves reasonable free space in the database.
If the TRUNCATEONLY option did not work, the shrink will leave your database severely fragmented, so you will need to reindex it.
That will likely grow your log file again...
So your real resolution is to get a test environment that suits your requirements.
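Putting those steps together (a sketch; MyDB, the logical file name, and the table are placeholders, and the target size is in MB):

USE MyDB;
-- Step 1: release unused space at the end of the file, no page movement
DBCC SHRINKFILE (MyDB_data, TRUNCATEONLY);
-- Step 2 (only if step 1 freed nothing): shrink to ~9 GB, leaving headroom
DBCC SHRINKFILE (MyDB_data, 9216);
-- Step 3: a shrink fragments indexes, so rebuild them afterwards
ALTER INDEX ALL ON dbo.Messages REBUILD;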
---------------------------------------------------------------------
May 11, 2011 at 5:46 am
Yes, I back up the transaction log every hour.