January 26, 2010 at 8:24 am
I'd like some help determining what would be the best settings for my transaction log properties.
When our SQL Server 2000 SP4 Database was setup it was done by developers without detailed information regarding all of the possible settings so most settings were left at their defaults.
I have been looking at the issues around setting the transaction log file growth properties. The default is to grow automatically by 10 percent, with the maximum file size set to unrestricted growth. That is our current setup. Our transaction log is 19GB against a data file of roughly 190MB, so something needs addressing. Part of the reason for this growth has been some recent imports of data from other systems, but these were not so significant in and of themselves. Total bulk loads were maybe 10,000 rows. There were 3 or 4 bulk deletes and reimports of these records, but still not enough to cause such bloat of our ldf file...I think.
I found a job that reindexes 35 indexes every night regardless of their fragmentation. It issues a DBCC DBREINDEX command and uses a fill factor of 90 for every index. I'm thinking this may also be causing disk fragmentation, which may be contributing to the disk latency issues we have been having on reads and writes.
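For context, a rough sketch of what that job issues for each of the 35 indexes (the object names here are placeholders, not our real ones):

-- Nightly job today, roughly: blanket rebuild at fill factor 90 for
-- every index, regardless of fragmentation (placeholder object names).
DBCC DBREINDEX ('myDB.dbo.Timesheets', 'IX_Timesheets_EmployeeId', 90);

-- A smarter version would check fragmentation first, e.g.:
DBCC SHOWCONTIG ('dbo.Timesheets') WITH TABLERESULTS, ALL_INDEXES;
-- ...and only rebuild indexes where LogicalFragmentation is high.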
Clearly that will need to be rethought. Assuming that is straightened out and we're using intelligent reindexing, what factors should I look at to set an optimal file growth? Here are some details of my installation:
mdf = 194,112 KB (roughly 190 MB)
ldf = 19,132,224 KB (roughly 18.2 GB)
We are running an OLTP db with frequent updates and inserts. We are in the FULL recovery model with transaction log backups every 15 minutes. It's an employee intranet where we handle employee timesheets, leave requests, purchase orders...stuff that folks are partial to, ergo the frequent backups and the FULL recovery model. We would very much require the transaction logs in the event that a restore were required.
Our full backups are currently around 120MB, and they grow an average of 1MB every 4 days, or roughly 7.5MB a month.
Other options that are set: Auto Update Statistics, Torn Page Detection and Auto Create Statistics.
That is the information I have to go by. If there are other data points I could collect, please let me know 🙂 Is there a way to find out what the growth rate of the transaction log has been over time? That might be where to look first, huh?
I know frequent growth of the ldf can cause performance issues if it has to regrow frequently, so I don't want to simply shrink the log to its smallest possible size. But I do want to reclaim space that won't be needed for a while. I just could use some assistance as to what variables to consider.
We will be moving to SQL Server 2008 in a few months but I'd still like to get this part figured out before the upgrade.
Thank you very much!
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
January 26, 2010 at 1:30 pm
Are you sure you are performing transaction log backups every 15 minutes? With a transaction log file that large and a data file that small, it is hard to believe that you have anywhere near 19GB of transactions at any time.
Review the article I link to in my signature on how to maintain your transaction logs. I would bet that you are not actually backing up the transaction log, and that is why it is continually growing.
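A quick sanity check is to query the backup history that msdb keeps (assuming the history hasn't been purged). If the log really were being backed up every 15 minutes, you would see it here:

-- When was the last log backup for each database? (type 'L' = log)
SELECT database_name,
       MAX(backup_finish_date) AS last_log_backup
FROM msdb.dbo.backupset
WHERE type = 'L'
GROUP BY database_name
ORDER BY last_log_backup;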
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
January 26, 2010 at 2:29 pm
You're right...I checked and the transaction logs are actually backed up every hour, not every 15 minutes as I thought. Thanks!
I read the article and it was very good. It filled in some details that other, mostly MSDN, articles had left out. One thing that was especially helpful was:
"Large data modifications, such as data loads or index rebuilds should be taken into account when calculating a log file size."
The mathematical formula still eludes me, though, for how to determine the log size. If index rebuilds are contributing to the transaction log file, how does one go about using that observation in a meaningful way, as in a formula?
Thanks!
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
January 26, 2010 at 7:16 pm
There really isn't a formula for figuring this out, as there can be a lot of factors for how large a transaction log will need to be.
Simply put, the transaction log needs to be at least as big as your largest transaction. I just cannot believe that a database that is less than 1GB in size needs a 19GB transaction log. There is just no way you have transactions that large, unless you have some really horrible code that keeps transactions open for a very long time.
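If you want to see how much of that 19GB is ever actually in use, DBCC SQLPERF will show it:

-- One row per database: log size (MB) and percent of log space used.
DBCC SQLPERF (LOGSPACE);

If "Log Space Used (%)" for that database stays in the low single digits even right before a log backup, the file is simply oversized. And as a rough rule of thumb, a fully logged index rebuild generates about as much log as the size of the index being rebuilt, so nightly blanket rebuilds of every index can easily dominate your log usage.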
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
January 26, 2010 at 9:00 pm
lol... I think you are right about the bad code. No one at my shop is an expert in databases, and unfortunately when the issue of performance-hampering behaviors comes up, people respond with "oh, it doesn't really use that many resources...it's practically nothing." It drives me up a wall b/c that's the answer to everything. They open a terminal server session to our db server, open Enterprise Manager on there, do a SELECT * FROM some table, and then let it sit open for days. "Oh, it's not that bad." I finally got our admin to boot people off terminal services after 30 minutes. SELECT DISTINCTs and cursors are rampant. "It's not that bad." Until recently they allowed non-technical users full access (with delete and update permissions) to our production database to run queries...using Access. "Oh, I trust them...they haven't messed up yet."
How this got so huge I think has something to do with running the DBREINDEX every single night, in addition to crap code. I can probably do a test on that theory.
I would just like to shrink the log to a level that won't screw things up further. I ran DBCC LOGINFO on the log and the status of every VLF was zero. I didn't see any 2s in there, which I guess means none of the VLFs are currently in use. If I remember correctly, there were over 500 rows returned by that DBCC LOGINFO command! On my local machine I ran DBCC SHRINKFILE ('mydb', 110) thinking that I would end up with a 110MB transaction log file, but it didn't reduce it anywhere near that. I thought that would provide a year's worth of growth...but that was prior to learning that these nightly reindex commands, and god knows what else, will only cause it to grow more. Do you know if there is a stored procedure I can run to view the growth of the transaction logs over time...like the one that shows the growth of the backups over time?
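For anyone following along later: one thing I realized afterwards is that DBCC SHRINKFILE wants the logical file name (which sp_helpfile shows), not the database name, and that may be why my attempt with 'mydb' didn't do what I expected. What I believe the corrected sequence looks like (the log's logical name here is a guess):

-- Find the logical file names for the current database; the log file
-- is often named <dbname>_log, but not always.
EXEC sp_helpfile;

-- One row per VLF; status 2 = in use, 0 = reusable.
DBCC LOGINFO;

-- Shrink the LOG file to a 110MB target -- this takes the logical
-- file name from sp_helpfile, not the database name.
DBCC SHRINKFILE ('mydb_log', 110);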
Thanks for the help with this!
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
January 26, 2010 at 9:21 pm
There really isn't anything that will show the growth over time, but you can look at the backup history of the transaction log and see at what times the largest backups occurred.
I would definitely recommend shrinking the log, then growing it back to an appropriate size. You also have way too many VLFs, which will cause issues of its own. Search on http://www.sqlskills.com for an article by Kimberly Tripp about transaction logs and the number of VLFs. She goes into great detail on this subject.
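The basic shape of it, as a sketch (the logical file name and target size here are examples, not your actual values):

USE myDB;
GO
-- Shrink the log file as small as it will currently go (target in MB;
-- the logical name is an example -- sp_helpfile shows the real one).
DBCC SHRINKFILE ('myDB_log', 1);
GO
-- Grow it back to the target size in one step, so the log is rebuilt
-- with a small, predictable number of VLFs.
ALTER DATABASE myDB
MODIFY FILE (NAME = 'myDB_log', SIZE = 200MB);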
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
January 27, 2010 at 7:18 am
Thanks for the heads up about that site. I wasn't able to find the specific article you suggested, as that site's search function isn't very good and the article titles didn't clue me in for that author. I did find one by Paul Randal called Understanding SQL Server Backups that helps.
You said I should shrink the log and then grow it back to "an appropriate size". It is precisely this appropriate size that I'm not sure how to determine. I am using this script to get details on the backup history:
USE msdb;

SELECT bs.backup_set_id,
       -- DateTimeFormat is a local UDF in our database, not a built-in.
       myDB.dbo.DateTimeFormat(bs.backup_start_date, 'mmm dd yyyy hh:mm AM/PM') AS BackUpDate,
       bs.name AS BackupName,
       CASE bs.type
           WHEN 'D' THEN 'Full'
           WHEN 'I' THEN 'Differential'
           WHEN 'L' THEN 'Transaction Log'
       END AS BackUpType,
       bmf.physical_device_name,
       bs.user_name,
       bs.database_name,
       bs.server_name,
       bs.software_build_version,
       -- backup_size is in bytes; convert to MB.
       CAST(bs.backup_size / 1000000 AS VARCHAR(14)) + ' MB' AS bkSize,
       CAST(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date) AS VARCHAR(4)) + ' Seconds' AS TimeTaken
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
    ON bs.media_set_id = bmf.media_set_id;
From this I may be able to get close to an appropriate size by, hopefully, finding a transaction log backup prior to whatever caused the bloat and using that as my starting point. Just a hunch at this point, but it seems to make sense.
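To make that easier, I also ran a trimmed-down version that looks only at the log backups, biggest first, to spot when the bloat started (the database name is ours; adjust as needed):

-- Just the log backups for our database, largest first.
SELECT backup_start_date,
       CAST(backup_size / 1000000 AS VARCHAR(14)) + ' MB' AS bkSize
FROM msdb.dbo.backupset
WHERE type = 'L'
  AND database_name = 'myDB'
ORDER BY backup_size DESC;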
I very much appreciate the feedback I've gotten on this. It's been very helpful and encouraging. I'll post followups in case anyone else stumbles upon this down the road.
Sometime later...

The above script was very useful to me. The very first transaction log backup that appears in msdb was recorded on 11/17/2009; the backup size was 18,248.55 MB and it took almost 8 minutes to complete. The DB had been operational for about 2 years prior to that initial backup. When I look at the actual directory on the db server via File Explorer, I see .bak files that precede this date, but given some of the names it looks like these were full backups done manually due to a code rollout. For example: "myDB_20081126BeforeExpenseReports.bak."
Subsequent transaction log backups are around 0.5 MB, taken once an hour. There is a job that runs at 4AM that results in an 87MB log backup.
So I am thinking, correct me if I'm wrong, that I could shrink the file and then resize it to 87MB PLUS what I think would be the growth of the transaction log over the course of a year, to prevent growth problems. I'm not sure how to determine the new size, but what if I base it on the changes in that large transaction log backup that runs at 4AM each morning? That file grows about 1MB every 7 days, or about 52MB over a year. So if I add 52MB to the size the ldf is shrunk to (the current size with no unused VLFs), roughly 87MB + 52MB = 139MB, I should be able to appropriately size my ldf file.
What do you think? Am I completely missing the boat?
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;
January 27, 2010 at 8:18 am
Here is the article I was referring to: http://www.sqlskills.com/BLOGS/KIMBERLY/post/Transaction-Log-VLFs-too-many-or-too-few.aspx
The sizing at 8GB probably is not relevant for you, but the information on how the VLFs are created will probably help you decide how to actually build the transaction log.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
January 27, 2010 at 9:52 am
That's a great article! And the one prior to it specifies the steps required to do the work. Thanks for pointing that out!
Even as a mother protects with her life
Her child, her only child,
So with a boundless heart
Should one cherish all living beings;