February 22, 2007 at 10:00 am
Guys,
I've been asked to properly schedule some jobs on our system for a database that uses the full recovery model. I realized that the order of some of the jobs might matter and came up with the solution described below.
Please let me know if this ordering makes sense and/or if you have any general comments/suggestions (a rough T-SQL sketch of the sequence follows the list):
1. Back up all DBs [in case defragmentation messes something up]
2. Defragmentation [this will grow the transaction log]
3. Backup transaction log [backs up trans log and truncates inactive portion of it]
4. Shrink log file
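For concreteness, here's a minimal T-SQL sketch of that sequence. The database, table, index, and path names are placeholders, not our actual ones:

    -- 1. Full backup first, as a safety net before defragmentation
    BACKUP DATABASE MyDB TO DISK = 'X:\Backups\MyDB_full.bak'

    -- 2. Defragment an index (this is the step that grows the transaction log)
    DBCC INDEXDEFRAG (MyDB, 'dbo.SomeBigTable', 'IX_SomeBigTable')

    -- 3. Back up the log, which truncates the inactive portion of it
    BACKUP LOG MyDB TO DISK = 'X:\Backups\MyDB_log.trn'

    -- 4. Shrink the log file back down (target size is in MB)
    DBCC SHRINKFILE (MyDB_log, 2000)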
Also, I've previously been advised not to shrink the log file unless absolutely necessary, as it is resource intensive. In my experience, however, this operation does not seem to be resource intensive at all: it took one second to shrink the transaction log file from 25GB to 2GB.
Is it the regrowth of the log file that affects performance?
Also, if I do shrink the transaction log file, is there a suggested value to shrink it to, possibly as a function of the database file size?
Thanks a lot
February 22, 2007 at 10:18 am
Shrinking and growing the log file increases the chance of file fragmentation on the drive, which can degrade I/O performance as the disk(s) have to seek more to locate the next piece of the file. In my experience, it's better to take a fresh partition, allocate a log file large enough to cover your needs (this can be hard to predict), and take regular log backups.
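If you go that route, pre-sizing the log is a one-liner. A sketch, assuming the database is MyDB and its logical log file name is MyDB_log:

    -- Pre-allocate the log once, instead of letting it grow in small increments
    ALTER DATABASE MyDB MODIFY FILE (NAME = MyDB_log, SIZE = 25GB)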
February 23, 2007 at 12:33 pm
Questions / comments
1) How big is your DB? We are running 40 DBs or so on a RAID 5 server, about 15GB total (not big). The server has *never* been defragmented. Unless you've got really, really big DBs, you can put defragmentation at the bottom of the things-to-do-to-improve-performance list. Most people will see a bigger return from re-indexing (especially if you do lots of inserts into tables with clustered indices).
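For example, a rebuild looks something like this (the table name is a placeholder; note that, unlike INDEXDEFRAG, this takes locks on the table):

    -- Rebuild all indexes on the table with a 90 percent fill factor
    DBCC DBREINDEX ('dbo.SomeBigTable', '', 90)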
2) Shrinking the log file: if you shrink it every night and the next day it's back to 25GB, then the appropriate size for your log is about 25GB.
As for the appropriate size of the log file: people will tell you 25%-50% of the data size, but the method above will get you a lot closer to your particular reality. Shrink it to 2GB each night and record the size at the end of the next day; the average + 10% is probably where the log should be.
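One way to record that size each night, assuming you schedule something like the following as a job (the history table is just my guess at a layout):

    -- History table for nightly log-size snapshots
    CREATE TABLE LogSizeHistory (
        CapturedAt   datetime NOT NULL DEFAULT GETDATE(),
        DatabaseName sysname  NOT NULL,
        LogSizeMB    float    NOT NULL,
        LogSpaceUsed float    NOT NULL,  -- percent of the log currently in use
        Status       int      NOT NULL
    )

    -- DBCC SQLPERF(LOGSPACE) returns one row per database; capture it with INSERT...EXEC
    INSERT INTO LogSizeHistory (DatabaseName, LogSizeMB, LogSpaceUsed, Status)
    EXEC ('DBCC SQLPERF(LOGSPACE)')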
You are correct about the expansion of the log file being expensive; the worst part is that expansion usually takes place at the worst possible time: when there's a user waiting. (Few notice a fast DB app; *everyone* notices a slow one.)
HTH,
Paul
February 23, 2007 at 12:35 pm
I have seen queries time out because the log file could not grow fast enough.
In general, you should allow your log file to grow to accommodate the busiest day and do regular backups to keep the file from filling.
February 23, 2007 at 1:49 pm
Schleep,
1. Our DB is 9GB, but the transaction log file was 25GB before I shrank it. By defragmentation I meant running "DBCC INDEXDEFRAG". We use it instead of DBCC DBREINDEX so that locks aren't placed on the tables.
2. After INDEXDEFRAG is when it grows the most, I believe. Should I take the transaction log file size after INDEXDEFRAG as the normal one? I'd check it with something like this:
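    -- Current size of each file in this database, in MB
    -- (sysfiles reports size in 8KB pages, hence the divide by 128)
    SELECT name, size / 128.0 AS SizeMB
    FROM sysfiles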
Thanks guys
February 23, 2007 at 1:54 pm
Yes, since I presume you don't want table locks because there is no "down time" in which to do the index maintenance.
In that case, you don't want the slowdown from the INDEXDEFRAG compounded by the slowdown from log file growth.
P
February 26, 2007 at 9:27 am
I may be corrected on this, but the reason your log shrank from 25GB to 2GB in one second is that there was less than 2GB of transactions in your log. Technically, all that happened is the file pointer was moved; the bits are still in the same place, just now marked as free space.
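If you're curious, you can see this for yourself with DBCC LOGINFO (undocumented, so use at your own risk): it lists the virtual log files inside the log, and a log shrink just drops inactive ones from the tail end rather than moving any data:

    -- List the virtual log files in the current database's log;
    -- Status = 2 marks the ones still active
    DBCC LOGINFO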
Why are you shrinking the translog from 25GB to 2GB? Did you need the space? Why waste the I/O of growing the log again when you already had it at 25GB?
I honestly don't see the need to shrink/grow databases. It seems like a band-aid fix for a DBA who isn't paying attention to his database. I make sure to turn that option off on all of my databases and just keep a better watch on them.
I also default my databases to 30GB data and 5GB log. That way the backup can be zipped and burned to a single-layer DVD; it's a cheap and easy way to keep a history of backups.
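A sketch of what those defaults look like at creation time; names and paths are placeholders:

    -- New database pre-sized to 30GB data / 5GB log
    CREATE DATABASE MyDB
    ON (NAME = MyDB_data, FILENAME = 'X:\Data\MyDB.mdf', SIZE = 30GB)
    LOG ON (NAME = MyDB_log, FILENAME = 'X:\Data\MyDB.ldf', SIZE = 5GB)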