June 25, 2007 at 6:14 am
June 25, 2007 at 7:11 am
When you say that you do a transaction log backup every 3 hours, I assume the database is somewhat critical and is an OLTP database. Remember that when a database uses the FULL recovery model, all transactions are logged and can be used to do a point-in-time restore.
One thing I would say is: before executing DBCC SHRINKFILE against the database, first check whether there is really any free space in the log file using DBCC SQLPERF(LOGSPACE).
If there is a large amount of free space available, you can shrink the file; otherwise, do not.
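As a quick sketch of that check (the database and log file names here are just placeholders):

```sql
-- Report log size and "Log Space Used (%)" for every database on the instance
DBCC SQLPERF(LOGSPACE);

-- Only shrink if the percent used is low. For a database named YourDB
-- with a logical log file name of YourDB_Log (placeholder names):
USE YourDB;
DBCC SHRINKFILE (N'YourDB_Log');
```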
Cheers,
Sugeshkumar Rajendran
SQL Server MVP
http://sugeshkr.blogspot.com
June 25, 2007 at 7:35 am
This is our J.D. Edwards ERP system core database and it is mission critical, my management would have me back up the logs every five minutes.
I do not want to manually check the log file for available space, I simply want to automate a procedure to reclaim unused disk space.
The gist of my question is whether the DBCC command I want to use is 'safe', i.e. does it only remove space consumed by inactive entries in the log file which have already been backed up.
Should I specify a target size for the shrink so that I don't automatically trigger an auto grow of the log file?
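What I have in mind is something like the following, where the target size keeps some working room so normal activity doesn't immediately trigger an autogrow (database name, file name, and size are illustrative only):

```sql
USE YourDB;
-- Shrink the log back down to roughly 15 GB (target size is in MB)
-- rather than as small as possible, leaving headroom for normal activity
DBCC SHRINKFILE (N'YourDB_Log', 15000);
```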
June 25, 2007 at 8:08 am
I hope you are using a maintenance plan to back up the databases. If that's the case, you have the option to remove unused space from the database on the "Update data optimization information" page (Optimizations). You can use that so it automatically removes unused space if there is any.
Cheers,
Sugeshkumar Rajendran
SQL Server MVP
http://sugeshkr.blogspot.com
June 25, 2007 at 1:38 pm
It would be a good idea to back up the logs much more often to prevent file growth. Why not do it every 5 minutes, if that is what your management would like anyway? Three hours between log backups is far too long for a critical database.
June 25, 2007 at 2:51 pm
If it's mission critical as you say, why not just buy some more hard disk space? That's a lot cheaper than losing data...
--Jeff Moden
Change is inevitable... Change for the better is not.
June 25, 2007 at 4:39 pm
Clearly, this illustrates an issue with attempting to address an issue via a forum.
I am getting lots of opinions but nobody has directly answered the original question.
June 25, 2007 at 7:01 pm
In that case, the answer to your original question is "YES".
I suppose I shouldn't mention it because it is outside the original question, but shrinking the file and having it regrow will continue to fragment the file, which will lead to performance problems. That is, unless you have a system that constantly defrags files for you...
--Jeff Moden
Change is inevitable... Change for the better is not.
June 25, 2007 at 9:22 pm
I would be doing transaction log backups more frequently rather than continually shrinking the log file.
If the log file is continually growing, then obviously it isn't large enough to hold all the transactions performed in the time between backups. More frequent backups will clear out completed transactions more regularly, so you should incur fewer growths.
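For example, a SQL Agent job step scheduled every few minutes could run something along these lines (the database name and backup path are placeholders):

```sql
-- Append a log backup to the backup set; schedule this step via
-- SQL Server Agent every 5-15 minutes. Names/paths are illustrative.
BACKUP LOG YourDB
TO DISK = N'E:\Backups\YourDB_Log.trn'
WITH NOINIT, NAME = N'YourDB log backup';
```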
--------------------
Colt 45 - the original point and click interface
June 26, 2007 at 4:45 am
Clearly, this illustrates the problem with trying to help inexperienced people on a forum who have no desire to listen when you try to suggest a better way.
June 26, 2007 at 6:03 am
A further description of my situation based on feedback received:
The database in question uses about 75 GB. The log file typically uses about 10-15 GB (on another drive), with a TX log backup every 3 hours.
Occasionally, depending on the type of application activity and, I believe, system maintenance such as reindexing, the log file grows to 50-60 GB.
I am not trying to nickel-and-dime disk space, but when the log file gets really large it is simply wasting disk space. The log file is on a 146 GB drive, but the drive also holds the log files for two other databases.
I really don't need to shrink the log very often, but I would like to schedule the process so that I don't forget to do it.
My core question was: Would the DBCC SHRINKFILE (N'LOGFILENAME_Log') command eliminate inactive entries in the log file (which have already been backed up) without invalidating the log sequence?
June 26, 2007 at 2:21 pm
Yes, it's safe to run. SQL Server won't remove log records that have not been backed up (in FULL recovery mode).
But if you back up the t-log more frequently, it won't need to grow as much in the first place.
I reindex during times of no activity (Sunday afternoon) and set the database to SIMPLE recovery so that the t-log does not grow, then set it back to FULL when reindexing is complete. I have it all in one job and fire it off.
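A sketch of that job (all names are placeholders, and note the backup at the end, since leaving FULL recovery breaks the log backup chain):

```sql
-- Switch to SIMPLE so the log is truncated on checkpoint during reindexing
ALTER DATABASE YourDB SET RECOVERY SIMPLE;

-- ... reindexing work goes here (e.g. DBCC DBREINDEX per table) ...

-- Restore FULL recovery when done
ALTER DATABASE YourDB SET RECOVERY FULL;

-- Switching out of FULL breaks the log backup chain, so take a full
-- (or differential) backup before resuming scheduled log backups
BACKUP DATABASE YourDB TO DISK = N'E:\Backups\YourDB_Full.bak';
```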
June 26, 2007 at 5:45 pm
Like I said earlier to your core question... "Yes".
It's just a bad idea to shrink files that are destined to grow again because of file fragmentation. You must first fix the growth problem (more frequent backups, whatever) before you institute a regular shrink-file proc.
--Jeff Moden
Change is inevitable... Change for the better is not.
June 26, 2007 at 7:53 pm
I thank everyone for their input. It is apparent I need to re-think the process and come up with a long-term solution.