November 10, 2010 at 12:35 am
RD Francis (11/9/2010)
The problem I run into is that DBCC SHRINKFILE on log files doesn't seem to work in a particularly straightforward fashion. As often as not, I ask it to shrink the file down to 5GB of free space - and the file's still 30GB.
That's probably because the part of the log that's still in use is right at the end of the file--DBCC SHRINKFILE can only shrink a log back to the last used section, as far as I know. It should give you a message telling you what you have to do to allow it to shrink the file if it fails in this way, though.
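You can see this for yourself with DBCC LOGINFO, which lists one row per virtual log file (VLF). This is just a sketch; MyDatabase is a placeholder for your database name:

```sql
-- DBCC LOGINFO returns one row per VLF in the log file.
-- Status = 2 means that VLF is still in use; Status = 0 means reusable.
-- SHRINKFILE can only cut the file back to the last VLF with Status = 2,
-- so if the in-use VLFs sit at the end of the file, the shrink stalls there.
USE MyDatabase;
DBCC LOGINFO;
```

If the final rows show Status = 2, a log backup (or a checkpoint in SIMPLE recovery) is usually needed before the active portion wraps around and the shrink can proceed.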
November 10, 2010 at 2:42 am
Thanks for a great article. I don't typically read through 7 pages of comments but this one was worth it. I have to say, though, that some of the comments are out-and-out attacking. Why the hell would anyone contribute to a forum like this if we can't have an adult discussion about it? Post on Yahoo if you are into that.
I for one am appreciative that you wrote the article. I learned a lot from both the main piece and the comments.
November 10, 2010 at 3:46 am
I currently have a database with a 60GB mdf and an 80GB ldf. If I look at shrink file in SSMS it tells me that the available free space in the log file is 99%. Recovery model is simple and the db will not be backed up until the weekend using DPM, not SQL. I have some space to play with but not a great deal, especially if the log files of other databases, which I am no longer shrinking, start to grow as well. What, if anything, should I do?
Regards
George25
November 10, 2010 at 6:48 am
SQL-DBA (11/10/2010)
Thanks for a great article. I don't typically read through 7 pages of comments but this one was worth it. I have to say, though, that some of the comments are out-and-out attacking. Why the hell would anyone contribute to a forum like this if we can't have an adult discussion about it? Post on Yahoo if you are into that. I for one am appreciative that you wrote the article. I learned a lot from both the main piece and the comments.
Thanks, I appreciate the positive feedback, and also the fact that there was a lively discussion. The flame attacks, I try to ignore.
November 10, 2010 at 6:55 am
george25 (11/10/2010)
I currently have a database with a 60GB mdf and an 80GB ldf. If I look at shrink file in SSMS it tells me that the available free space in the log file is 99%. Recovery model is simple and the db will not be backed up until the weekend using DPM, not SQL. I have some space to play with but not a great deal, especially if the log files of other databases, which I am no longer shrinking, start to grow as well. What, if anything, should I do?
Regards
George25
You can shrink the log file, but the real question is "Why did it get to 80GB in the first place?". Shrinking the log file is not as problematic as shrinking the data file(s), but it does present its own issues. Gail and others, myself included, have expanded on that in this discussion topic; I suggest you scan back through those posts for details.
November 10, 2010 at 7:09 am
george25 (11/10/2010)
What, if anything, should I do?
Identify why it hit 80GB and what size it needs to be for normal operation.
Check the % used during peak usage, during any data loads, during and straight after index rebuilds. That will give you a fair idea of how large the log needs to be. Then, once you have that figure, consider whether or not to shrink and if so to what.
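One simple way to check the % used figure Gail mentions is DBCC SQLPERF, run at the times she lists (a sketch; capture the output into a table on a schedule if you want a history):

```sql
-- Reports log file size (MB) and % of log space currently in use
-- for every database on the instance. Run this during peak load,
-- during data loads, and right after index rebuilds, then size the
-- log to comfortably cover the highest usage you observe.
DBCC SQLPERF (LOGSPACE);
```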
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2010 at 8:28 am
In my case the log file reaches more than 300GB within 3 to 4 hours.
To shrink the log file I have scheduled a job which executes every hour.
Is there any better way to deal with such big log files?
Thanks, Satalaj
November 10, 2010 at 8:43 am
satalaj (11/10/2010)
Is there any better way to deal with such big log files?
Yes. Stop shrinking it and find out why it's getting that size. It may be that it needs to be that size for the activity on the database and the frequency of log backup.
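Two quick diagnostic checks along these lines, sketched below (MyDatabase is a placeholder name):

```sql
-- 1) What is preventing log space from being reused?
--    Common values: LOG_BACKUP (take log backups more often),
--    ACTIVE_TRANSACTION, REPLICATION, NOTHING.
SELECT name, recovery_model_desc, log_reuse_wait_desc
FROM sys.databases
WHERE name = 'MyDatabase';

-- 2) Is a long-running open transaction pinning the log?
DBCC OPENTRAN ('MyDatabase');
```

The log_reuse_wait_desc value usually points straight at the reason the log keeps growing.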
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2010 at 8:49 am
Sometimes even log files of databases using the simple recovery model can grow indefinitely, perhaps caused by a hanging replication. It can drive you mad if you try to shrink a log and nothing seems to work. You might want to check out this cause, especially because replication is often applied without fully understanding the impact on the database log and its growth.
The log file doesn't grow that much until your link to the distributor breaks down, possibly because someone did not remove replication correctly all the way. If you're stuck with an orphan publication, well ... good luck with it. Combined with a hard disk completely filled with a huge database log file, you are in big trouble. Most times when I ran into this situation, they called me only after things had already run completely out of hand. While I was not the one to blame, I was still the only one around able to clean up the mess.
By the way, sometimes you need to run SHRINKFILE twice on a log file, with a CHECKPOINT or a BACKUP LOG between the two runs. During the first run SHRINKFILE might not be able to empty the last VLF in the log file, which is necessary to reduce its size. I do agree with most DBAs on this forum that scheduled shrinks may only be justified on hosted servers when you are paying a lot for every GB. But every now and then there are good reasons to shrink a database file, and then you generally need all the (good) advice you can get.
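The two-pass shrink described above looks roughly like this (a sketch; the file logical name, database name, and backup path are placeholders):

```sql
-- First pass: may stop short if the active VLF sits at the end of the file.
DBCC SHRINKFILE (MyDatabase_log, 1024);  -- target size in MB

-- A log backup (FULL recovery) or CHECKPOINT (SIMPLE recovery) lets the
-- active portion of the log wrap back to the front of the file.
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn';

-- Second pass: now the tail VLFs are free and the shrink can succeed.
DBCC SHRINKFILE (MyDatabase_log, 1024);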
November 11, 2010 at 12:37 am
What can I do instead of shrinking big log files which grow rapidly?
November 11, 2010 at 6:07 am
After reading the article I tried running the recommended resource at the bottom of the page, pr_rebuildindexes.sql. After cutting and pasting this into SQL Query Analyzer, I receive numerous errors when I check the code before executing.
They would be:
Server: Msg 170, Level 15, State 1, Procedure pr_RebuildIndexes, Line 442
Line 442: Incorrect syntax near '('.
Server: Msg 170, Level 15, State 1, Procedure pr_RebuildIndexes, Line 661
Line 661: Incorrect syntax near 'TRY'.
Server: Msg 170, Level 15, State 1, Procedure pr_RebuildIndexes, Line 663
Line 663: Incorrect syntax near 'TRY'.
Server: Msg 195, Level 15, State 1, Procedure pr_RebuildIndexes, Line 668
'ERROR_NUMBER' is not a recognized function name.
Server: Msg 195, Level 15, State 1, Procedure pr_RebuildIndexes, Line 669
'ERROR_MESSAGE' is not a recognized function name.
Server: Msg 170, Level 15, State 1, Procedure pr_RebuildIndexes, Line 671
Line 671: Incorrect syntax near 'CATCH'.
Server: Msg 156, Level 15, State 1, Procedure pr_RebuildIndexes, Line 746
Incorrect syntax near the keyword 'END'.
Server: Msg 156, Level 15, State 1, Procedure pr_RebuildIndexes, Line 773
Incorrect syntax near the keyword 'END'.
Does anyone know if this procedure will run on SQL 2000? Has anyone debugged this procedure? Or the most likely scenario, am I doing something wrong (Simon highly recommends this procedure and uses it himself. That is what leads me to believe I am doing something wrong).
Any help would be appreciated.
Thank you,
pat
PS Let me give a little background. A customer complained that it is taking 4 seconds for a screen to close when it used to close immediately. The screen basically deletes 50 or so records and recreates those same records. I use a SQL INSERT command inside a VB loop to do this (I will probably change this technique). The tables are moderately sized (1.2 million records and 8 fields per record). I was puzzled by the sudden decrease in performance. I thought I would run the pr_Rebuildindex script to see if it helped. I decided to double check that I was using the Simple model. I was, but then I noticed that Autoshrink was checked. After reading this article, I realize I have a problem. I have unchecked AutoShrink and now have to figure out how to fix the damage. Obviously, I am a beginner. If I can't figure this out I will post in the regular forums so as not to hijack this thread.
November 11, 2010 at 6:10 am
Pat
TRY...CATCH isn't available in SQL Server 2000.
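On SQL 2000 the usual workaround is checking @@ERROR after each statement instead. A minimal sketch (the table name is hypothetical, and DBCC DBREINDEX stands in for the 2005-style ALTER INDEX ... REBUILD):

```sql
-- SQL 2000 has no TRY...CATCH, so test @@ERROR immediately after
-- each statement you want to guard. @@ERROR resets on every statement,
-- so capture it right away if you need it later.
DBCC DBREINDEX ('dbo.MyTable')
IF @@ERROR <> 0
BEGIN
    PRINT 'Index rebuild of dbo.MyTable failed'
    RETURN
END
```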
John
November 11, 2010 at 6:12 am
mpdillon (11/11/2010)
After reading the article I tried running the recommended resource at the bottom of the page, pr_rebuildindexes.sql. After cutting and pasting this into SQL Query Analyzer, I receive numerous errors when I check the code before executing....
pat
Sorry Pat, that routine was never written for SQL 2000; it's SQL 2005+ only. For SQL 2000 (which we are trying very hard to remove from our environment) we use standard Maintenance Plan index operations.
November 11, 2010 at 6:21 am
Thanks guys. I appreciate the explanation.
pat
November 11, 2010 at 10:01 am
satalaj (11/11/2010)
What can I do instead of shrinking big log files which grow rapidly?
Several things:
1) Confirm the recovery model and backup scheme for your database is appropriate. Are you using the FULL recovery model? If so, how often are you backing up the transaction logs?
2) Confirm the behavior: You said your log file gets to 300GB in 3 to 4 hours. Is that predictable? That is, does it grow to that size if you shrink before start of business on Monday, after close of business on Wednesday, and after close of business on Friday? Or does it grow to that size in a 3-4 hour period every Thursday, but is OK the rest of the week? When you shrink it, what does it shrink to?
3) Follow the links to Kim Tripp's articles on VLF files, and check the size/number of yours.
If there's a specific time of day, or day of the week, when the log file tends to grow very large, then what's happening at that time? As I stated in my post, I've got a DB where a 5GB log file is more than adequate for 6.75 days a week - but it grows beyond that (I've seen it top out over 35GB, when my data file is in the 65-80GB range). And that happens to be when I run a weekly optimization of the DB.
If it grows constantly (and you're in FULL recovery), look hard at those transaction log backups for a start. Are they being done regularly? If they're being done (say) every six hours, consider bumping them up to every hour - or every 15 minutes.
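A frequent log backup is just a scheduled statement like the sketch below (database name and path are placeholders; in practice you'd generate a unique file name per backup and run it from a SQL Agent job):

```sql
-- In FULL recovery, only log backups mark log space as reusable -
-- data backups alone do not. Running this every 15-60 minutes keeps
-- the active portion of the log small, so the file stops growing.
BACKUP LOG MyDatabase
TO DISK = 'D:\Backups\MyDatabase_log.trn';
```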
Consider, at least temporarily, adding some additional space to allow the log file to get even bigger. Does growth appear to be unlimited - or, maybe, does the file seem to top out at 400GB? Drive space for a production environment may not be cheap (I may be able to pick up a 1TB drive for $100, but that doesn't mean I'd hook it up to my production server and start putting log files on it, instead of our SAN storage solution), but this might be necessary to diagnose the problem and its impact.
Some reasons why a log may remain relatively full have been mentioned - very long running queries, or transactions that have been started but never committed or rolled back. If the file is consistently both large and low on free space, then something like this is probably going on.
The point many people have made in this thread is that shrinking a log file, only for it to almost immediately start growing again, is bad for a number of reasons: file fragmentation, oddly sized VLF files, time to grow the file and zero it out during business hours.
Now, I would argue there are cases where it's reasonable to regularly shrink the log file - my scenario is one (although I only shrink my log files because I back up my DBs and restore them to another server nightly - a server where a total of 60GB of unused space locked into these files seems wasteful, as the logs never grow significantly on those servers). But note that I *know* my log files won't be growing again (if shrunk to the proper size, a battle I'm still waging) for a week, and will grow during off hours (when it's not a problem). And I am thinking through changes that would let me control several things better, including VLF sizes in my main log.
R David Francis