May 26, 2009 at 9:17 am
First day at new job, don't know much about how things are set up. On the database server, which uses SQL Server 2005, the hard drive is 95% full. I opened it up and saw that there are several databases, but one has a log file of over 113 GB, and that is the bulk of the problem.
These guys don't have a true DBA and I am a web developer, but I need to fix this. I know how I can limit a log's file size when setting up a new database, but what are my options in this case? I need to make that log a whole lot smaller, but obviously don't want to do any damage.
May 26, 2009 at 9:24 am
middletree (5/26/2009)
First day at new job, don't know much about how things are set up. On the database server, which uses SQL Server 2005, the hard drive is 95% full. I opened it up and saw that there are several databases, but one has a log file of over 113 GB, and that is the bulk of the problem. These guys don't have a true DBA and I am a web developer, but I need to fix this. I know how I can limit a log's file size when setting up a new database, but what are my options in this case? I need to make that log a whole lot smaller, but obviously don't want to do any damage.
I'd check what recovery model the database is using. I'd guess it is the Full recovery model. Then I'd check to see if they are running scheduled transaction log backups, and again my guess here is not.
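A quick way to check both (a sketch; the first query uses sys.databases, the second the backup history in msdb):

-- recovery model of every database
SELECT name, recovery_model_desc
FROM sys.databases;

-- most recent log backup per database, if any
SELECT database_name, MAX(backup_finish_date) AS last_log_backup
FROM msdb.dbo.backupset
WHERE type = 'L'   -- 'L' = transaction log backup
GROUP BY database_name;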
May 26, 2009 at 9:25 am
You need to shrink the log file:
DBCC SHRINKFILE('log_logicalname', 10000)
This command will shrink the log file to 10 GB; the size parameter is in MB, so give it whatever target you want.
If that is not working, try
BACKUP LOG DBNAME WITH TRUNCATE_ONLY
and then
DBCC SHRINKFILE('log_logicalname', 10000)
I suggest you do a full backup after you shrink the log file, since TRUNCATE_ONLY breaks the log backup chain.
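For that full backup, something like this (a sketch; the database name and path are placeholders):

BACKUP DATABASE YourDB
TO DISK = 'D:\Backups\YourDB_full.bak'
WITH INIT;  -- overwrite any existing file of that name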
May 26, 2009 at 9:35 am
The most probable cause of this is that the database is in full or bulk-logged recovery mode and the log has never been backed up (or not for a long time). Can you confirm this?
If this is the case, two options (a sketch of the whole sequence follows this list):
Run DBCC SQLPERF(LOGSPACE); this will tell you what percentage of the log file is actually used.
If you have the space, back up the transaction log. This will produce a backup file roughly equal in size to the used-space value returned above.
Else:
Set the recovery mode to simple. This will empty the log file at the next checkpoint (you could force one by issuing a CHECKPOINT command).
Set the recovery mode back to full and immediately take a full backup of the database (you could shrink the log file first, see below).
Then shrink the log file using DBCC SHRINKFILE to a sensible value in MB; failing any other information, go for 25% of the data file.
Then set up regular log backups of the database.
If the db is not in full or bulk-logged mode, get back to us.
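A sketch of that sequence, assuming a database called YourDB with a logical log file name of YourDB_log (check yours with sp_helpfile):

USE YourDB;

DBCC SQLPERF(LOGSPACE);   -- what % of each log is actually used

-- option 1: you have the disk space, so back the log up
BACKUP LOG YourDB TO DISK = 'D:\Backups\YourDB_log.trn';

-- option 2: no space, so empty the log via the SIMPLE recovery model
ALTER DATABASE YourDB SET RECOVERY SIMPLE;
CHECKPOINT;                                  -- force the log to clear
ALTER DATABASE YourDB SET RECOVERY FULL;

-- either way, shrink the log to a sensible size (value in MB)
DBCC SHRINKFILE('YourDB_log', 2048);

-- and take a full backup immediately to restart the log chain
BACKUP DATABASE YourDB TO DISK = 'D:\Backups\YourDB_full.bak';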
sorry cross post, honest! the article Lynn refers to is worth a read to give you a full understanding
May 26, 2009 at 9:36 am
If the log file is too large, and I would bet that is the issue, then you need to determine how large to make it. I would shrink it to the original size, and then you need to set up regular log backups.
You can do this with a maintenance plan and the wizard, but I'd do hourly backups, then set a removal schedule that matches the full backup schedule. Look for your largest log backup in those hourlies and make the log a bit larger to handle peaks.
Monitor it and see if you are maintaining a consistent log file size. You should be.
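Once the hourly log backups have been running for a while, a query along these lines (against the standard backup history tables in msdb; 'YourDB' is a placeholder) will show your biggest log backups, which is a reasonable guide for sizing the log:

SELECT TOP 10
       backup_finish_date,
       backup_size / 1048576.0 AS backup_size_mb   -- backup_size is in bytes
FROM msdb.dbo.backupset
WHERE type = 'L'
  AND database_name = 'YourDB'
ORDER BY backup_size DESC;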
May 26, 2009 at 9:39 am
Take a transaction log backup, apply a checkpoint, then shrink the log file:
DBCC SHRINKFILE (<logical_log_name>, <target_size_in_MB>)
+++BLADE+++
May 26, 2009 at 9:42 am
OK, thanks, both of you.
I have some follow-up questions, though. I just want to be very clear before I do something drastic.
1. If I run the command using Judy's suggested syntax, do I put the file name in there where she has "logical filename"? For example, do I type DBCC shrinkfile('***_Log.LDF',10000)? And is that the entire command that I need? (I have read BOL on this command, and don't understand all the options).
2. If I do this in the middle of the day, will it bring the DB down?
3. Generally speaking, what is the threshold of disk space that I want to stay under? That is, this one is 95% full, so I obviously need to shrink it. But what is too full? 50%? 70%?
May 26, 2009 at 9:51 am
Best re-read all the posts; the file won't shrink unless it's cleared out first. What does DBCC SQLPERF(LOGSPACE) say for this db?
Shrink the log to a sensible size rather than basing it on how much free space is left on disk.
The backup or shrink will give a performance hit (though not a bad one) but will not stop the database.
What will stop the database is running out of disk space.
May 26, 2009 at 9:56 am
george sibbald (5/26/2009)
Best re-read all the posts; the file won't shrink unless it's cleared out first. What does DBCC SQLPERF(LOGSPACE) say for this db?
Ok, thanks. I am still unclear on much of what is said here. In particular, the part about "the file won't shrink unless it's cleared out first."
To answer your question, here are my results when I ran sqlperf (I removed some database names for this post). The third from bottom is the culprit:
Database Name        Log Size (MB)   Log Space Used (%)   Status
master                  1.742188         33.63229           0
tempdb                  8.179688         44.26934           0
model                   6.742188         96.69756           0
msdb                    6021.555         92.23791           0
ReportServer            0.7421875        42.56579           0
ReportServerTempDB      0.7421875        59.40789           0
CitrixFarm              21.61719         74.69281           0
EDI                     555.5547         2.850122           0
***                     34731.62         92.03687           0
***_PROD                4322.242         0.7528509          0
***_REGULATORY          1.992188         17.67157           0
***                     111057.2         95.00172           0
distribution            31.67969         28.60358           0
webdata                 5.054688         25.6279            0
May 26, 2009 at 10:02 am
Helpful article, Lynn, but reading it makes me think we need to stay in full recovery mode.
May 26, 2009 at 10:05 am
***                     111057.2         95.00172           0
Your log space used % is 95%.
Try the truncate first, then shrink the log file.
To find the log's logical name, run sp_helpfile from within that database:
USE DBNAME; EXEC sp_helpfile;
May 26, 2009 at 10:08 am
1. If I run the command using Judy's suggested syntax, do I put the file name in there where she has "logical filename"? For example, do I type DBCC shrinkfile('***_Log.LDF',10000)? And is that the entire command that I need? (I have read BOL on this command, and don't understand all the options).
use "select * from sysfiles"...from that you will get log file name....You have to use that instead of logical file name.
+++BLADE+++
May 26, 2009 at 10:11 am
middletree (5/26/2009)
Helpful article, Lynn, but reading it makes me think we need to stay in full recovery mode.
If you need point in time recovery, then yes you do.
Now that you have read the article, I hope you can start looking at what needs to be done.
Run a BACKUP LOG ... to back up the current transaction log file.
After that, then yes, shrink the transaction log. I'd shrink it to about 1 GB but no smaller.
Then you can schedule periodic transaction log backups (BACKUP LOG ...) to manage the transaction log file.
Monitor the size of the transaction log over time and see if it is sized appropriately.
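Putting those steps together, roughly (database name, logical log name, and path are placeholders; get the logical name from sp_helpfile):

USE YourDB;

-- 1. back up the current transaction log
BACKUP LOG YourDB TO DISK = 'D:\Backups\YourDB_log.trn';

-- 2. shrink the log file to about 1 GB (size is in MB)
DBCC SHRINKFILE('YourDB_log', 1024);

Then schedule the BACKUP LOG step as a recurring SQL Agent job or maintenance plan task.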
May 26, 2009 at 10:20 am
I ran the SHRINK statement, but it only saved me 5 GB. I need to do the Truncate thing first.
I know this sounds weird, but I have spent the last few minutes looking for syntax for truncating the transaction log, and cannot find any. I have tried BOL and Google. Anyone have a sample code snippet?
May 26, 2009 at 10:23 am
middletree (5/26/2009)
I ran the SHRINK statement, but it only saved me 5 GB. I need to do the Truncate thing first. I know this sounds weird, but I have spent the last few minutes looking for syntax for truncating the transaction log, and cannot find any. I have tried BOL and Google. Anyone have a sample code snippet?
DON'T truncate the transaction log. You want to back up the transaction log (BACKUP LOG in Books Online).
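If a sample snippet helps, a minimal log backup looks like this (database name and path are placeholders):

BACKUP LOG YourDB
TO DISK = 'D:\Backups\YourDB_log.trn';

Once it completes, the inactive portion of the log becomes reusable and DBCC SHRINKFILE can actually reclaim the space.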