August 31, 2010 at 6:55 am
Ramji29 (8/31/2010)
Matt-1034261 (8/31/2010)
Ramji29 (8/31/2010)
GilaMonster (8/31/2010)
DB I used to work with had a 250GB log file. Make sure that you're maintaining it correctly and don't worry too much.
Thanks Gail for the correction... I missed the dot (.)... 🙂
Matt, if you're not taking log backups then it would be good to change the recovery model to simple and size the log file to around 5-6 GB... By the way, is this a prod db?
Rohit
No, we're not doing log backups, but we do take backups of the entire disk.
Yes, this is a prod db
Then the best approach would be to switch to simple, shrink the log file to its minimum, and then size it back up to 5-6 GB (this will reduce the VLF count caused by auto-growth). Make sure you've taken a full db backup before doing anything.
Use the command below to take a db backup:
USE master
GO
-- Full database backup; replace the placeholders with your database name and a backup path
BACKUP DATABASE <Your DB Name> TO DISK = '<Any local path>'
GO
Once the backup file is in place, do the above exercise... it would be good if you have a maintenance window for this. Then schedule your full backups on a regular basis and you're done.
Rohit
Wait, wait, wait!
If it's a production server, putting it into Simple could be a huge problem. What's the recovery SLA? If they're expected to be able to get back to a point in time, putting the production system into Simple will kill this ability. I would not recommend that, at all.
Get log backups in place so that you're backing up the log regularly. After a log backup you can checkpoint and then run another log backup. From there, as appropriate, you can shrink the file. But even there, I'd be conservative. Don't over-shrink it. File shrinking can lead to serious fragmentation, so it's to be approached cautiously.
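For reference, a rough T-SQL sketch of that sequence (the database name, logical log file name, backup paths, and the 5120 MB target are all placeholders to adjust for the real environment):
USE MyDatabase
GO
-- Back up the log so the inactive portion can be marked reusable
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log_1.trn'
GO
-- Checkpoint, then take another log backup to free up more of the file
CHECKPOINT
GO
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log_2.trn'
GO
-- Shrink conservatively to a sensible target size in MB, not to the minimum
DBCC SHRINKFILE (N'MyDatabase_log', 5120)
GO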
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 31, 2010 at 7:02 am
Grant Fritchey (8/31/2010)
Wait, wait, wait!
If it's a production server, putting it into Simple could be a huge problem. What's the recovery SLA? If they're expected to be able to get back to a point in time, putting the production system into Simple will kill this ability. I would not recommend that, at all.
Get log backups in place so that you're backing up the log regularly. After a log backup you can checkpoint and then run another log backup. From there, as appropriate, you can shrink the file. But even there, I'd be conservative. Don't over-shrink it. File shrinking can lead to serious fragmentation, so it's to be approached cautiously.
Okay thanks Grant, I'll give that a go.
August 31, 2010 at 7:03 am
Grant Fritchey (8/31/2010)
Ramji29 (8/31/2010)
Matt-1034261 (8/31/2010)
Ramji29 (8/31/2010)
GilaMonster (8/31/2010)
DB I used to work with had a 250GB log file. Make sure that you're maintaining it correctly and don't worry too much.
Thanks Gail for the correction... I missed the dot (.)... 🙂
Matt, if you're not taking log backups then it would be good to change the recovery model to simple and size the log file to around 5-6 GB... By the way, is this a prod db?
Rohit
No, we're not doing log backups, but we do take backups of the entire disk.
Yes, this is a prod db
Then the best approach would be to switch to simple, shrink the log file to its minimum, and then size it back up to 5-6 GB (this will reduce the VLF count caused by auto-growth). Make sure you've taken a full db backup before doing anything.
Use the command below to take a db backup:
USE master
GO
-- Full database backup; replace the placeholders with your database name and a backup path
BACKUP DATABASE <Your DB Name> TO DISK = '<Any local path>'
GO
Once the backup file is in place, do the above exercise... it would be good if you have a maintenance window for this. Then schedule your full backups on a regular basis and you're done.
Rohit
Wait, wait, wait!
If it's a production server, putting it into Simple could be a huge problem. What's the recovery SLA? If they're expected to be able to get back to a point in time, putting the production system into Simple will kill this ability. I would not recommend that, at all.
Get log backups in place so that you're backing up the log regularly. After a log backup you can checkpoint and then run another log backup. From there, as appropriate, you can shrink the file. But even there, I'd be conservative. Don't over-shrink it. File shrinking can lead to serious fragmentation, so it's to be approached cautiously.
I totally agree with Grant, but in the current scenario they're not taking log backups, hence they won't be able to do a point-in-time recovery if the system goes down. It's true that the best approach would be to have log backups in place, but I've seen some scenarios (db size less than 50 GB) where the db is in simple recovery with nightly full backups. So if you're not planning to have log backups in place, then I think switching to simple shouldn't be a bad idea.
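If, and only if, the business signs off on losing up to a full day of data, something like this is all that setup involves (the database name and backup path are just placeholders):
-- Only with business sign-off on up to a day's data loss
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
-- Nightly full backup, scheduled through SQL Agent or a maintenance plan
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak' WITH INIT
GO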
I hope this makes sense...
Thanks,
Rohit
August 31, 2010 at 7:15 am
Ramji29 (8/31/2010)
Then the best approach would be to switch to simple, shrink the log file to its minimum, and then size it back up to 5-6 GB (this will reduce the VLF count caused by auto-growth).
You can't make that recommendation, either about the best size of the log or about the best recovery model.
To be able to make a recommendation about recovery model, we need to know what the allowable data loss is. Sure, at the moment they're not running any useful backups, but that doesn't mean that complete loss of the entire database is acceptable.
To make a recommendation about the size of the log we need to know about the activity on the database, the frequency of log backups (if in full recovery), the frequency of index rebuilds (and how they're done), etc.
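As a starting point for gathering that kind of information, current log usage and the VLF count are easy to check (the database name is a placeholder; DBCC LOGINFO returns one row per VLF):
-- Log size and percentage in use for every database on the instance
DBCC SQLPERF(LOGSPACE)
GO
-- One row per virtual log file in the current database's log
USE MyDatabase
GO
DBCC LOGINFO
GO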
It's true that the best approach would be to have log backups in place, but I've seen some scenarios (db size less than 50 GB) where the db is in simple recovery with nightly full backups. So if you're not planning to have log backups in place, then I think switching to simple shouldn't be a bad idea.
That can be best, if the business that the database supports is willing to lose up to a full day's data in the case of a disaster. If they are not, then simple recovery is most certainly not the best approach.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 31, 2010 at 7:26 am
I'm in agreement with Gail. Just because they're currently in a potentially dire position doesn't mean they wish to remain there. The best recommendation is to have them get together with the business to understand what's expected for recovery, not to assume that, because they're dependent on backups (which aren't properly set up either), they can just ignore the logs. Paranoia on production systems is never a bad thing.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 31, 2010 at 7:26 am
While I agree that using the SIMPLE recovery model may make sense for some production databases (very low rate of change, data that is easily recovered and reloaded into the database, etc.), in this case we don't know what the SLA for the database is or how volatile it may be. Here it is better to advocate for FULL (or BULK_LOGGED, if you explain the possibility of losing point-in-time recovery within a log backup that contains minimally logged operations) as the appropriate recovery model. Establishing a backup strategy using full, differential, and t-log backups will benefit the organization, and if the t-log backups are properly scheduled it will help keep the log file from growing uncontrollably.
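A bare-bones sketch of the statements such a strategy schedules (names, paths, and frequencies are placeholders; the actual scheduling belongs in SQL Agent jobs or a maintenance plan):
-- Nightly full backup
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_full.bak' WITH INIT
GO
-- Periodic differential backup between fulls
BACKUP DATABASE MyDatabase TO DISK = 'D:\Backups\MyDatabase_diff.bak' WITH DIFFERENTIAL, INIT
GO
-- Transaction log backup every 15-30 minutes (requires FULL or BULK_LOGGED recovery)
BACKUP LOG MyDatabase TO DISK = 'D:\Backups\MyDatabase_log.trn' WITH INIT
GO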
Please read (if you haven't already) the last article I reference below regarding Managing the Transaction Log (or something like that).
August 31, 2010 at 8:07 am
So here's what I tried on a test database
The data file was 337 MB and the log file was 1.09 GB.
I backed up the log file, did a checkpoint, backed it up again, and then did a shrink on the log file. In the Shrink Action options I checked 'Release unused space'.
After doing this the log file was still 1.09 GB. Should it have shrunk to a lower figure?
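For reference, the equivalent of those GUI steps in T-SQL looks roughly like this (the database and logical log file names are placeholders; in the SSMS shrink dialog, 'Release unused space' maps to the TRUNCATEONLY option):
USE TestDatabase
GO
-- Releases free space at the end of the log file without specifying a target size
DBCC SHRINKFILE (N'TestDatabase_log', TRUNCATEONLY)
GO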
August 31, 2010 at 8:17 am
Probably that's the original size of the log file. You can't shrink below the original file size (well, not without other work & stuff, just go with it). You'd need to create the file, then grow it, then try to shrink it.
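A throwaway sketch of that grow-then-shrink test, to be run against a scratch database only (all names and sizes are made up):
USE TestDatabase
GO
-- Grow the log file well beyond its current size...
ALTER DATABASE TestDatabase MODIFY FILE (NAME = N'TestDatabase_log', SIZE = 512MB)
GO
-- ...then shrink it back down to a 64 MB target
DBCC SHRINKFILE (N'TestDatabase_log', 64)
GO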
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 31, 2010 at 8:19 am
Maybe you didn't have any free space in your log file, so there was nothing to shrink. What value was in "Available Free Space"?
When I find I need to shrink a log file, I usually use the 2nd option "Reorganize Pages ..." and then set the size manually, leaving some free space for future use.
e.g.: DBCC SHRINKFILE (N'MyDatabase_Log', 200)  -- target size in MB
August 31, 2010 at 8:32 am
It may have been my fault, as I think I might have been trying to shrink the data file rather than the log file. I tried again and this time it has shrunk down to 2 MB.
August 31, 2010 at 8:56 am
Matt-1034261 (8/31/2010)
It may have been my fault, as I think I might have been trying to shrink the data file rather than the log file. I tried again and this time it has shrunk down to 2 MB.
Now that it has been shrunk, you need to regrow it appropriately. Check the article by Kimberly Tripp where she talks about this and VLFs.
You will need to have an idea of transaction use to get an appropriate size (based on the prior size, a couple of GB appears to be more than adequate with spare room in case a process goes crazy).
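Regrowing to a chosen size is a one-liner (the database name, logical file name, and the 2 GB figure are placeholders; Kimberly Tripp's article covers how the growth increments you choose affect the number and size of VLFs):
-- Regrow the shrunken log to its working size
ALTER DATABASE MyDatabase MODIFY FILE (NAME = N'MyDatabase_log', SIZE = 2048MB)
GO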
Once you have re-grown the log - then defrag your indexes.
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 31, 2010 at 8:58 am
CirquedeSQLeil (8/31/2010)
Once you have re-grown the log - then defrag your indexes.
Does defragmenting indexes apply to shrinking log files as well as data files?
August 31, 2010 at 9:01 am
homebrew01 (8/31/2010)
CirquedeSQLeil (8/31/2010)
Once you have re-grown the log - then defrag your indexes.
Does defragmenting indexes apply to shrinking log files as well as data files?
In this case it looks like he tried to shrink the data files.
It's not necessary as part of a log shrink, but it would be good practice to get it going as regular maintenance.
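A minimal example of that kind of regular maintenance for a single index (table and index names are placeholders; real maintenance jobs usually choose rebuild vs. reorganize based on measured fragmentation):
-- Light defragmentation
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE
GO
-- Or a full rebuild when fragmentation is heavy
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD
GO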
Jason...AKA CirqueDeSQLeil
_______________________________________________
I have given a name to my pain...MCM SQL Server, MVP
SQL RNNR
Posting Performance Based Questions - Gail Shaw
Learn Extended Events
August 31, 2010 at 9:09 am
CirquedeSQLeil (8/31/2010)
homebrew01 (8/31/2010)
CirquedeSQLeil (8/31/2010)
Once you have re-grown the log - then defrag your indexes.
Does defragmenting indexes apply to shrinking log files as well as data files?
In this case it looks like he tried to shrink the data files.
It's not necessary as part of a log shrink, but it would be good practice to get it going as regular maintenance.
Yeah, but that was a test that he reported on, not the production system. And no, I wouldn't sweat the indexes after shrinking a log file. I would still concentrate on making sure I got the log file to the right size.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
August 31, 2010 at 9:16 am
So if I had a maintenance plan that backed up the log files every so often, would that maintenance plan also need to shrink the log files?