February 14, 2013 at 7:47 am
The autogrow setting is 10%, restricted to 2,097,152 MB (2 TB). DBCC LOGINFO returns 752 rows (one per virtual log file).
February 14, 2013 at 8:08 am
I wouldn't worry about 752 VLFs, but I would definitely change the growth increment from 10% to a fixed size in MB. I use 128MB by default. Then shrink your file once to a sensible size and monitor for new growth.
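Something like this, to sketch it; YourDatabase and YourDatabase_log are placeholders for your own database and logical log file names (check sys.database_files if you're unsure):
USE master;
-- Switch log autogrowth from 10% to a fixed 128 MB increment
ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 128MB);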
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 14, 2013 at 9:58 am
Great info, thanks. What would a sensible size be, 50 gig? I did notice a 'Shrink File' task in the Maintenance Plan; is this what I use for the one-time shrink? What happens if I try to shrink the file below the space it actually uses? Sorry for all the questions, but we're almost home.....
February 14, 2013 at 10:11 am
midnight251 (2/14/2013)
Great info, thanks. What would a sensible size be, 50 gig?
It's mostly a guess. The only way to know for sure would be to capture the output of this query just before each log backup for a while, maybe for a day, a week, or even a month depending on your regular business cycle, and see how much used log space the file actually has to accommodate before each backup frees it.
USE YourDatabase;
SELECT name AS LogicalName,
       -- size and SPACEUSED are reported in 8 KB pages; dividing by 128 converts to MB
       CAST(size / 128. AS DECIMAL(34, 2)) AS SizeMB,
       CAST(FILEPROPERTY(name, 'SPACEUSED') / 128. AS DECIMAL(34, 2)) AS SpaceUsedMB
FROM sys.database_files
WHERE type = 1; -- type 1 = transaction log file
I did notice a 'Shrink File' in the Maintenance Plan, is this what I use for the one time shrink?
No, just issue the T-SQL directly (a sketch is below). I am not sure I even like that Microsoft put that task into Maintenance Plans; it sends a bad signal.
What happens if I try to shrink the file past what the size actually is?
It will simply fail to shrink the file lower than what it needs. No real consequences.
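For the one-time shrink, a minimal sketch, assuming the logical log file name is YourDatabase_log and you want to land at 50 GB (the target size is in MB):
USE YourDatabase;
-- Shrink the log file once to 50 GB; it will not go lower than the space in use
DBCC SHRINKFILE (YourDatabase_log, 51200);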
edit: add USE to code sample
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 14, 2013 at 10:31 am
midnight251 (2/14/2013)
Great info, thanks. What would a sensible size be, 50 gig? I did notice a 'Shrink File' task in the Maintenance Plan; is this what I use for the one-time shrink? What happens if I try to shrink the file below the space it actually uses? Sorry for all the questions, but we're almost home.....
A good rule to follow for sizing your transaction log file is to find the size of the largest clustered index on that particular instance and size your log file to match it (preferably a little more); a query for this is sketched below. That way, when your index maintenance kicks off, you hopefully won't have to grow your log file while your larger indexes are being rebuilt.
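One way to find those sizes, a sketch using sys.dm_db_partition_stats (run it in each database you care about; index_id 0 and 1 cover heaps and clustered indexes):
SELECT TOP (5)
       OBJECT_NAME(ps.object_id) AS TableName,
       -- used_page_count is in 8 KB pages; dividing by 128 converts to MB
       CAST(SUM(ps.used_page_count) / 128. AS DECIMAL(34, 2)) AS UsedMB
FROM sys.dm_db_partition_stats AS ps
WHERE ps.index_id IN (0, 1)
GROUP BY ps.object_id
ORDER BY UsedMB DESC;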
______________________________________________________________________________
Never argue with an idiot; They'll drag you down to their level and beat you with experience
February 14, 2013 at 10:42 am
MyDoggieJessie (2/14/2013)
midnight251 (2/14/2013)
Great info, thanks. What would a sensible size be, 50 gig? I did notice a 'Shrink File' task in the Maintenance Plan; is this what I use for the one-time shrink? What happens if I try to shrink the file below the space it actually uses? Sorry for all the questions, but we're almost home.....
A good rule to follow for sizing your transaction log file is to find the size of the largest clustered index on that particular instance and size your log file to match it (preferably a little more). That way, when your index maintenance kicks off, you hopefully won't have to grow your log file while your larger indexes are being rebuilt.
That's kind of OK if you're in SIMPLE recovery mode. In FULL recovery mode, chances are good that more than one index will be rebuilt between log backups.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 14, 2013 at 10:46 am
Of course, I agree. I'm just saying it as a general, when-in-doubt rule. For FULL recovery databases I run log backups every 15 minutes, so I generally don't have much of an issue with this. For the sake of the OP: if there's a table that's 80 GB in size, there wouldn't be much point in making the log file 50 GB.
______________________________________________________________________________
Never argue with an idiot; They'll drag you down to their level and beat you with experience
February 14, 2013 at 10:55 am
MyDoggieJessie (2/14/2013)
Of course, I agree. I'm just saying it as a general, when-in-doubt rule. For FULL recovery databases I run log backups every 15 minutes, so I generally don't have much of an issue with this. For the sake of the OP: if there's a table that's 80 GB in size, there wouldn't be much point in making the log file 50 GB.
Agreed for the lower boundary. The upper boundary is the tricky one.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 15, 2013 at 1:52 pm
This is great info, folks. I want to thank you again for all the help. Log files are getting backed up nicely every hour on the hour, and a full DB backup is done twice in a 24-hour period (sleeping better at night). I'll probably get at the resize of the log file next week (we have a long weekend here in the Great White North). Just curious as to how long you guys keep your log backup files around? The first log backup that got generated in my case is just over 38 gig.
Thanks again
February 15, 2013 at 2:00 pm
Nice work. For keeping log backups around...it depends on how far back you need point-in-time recovery to reach. I usually keep mine around for ~10 days, or whatever my log shipping retention period is for that server. After that I would need to go back to the operating system tape backups that scoop complete drive images nightly.
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 21, 2013 at 8:37 am
opc.three, you got me thinking about point-in-time recovery. If the DB fails, would you not just take a tail-log backup of the log file, restore the latest full backup, then restore the log backups up to the point of failure, and finally restore the tail-log backup? I guess I'm thinking: why would you want to go back further than the last full backup (unless the full backup failed)?
February 21, 2013 at 8:46 am
Your approach would be correct if you were in a disaster recovery scenario and wanted to recover as much data as possible.
As for why you would want older log backups: say someone notices data missing from an important table, it gets traced back to an errant delete carried out by a person or an application, and you need to recover that data as of just before the delete occurred... and that delete happened 4 days ago...
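The disaster recovery sequence itself looks roughly like this; all of the names and paths here are placeholders, so it's a sketch rather than a ready-to-run script:
-- 1. Capture the tail of the log before touching the damaged database
BACKUP LOG YourDatabase
    TO DISK = N'X:\Backups\YourDatabase_tail.trn'
    WITH NO_TRUNCATE, NORECOVERY;
-- 2. Restore the most recent full backup, leaving the database restoring
RESTORE DATABASE YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_full.bak'
    WITH NORECOVERY;
-- 3. Restore each log backup in sequence
RESTORE LOG YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_log_01.trn'
    WITH NORECOVERY;
-- ...repeat for each later log backup...
-- 4. Finish with the tail-log backup and bring the database online
RESTORE LOG YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_tail.trn'
    WITH RECOVERY;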
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 21, 2013 at 1:12 pm
Interesting. I didn't know you could take it down that far and restore only certain data to certain tables. We're running MS NAV 2009 ERP on SQL, and if something gets deleted, it's deleted from this table and that table, etc. And with MS not releasing the schema, it's almost impossible to know all the tables it hits. You do have a valid point.
February 21, 2013 at 1:24 pm
midnight251 (2/21/2013)
Interesting. I didn't know you could take it down that far and only restore certain data to certain tables.
You can't restore data for specific tables without the help of some third-party tools. What you would have to do is restore the relevant FULL backup taken prior to the DELETE, then restore each log backup from that point forward one at a time WITH STANDBY, checking for the missing data. The point at which the data first disappears from the database tells you which log backup contains the DELETE operation. You would then repeat the process, starting with the FULL backup and all the logs minus that last one, and restore the last one with the STOPAT option to get as close as possible to the point in time just before the delete occurred. Of course, I am describing a nightmare scenario for a DBA. It's one I've had to do, but I pretty much knew within a few minutes when the data was deleted, so it wasn't a huge deal; some folks are not so lucky.
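To sketch the two key restores (the file names and the timestamp are placeholders; the standby file lets you query the database read-only between restores):
-- Inspect after each log restore; the database stays readable
RESTORE LOG YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_log_05.trn'
    WITH STANDBY = N'X:\Backups\YourDatabase_undo.dat';
-- Once you know roughly when the DELETE happened, replay the last
-- log backup only up to that moment
RESTORE LOG YourDatabase
    FROM DISK = N'X:\Backups\YourDatabase_log_05.trn'
    WITH STOPAT = '2013-02-17T14:35:00', RECOVERY;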
There are no special teachers of virtue, because virtue is taught by the whole community.
--Plato
February 21, 2013 at 1:39 pm
opc.three (2/21/2013)
midnight251 (2/21/2013)
Interesting. I didn't know you could take it down that far and restore only certain data to certain tables.
You can't restore data for specific tables without the help of some third-party tools. What you would have to do is restore the relevant FULL backup taken prior to the DELETE, then restore each log backup from that point forward one at a time WITH STANDBY, checking for the missing data. The point at which the data first disappears from the database tells you which log backup contains the DELETE operation. You would then repeat the process, starting with the FULL backup and all the logs minus that last one, and restore the last one with the STOPAT option to get as close as possible to the point in time just before the delete occurred. Of course, I am describing a nightmare scenario for a DBA. It's one I've had to do, but I pretty much knew within a few minutes when the data was deleted, so it wasn't a huge deal; some folks are not so lucky.
In a situation like this, you restore your database to a temporary database so as not to affect your current live database. So you take your full backup and restore it as "MyProductionDatabaseName_TempRestore" or some such. When you have it restored to the correct point in time, copy the missing table data from "MyProductionDatabaseName_TempRestore" to "MyProductionDatabaseName".
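A sketch of that side-by-side restore; the logical file names, paths, and the ImportantTable/KeyColumn copy-back at the end are all hypothetical placeholders:
-- Restore the full backup under a new name; WITH MOVE relocates the files
-- so they don't collide with the live database's files
RESTORE DATABASE MyProductionDatabaseName_TempRestore
    FROM DISK = N'X:\Backups\MyProductionDatabaseName_full.bak'
    WITH MOVE N'MyProductionDatabaseName' TO N'X:\Data\TempRestore.mdf',
         MOVE N'MyProductionDatabaseName_log' TO N'X:\Data\TempRestore_log.ldf',
         RECOVERY;
-- Copy back only the rows missing from the live table
INSERT INTO MyProductionDatabaseName.dbo.ImportantTable
SELECT t.*
FROM MyProductionDatabaseName_TempRestore.dbo.ImportantTable AS t
WHERE NOT EXISTS (SELECT 1
                  FROM MyProductionDatabaseName.dbo.ImportantTable AS p
                  WHERE p.KeyColumn = t.KeyColumn);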
As for all your transaction log backups, there's a setting in the Maintenance Plan for how long you want to retain the log backup files. 10-14 days is fairly common.