June 29, 2009 at 8:44 am
My db is set up with the full recovery model. My transaction log has grown to 111,672 MB and the drive is getting full, while the data file is only 4,307 MB. I don't have a transaction log backup job running for this db. What is your suggestion, what should I do? Maybe change the recovery model to simple?
June 29, 2009 at 8:49 am
Please read through this - Managing Transaction Logs
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 29, 2009 at 8:51 am
It depends on your business needs. How critical is your data? If it is critical, I would say set up a DR plan. If you have a good DR plan, you will have regular full backups and transaction log backups. That way you will be able to manage the log file and disk space correctly.
There is a good article by Gail Shaw about managing transaction logs. Try to read that.
-Roy
June 29, 2009 at 10:18 am
Thank you very much, it is a great article, but here is what I don't understand: if I have the db set to full recovery, is it necessary to set up a transaction log backup job? My users don't need it.
June 29, 2009 at 10:27 am
Absolutely it's necessary. In full recovery the transaction log does not get truncated except by a log backup. No log backups, no reuse of the log file, the log file will grow until it fills the drive.
Are you very sure that point-in-time restore is not needed for this DB? If full backups happen at 8pm and the DB crashes at 7pm, is restoring to the previous full backup and losing a full day's data acceptable to your users?
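In case it helps, a minimal log backup looks like this. The database name and backup path are placeholders; substitute your own, and in practice you would schedule this through a SQL Server Agent job:

```sql
-- Back up the transaction log so the inactive portion can be reused.
-- Without regular log backups, a full-recovery log grows without limit.
BACKUP LOG YourDB
TO DISK = N'D:\Backups\YourDB_log.trn';
```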
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
June 29, 2009 at 10:47 am
I am in the process of finding out if they are OK to lose a full day of data. If they are OK with that, should I switch the recovery model to simple? Thank you
June 29, 2009 at 11:02 am
If the database is set to the full recovery model, you MUST run frequent transaction log backups. How frequently they run will be determined by your business requirements. You need to determine how much data loss is acceptable and how much downtime can be incurred - in other words, how long it is going to take to restore the system back to an acceptable level of data loss.
If your users can accept more than 24 hours of (potential) data loss - then you can set the database to the simple recovery model, perform a one-time shrink operation on the log file, and forget about backing up the log. For your system, I would recommend shrinking the log file down to 1000 MB.
If your users cannot accept that much data loss - implement transaction log backups at least every hour (if not more frequently). If you have to recover the system faster - you then also need to look at implementing differentials. That said, for a database at ~4 GB, you probably don't need differential backups.
Jeffrey Williams
“We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”
― Charles R. Swindoll
How to post questions to get better answers faster
Managing Transaction Logs
June 29, 2009 at 11:39 am
1. Do I switch to simple and then shrink the file?
2. What is the syntax to run it? Thank you
June 29, 2009 at 11:49 am
1. Yes
2. Read Books Online (the SQL Server Help System). You can access it from SSMS (SQL Server Management Studio) by pressing the {F1} function key.
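For reference, the general shape of the commands is below - check Books Online for the full options. The database name and logical log file name here are placeholders:

```sql
-- Switch to the simple recovery model.
ALTER DATABASE YourDB SET RECOVERY SIMPLE;

-- Shrink the log file to roughly 1000 MB (the target size is in MB).
-- 'YourDB_log' is the logical file name; find yours with:
--   SELECT name, type_desc FROM YourDB.sys.database_files;
DBCC SHRINKFILE (N'YourDB_log', 1000);
```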
July 1, 2009 at 7:06 am
Thank you for the advice. I changed to simple and ran DBCC SHRINKFILE.
The db will stay as simple for right now, but I want to make sure it is set up right. Here is what I have:
For the data:
EnableAutoGrowth set to true
FileGrowth
In Megabytes 1
In Maximum
Unrestricted File Growth
For the log:
EnableAutoGrowth set to true
FileGrowth
In Percent 10
In Maximum
Restricted File Growth 2,097,152 MB
Is this correct or should I change it?
Thank you
July 1, 2009 at 7:13 am
That's an awfully small autogrow amount for a 4 GB data file.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 1, 2009 at 7:22 am
Couple of things. One, setting the data file to grow in 1 MB increments is not quite right, especially since the database is apparently about 4 GB in size. Not knowing how much data is inserted on a daily basis, it is hard to give you a good estimate, but I'd probably set the autogrow to about 100 MB. Remember, you don't want to rely on this - you want to manage the growth yourself - but you also don't want constant growth throughout a busy day.
As for the transaction log, again, growing in 10% increments is not a good setting. Why? Because each time it grows, it will grow by a larger amount. Again, you want to manage the growth of the transaction log; you don't want it growing on its own constantly. Perhaps set it to also grow in 100 MB increments.
Remember, these aren't hard and fast rules, you need to monitor your databases, and make adjustments as needed until you have things well in hand and managed.
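A sketch of how the fixed-increment settings above could be applied in T-SQL - the database name and logical file names are placeholders (look yours up in sys.database_files):

```sql
-- Change both files to grow in fixed 100 MB increments,
-- instead of 1 MB (data) and 10% (log).
ALTER DATABASE YourDB
MODIFY FILE (NAME = N'YourDB_data', FILEGROWTH = 100MB);

ALTER DATABASE YourDB
MODIFY FILE (NAME = N'YourDB_log', FILEGROWTH = 100MB);
```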
Edit: wording.
July 1, 2009 at 7:31 am
Edit: redundant
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
July 1, 2009 at 7:56 am
Thank you for your help. So, just to be clear: I will find out how much data comes in daily, but in the meanwhile, this is how I will set the db settings:
For the data:
EnableAutoGrowth set to false
FileGrowth
In Megabytes 100
In Maximum
Unrestricted File Growth
For the log:
EnableAutoGrowth set to false
FileGrowth
In Megabytes 100
In Maximum
Unrestricted File Growth
Is this correct?
My initial size is for data is 4,307MB
My log is 2 MB and the db is set to use full-text indexing
July 1, 2009 at 8:08 am
Krasavita (7/1/2009)
Thank you for your help. So, just to be clear: I will find out how much data comes in daily, but in the meanwhile, this is how I will set the db settings: For the data:
EnableAutoGrowth set to false
FileGrowth
In Megabytes 100
In Maximum
Unrestricted File Growth
For the log:
EnableAutoGrowth set to false
FileGrowth
In Megabytes 100
In Maximum
Unrestricted File Growth
Is this correct?
My initial size is for data is 4,307MB
My log is 2 MB and the db is set to use full-text indexing
I'd make your transaction log 100 MB to start. I would also make sure your data file has about 25% free space right now.
Also, if you re-read my previous post, you should notice a slight change in wording. I said "are" when I meant "aren't".