August 1, 2008 at 5:56 am
We have a real-time OLTP application where data is continuously written into the database; very high insert volume.
The database is in the full recovery model and we are not taking log backups. The log file grows to 50 GB by end of day. We take a full backup at EOD.
My questions are:
1. Is it the right approach NOT to take log backups?
2. Will log backups help performance?
August 1, 2008 at 7:58 am
Paresh Randeria (8/1/2008)
We have a real-time OLTP application where data is continuously written into the database; very high insert volume. The database is in the full recovery model and we are not taking log backups. The log file grows to 50 GB by end of day. We take a full backup at EOD.
My questions are:
1. Is it the right approach NOT to take log backups?
No. If you don't take log backups and you are in full recovery mode, your log file will grow without bound. Full backups do not truncate the log file. If you're not taking log backups, you can only restore to the last full backup (if, say, the drive fails and the data and log files are lost).
Is the potential loss of 24 hours of data acceptable?
2. Will log backups help performance?
Log backups have nothing to do with performance. They have to do with recoverability.
See here for background info - Recovery model and transaction logs
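As a rough illustration (the database name and backup path below are placeholders, not from this thread), a scheduled log backup in full recovery looks something like:

```sql
-- Hypothetical database name and path; adjust for your environment.
-- Each log backup marks the backed-up portion of the log as reusable,
-- so the file stops growing without bound (though it does not shrink).
BACKUP LOG [SalesDB]
TO DISK = N'E:\Backups\SalesDB_log.trn'
WITH INIT;
```

Run on a schedule (e.g. every 15-30 minutes via a SQL Agent job); the interval determines how much work you could lose.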
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 1, 2008 at 9:37 am
Thanks, I got it. What I'd also like to know is how it affects performance.
How does performance depend on log file size: small (log backups, so a small log file) versus big (no log backups, so a big log file)? Will a small log file give better performance?
Or will it be the same irrespective of log file size?
August 1, 2008 at 9:58 am
Performance of what? Log backups? Database backups? Normal operation?
Frequency of log backups isn't determined by performance. It's determined by how much data you are willing to lose if the server fails.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 1, 2008 at 12:23 pm
Why not just switch it to simple recovery if you aren't doing log backups?
August 1, 2008 at 10:04 pm
Let me clarify...
SQL Noob,
It's a production database, so the FULL recovery model is required, and we also replicate tables.
The application is extremely insert-heavy, and the concern is that a log backup will LOCK the log file during the backup, forcing inserts to wait to write into the log file; hence no log backups.
Gail Shaw,
Performance of inserts. The question is about writing into the log file: the log-file writing mechanism.
Does insert performance have anything to do with log file size? An insert is a logged operation and writes into the log file too, so heavy inserts are heavy on the log file as well.
I mean, will a big log file degrade performance?
August 2, 2008 at 1:53 am
Paresh Randeria (8/1/2008)
Let me clarify... It's a production database, so the FULL recovery model is required, and we also replicate tables.
But if you're not backing the log up, what's the point of having the DB in full? You don't have point-in-time recovery, which is the entire and only reason for using full recovery mode.
Replication doesn't require full recovery mode; you can replicate tables with the DB in simple recovery.
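If you decide log backups aren't needed, the switch is a one-liner; the database name here is a hypothetical placeholder:

```sql
-- Hypothetical database name. In SIMPLE recovery the inactive log is
-- truncated at checkpoint, so it stays small without log backups,
-- but you give up point-in-time recovery.
ALTER DATABASE [SalesDB] SET RECOVERY SIMPLE;
```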
Does insert performance have anything to do with log file size? An insert is a logged operation and writes into the log file too, so heavy inserts are heavy on the log file as well.
I mean, will a big log file degrade performance?
All data modifications write into the log, as do the regular checkpoint operations and other internal processes.
What has the most effect on the speed of logging operations is the speed of the log drive. The size of the file shouldn't affect things, though if the log file has to grow, that will slow things down.
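To avoid the growth penalty mentioned above, the log can be pre-sized with a fixed growth increment; the file name and sizes here are illustrative assumptions, not values from this thread:

```sql
-- Hypothetical logical file name and sizes; pick values that fit your
-- workload. Pre-sizing the log and using a fixed (not percentage)
-- growth increment reduces autogrow events, which stall all logging
-- while the new portion of the file is zero-initialized.
ALTER DATABASE [SalesDB]
MODIFY FILE (NAME = N'SalesDB_log', SIZE = 51200MB, FILEGROWTH = 1024MB);
```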
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 2, 2008 at 10:23 am
I agree.
Our main critical DBs used to be in the simple recovery model until a few months ago. The SAN would copy them to another set of volumes and attach them to another server, where we would run full backups every day, and we had them publishing articles for years. Now we are doing log backups, so we had to switch to full recovery just for that reason. Once in a while we get backup errors because there are still transactions waiting to be replicated and the backup can't truncate the log. No biggie.
August 2, 2008 at 11:48 am
Thanks Gail Shaw and SQL Noob,
I was under the impression that we needed the FULL recovery model for replication. In my case the SIMPLE recovery model will work.
I've got a logical explanation for all my doubts, but (I am not dragging this on, just clarifying...)
Assuming the full recovery model and very heavy inserts:
Will the log backup LOCK the log file during the backup, so that inserts have to wait to write into the log file?
August 2, 2008 at 12:01 pm
I don't think so.
We have 2 critical DBs with heavy inserts; in each case the main table is around 300 million rows. One gets inserts all day long and some batch jobs at night; the other has a batch job 15-20 times a month with a lot of inserts, the biggest being 50 million commands split between inserts and updates. We've never had a problem where a log backup caused blocking. I might have seen it a few times with the checkpoint process, but that was when we had the simple recovery model.
August 2, 2008 at 12:03 pm
The only thing I have ever seen blocked by a log backup, was a full/diff backup, and that was on SQL 2000. But you can test it out for yourself, just to be sure. It's not a hard thing to simulate in a development environment.
Oh, and for clarity, you can't back the log up in simple recovery mode. You'll get an error if you try.
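That restriction is easy to verify yourself; a minimal test, with a hypothetical database name and path:

```sql
-- Hypothetical database and path. In SIMPLE recovery, BACKUP LOG is
-- rejected up front (error 4208) before any work is done.
ALTER DATABASE [TestDB] SET RECOVERY SIMPLE;
BACKUP LOG [TestDB] TO DISK = N'E:\Backups\TestDB_log.trn';
-- Fails: "BACKUP LOG is not allowed while the recovery model is SIMPLE."
```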
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 3, 2008 at 2:51 am
Thanks SQL Noob and Gail Shaw....
August 4, 2008 at 4:34 am
If the transaction log file(s) does not have sufficient free space, the constant auto-growing will impact performance. You should schedule transaction log backups often enough that the log does not need to auto-grow between backups.
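One way to gauge the right backup frequency is to watch how fast log space is consumed between backups; a minimal sketch:

```sql
-- Reports the size and percent-used of every database's transaction log.
-- If the percentage climbs steadily between log backups, either back up
-- more often or pre-grow the file so autogrow never fires mid-workload.
DBCC SQLPERF(LOGSPACE);
```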