January 14, 2015 at 8:25 am
Hi,
we have to move some SQL Server databases from one datacenter to another.
We are thinking of using log shipping to do this, but because of the daily transaction volume we have to estimate the size of the transaction log backups on a day-by-day basis.
At the moment all databases are in the simple recovery model.
Before switching these databases from simple to full recovery model, we need to know whether our backup infrastructure is properly sized to support the new, temporary workload.
In Oracle I can obtain this information by counting the redo log switches. In SQL Server I have no idea. Maybe I can use fn_dblog()? Can you help me?
Regards
Giova
=========================================================
I'm always happy to hear your opinion!
Senior Oracle DBA
Sql Server "learner" and... enthusiast!
January 14, 2015 at 8:45 am
No, you can't use fn_dblog, as in the simple recovery model the log is truncated on every checkpoint, so every couple of minutes.
Best thing: prior to implementing log shipping, switch the DBs to full recovery and schedule log backups to disk, then look at the sizes of those log backups.
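As a rough sketch (the database name and backup path here are just placeholders, and a full database backup is needed first to start the log chain):
[font="Courier New"]-- Switch to full recovery (YourDatabase and the path are hypothetical)
ALTER DATABASE YourDatabase SET RECOVERY FULL;

-- A full backup is required before log backups can be taken
BACKUP DATABASE YourDatabase TO DISK = N'X:\Backups\YourDatabase_full.bak';

-- Then schedule something like this and watch the file sizes for a few days
BACKUP LOG YourDatabase TO DISK = N'X:\Backups\YourDatabase_log_1.trn' WITH COMPRESSION;[/font]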
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 14, 2015 at 8:52 am
Thank you.
This makes me sad... It seems impossible to me that with SQL Server it is not possible to get information like this.
I hate Oracle because of its complexity, but in certain cases I hate SQL Server because of its stupidity.
I'm not that expert with SQL Server; are you sure there is no way to obtain this information?
Thank you
regards
Giova
=========================================================
I'm always happy to hear your opinion!
Senior Oracle DBA
Sql Server "learner" and... enthusiast!
January 14, 2015 at 9:00 am
Is there a problem with switching to the full recovery model and keeping track of the log backup sizes for a while? You can leave your current backup routine in place and just delete the transaction log backups if you are worried about space.
January 14, 2015 at 9:15 am
Yes, I'll do it that way in the end; I'm only a bit worried about the I/O performance on these database servers. Beyond that, I have to use disk first and then move the backups to tape if the space and frequency needed for the backups are not too high.
We are talking about terabytes of transactions per day.
Thank you for your suggestions.
Regards
Giova
=========================================================
I'm always happy to hear your opinion!
Senior Oracle DBA
Sql Server "learner" and... enthusiast!
January 14, 2015 at 1:16 pm
Giova (1/14/2015)
I hate Oracle because of its complexity, but in certain cases I hate SQL Server because of its stupidity.
There's nothing stupid about it.
In simple recovery model you're telling SQL that it can mark portions of the log as inactive after a checkpoint. fn_dblog shows the active portions of the log. Hence looking at fn_dblog is not going to show you what you want.
Taking log backups and looking at the sizes is the easiest. If you're worried about storage space, add a step to the job that records the size of the backup file and then deletes the file. You're not losing point-in-time recovery by doing that, because you don't have any in simple recovery as it is.
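To record the sizes you don't even need to look at the files themselves; something along these lines would do (YourDatabase is a placeholder, and compressed_backup_size is only available on SQL 2008 and later):
[font="Courier New"]-- Log backup sizes as recorded by msdb (YourDatabase is hypothetical)
SELECT  bs.database_name,
        bs.backup_start_date,
        bs.backup_size / 1048576.0            AS backup_size_mb,
        bs.compressed_backup_size / 1048576.0 AS compressed_size_mb
FROM    msdb.dbo.backupset AS bs
WHERE   bs.database_name = N'YourDatabase'
  AND   bs.type = 'L'   -- log backups only
ORDER BY bs.backup_start_date;[/font]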
You could set up a job to poll sys.dm_io_virtual_file_stats and look at the bytes written to the transaction log. It's not going to match the size of the log backups exactly, especially if you're compressing the log backups (which is probably a very good idea if you're worried about storage space and transmission time). The data in there is cumulative since the database was opened, so take snapshots and diff them.
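A minimal sketch of the snapshot-and-diff approach (YourDatabase is a placeholder; file_id 2 is the log file for a database with a single log file):
[font="Courier New"]-- Cumulative bytes written to the transaction log since the database came online
SELECT  GETDATE()                AS sample_time,
        DB_NAME(vfs.database_id) AS database_name,
        vfs.num_of_bytes_written
FROM    sys.dm_io_virtual_file_stats(DB_ID(N'YourDatabase'), 2) AS vfs;
-- Log each sample to a table and subtract consecutive samples to get bytes written per interval[/font]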
There's probably a perfmon counter (or several) that records the rate data is written to the log. Again, it's not going to match the size of the log backups exactly, and it'll probably need more computation than the DMV to get a sensible answer out of it.
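One candidate is the Log Bytes Flushed/sec counter, which is also exposed through sys.dm_os_performance_counters; despite the name the stored value is cumulative, so it needs the same snapshot-and-diff treatment (the database name filter here is a placeholder):
[font="Courier New"]-- Cumulative log bytes flushed for one database (YourDatabase is hypothetical)
SELECT  object_name, counter_name, instance_name, cntr_value
FROM    sys.dm_os_performance_counters
WHERE   counter_name = N'Log Bytes Flushed/sec'
  AND   RTRIM(instance_name) = N'YourDatabase';[/font]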
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 15, 2015 at 12:28 am
sys.dm_io_virtual_file_stats: thank you Gail. It is what I was really searching for!
=========================================================
I'm always happy to hear your opinion!
Senior Oracle DBA
Sql Server "learner" and... enthusiast!
January 15, 2015 at 2:49 am
Bear in mind that it's not going to match the size of the log backups exactly and if you're compressing the log backups it's not going to match at all.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 15, 2015 at 3:02 am
Hi Gail,
thank you.
I wrote a stored procedure that does the following:
[font="Courier New"]read num_of_bytes_written from sys.dm_io_virtual_file_stats
random insert and delete rows from tables
read num_of_bytes_written from sys.dm_io_virtual_file_stats
calculate delta in megabytes
do a compressed transaction log backup
check the difference between log backup size and delta in megabytes[/font]
I'll execute the procedure 1000 times.
In this way I can calculate the ratio between the log backup size and the number of bytes written.
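Roughly what the procedure does, as a sketch (TestDB and the backup path are made-up names, and the random insert/delete workload is omitted):
[font="Courier New"]DECLARE @before BIGINT, @after BIGINT, @delta_mb DECIMAL(18,2);

-- 1. Log bytes written before the test workload (file_id 2 = transaction log)
SELECT @before = num_of_bytes_written
FROM   sys.dm_io_virtual_file_stats(DB_ID(N'TestDB'), 2);

-- 2. The random insert/delete workload would run here

-- 3. Log bytes written after the workload, and the delta in MB
SELECT @after = num_of_bytes_written
FROM   sys.dm_io_virtual_file_stats(DB_ID(N'TestDB'), 2);
SET @delta_mb = (@after - @before) / 1048576.0;

-- 4. Compressed log backup, then compare its size (from msdb) with the delta
BACKUP LOG TestDB TO DISK = N'X:\Backups\TestDB_sample.trn' WITH COMPRESSION;

SELECT TOP (1)
       bs.compressed_backup_size / 1048576.0 AS backup_mb,
       @delta_mb                             AS log_bytes_written_mb
FROM   msdb.dbo.backupset AS bs
WHERE  bs.database_name = N'TestDB' AND bs.type = 'L'
ORDER BY bs.backup_start_date DESC;[/font]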
Regards
Giova
=========================================================
I'm always happy to hear your opinion!
Senior Oracle DBA
Sql Server "learner" and... enthusiast!
January 15, 2015 at 3:51 am
That'll work as an average, but only as an average. Different data and different types of data will compress differently.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability