July 1, 2010 at 12:05 am
Comments posted to this topic are about the item Log shipping and A Control Mechanism
July 1, 2010 at 3:50 am
Hi,
Very interesting article. However, I wasn't sure whether the reason for architecting this solution was to facilitate log shipping replication for SQL 2000 and below, or to improve upon the functionality already available in 2005. The article does mention that this is a custom solution to the problem of implementing log shipping replication.
Thanks,
July 1, 2010 at 12:57 pm
amarokmusic (7/1/2010)
Hi, Very interesting article. However, I wasn't sure whether the reason for architecting this solution was to facilitate log shipping replication for SQL 2000 and below, or to improve upon the functionality already available in 2005. The article does mention that this is a custom solution to the problem of implementing log shipping replication.
Thanks,
Log shipping can be used not only for DR but also to keep data periodically updated, as an ETL process feeding other servers. The article shows how to use control tables and how to build a solution that is more robust than Microsoft's. We have had no issues with this solution for years, while Microsoft's solution required a reset from time to time.
July 2, 2010 at 7:29 am
I had to build a similar home-grown approach for a client of mine with many thousands of databases on a single server that we needed to get over to a reporting/DR box. Quite interesting!
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
July 24, 2010 at 3:16 am
Great piece, Leo. I think for DBAs with less experience (like myself) it would have been nice if the article were a little more detailed, but then I guess it would be a book instead of an article. :-D
Thanks for it. I'm experimenting a bit with it. Could you please tell me where I could get more detailed information regarding the two jobs you created? Thanks in advance!
July 26, 2010 at 8:12 am
Ignacio A. Salom Rangel (7/24/2010)
Great piece, Leo. I think for DBAs with less experience (like myself) it would have been nice if the article were a little more detailed, but then I guess it would be a book instead of an article. :-D Thanks for it. I'm experimenting a bit with it. Could you please tell me where I could get more detailed information regarding the two jobs you created? Thanks in advance!
Unfortunately, you need to implement those jobs yourself. I described the steps each job should have, along with pieces of code, in the article:
"To keep log shipping near real time, I created two additional jobs. The first job is on the source server and will be started by the alert from a trigger on the table ls_backupset. This job has several steps such as copying database log files to the destination server based on the information from ls_backupset and CopySetHistory tables. When each log file is copied, the record of success or failure becomes inserted to the history table CopySetHistory. To define the log backup file’s name the last two characters have to be taken out from the file’s name in table msdb.. backupset. This is how Microsoft writes it in the name field. It can be done while inserting the records to the table ls_backupset and at the same time extension trn can be added at the end of the file’s name.
Then, this job starts a second job which is located on the destination server. It can be done many different ways. For example, one way is through the linked server option between source and destination servers. The destination server job is going to insert all new records from the source control tables: ls_backupset and CopySetHistory.
To restore log files,all database connections must be killed first. I developed 3 stored procedures. The first one is restoring one log file based on the next submitted parameters – database name, log file path and name, standby file path and name. The second one is the wrapper for one database that allows us to restore multiple log files for this database in a loop. The third one is the wrapper which allows us to restore log files for multiple databases on the same server. Remember that table BackupDBInfo has all the necessary information for each database and this allows us to construct a copy and restore statements with dynamically built code inside the stored procedures. Also, it permits us to make the procedures generic. "
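To make the first step above concrete, here is a minimal sketch of how the file name could be derived while loading ls_backupset; the column names on ls_backupset (database_name, backup_finish_date, backup_file_name) are my assumptions, not from the article:

-- Sketch only: populate the custom control table ls_backupset from msdb..backupset.
DECLARE @last_loaded datetime;
SELECT @last_loaded = ISNULL(MAX(backup_finish_date), '19000101')
FROM dbo.ls_backupset;

INSERT INTO dbo.ls_backupset (database_name, backup_finish_date, backup_file_name)
SELECT bs.database_name,
       bs.backup_finish_date,
       -- Per the article: trim the two extra characters Microsoft appends to the
       -- name field, then add the .trn extension.
       LEFT(bs.name, LEN(bs.name) - 2) + '.trn'
FROM msdb..backupset AS bs
WHERE bs.type = 'L'                          -- log backups only
  AND bs.backup_finish_date > @last_loaded;  -- high-water mark from the control table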
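The linked-server hand-off in the second paragraph might look like this on the destination server; SRCSRV and AdminDB are placeholders for the linked server and for whichever database holds the control tables, and CopySetHistory would be transferred the same way:

-- Sketch only: pull control rows the destination has not seen yet.
INSERT INTO dbo.ls_backupset (database_name, backup_finish_date, backup_file_name)
SELECT s.database_name, s.backup_finish_date, s.backup_file_name
FROM SRCSRV.AdminDB.dbo.ls_backupset AS s
WHERE NOT EXISTS (SELECT 1
                  FROM dbo.ls_backupset AS d
                  WHERE d.backup_file_name = s.backup_file_name);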
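And for the restore side, a sketch of what the first stored procedure (kill connections, restore one log file in standby mode) could look like; the procedure and parameter names are illustrative:

-- Sketch only: restore one log file and keep the database readable via STANDBY.
CREATE PROCEDURE dbo.usp_RestoreOneLog
    @dbname      sysname,
    @logfile     nvarchar(500),  -- full path and name of the .trn file
    @standbyfile nvarchar(500)   -- full path and name of the standby (undo) file
AS
BEGIN
    DECLARE @sql nvarchar(max), @spid int;

    -- Kill existing connections; the restore needs exclusive access.
    DECLARE spids CURSOR LOCAL FAST_FORWARD FOR
        SELECT spid FROM master..sysprocesses
        WHERE dbid = DB_ID(@dbname) AND spid <> @@SPID;
    OPEN spids;
    FETCH NEXT FROM spids INTO @spid;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'KILL ' + CAST(@spid AS nvarchar(10)) + N';';
        EXEC (@sql);
        FETCH NEXT FROM spids INTO @spid;
    END
    CLOSE spids;
    DEALLOCATE spids;

    -- Apply the log; STANDBY leaves the database read-only between restores.
    SET @sql = N'RESTORE LOG ' + QUOTENAME(@dbname)
             + N' FROM DISK = N''' + @logfile + N''''
             + N' WITH STANDBY = N''' + @standbyfile + N''';';
    EXEC (@sql);
END;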
July 27, 2010 at 3:54 am
Thank you for your response, Leo. I don't think it is unfortunate that I have to implement the jobs myself; on the contrary, I'm quite excited by the challenge.
You did indeed describe the jobs, but what I was actually looking for was sample T-SQL for the job steps, for example to create the trigger on the ls_backupset table. I was considering creating an SSIS package that would be called by the trigger on the ls_backupset table. What do you think about that?
Once again thank you for your time and help.
July 27, 2010 at 6:26 am
Ignacio A. Salom Rangel (7/27/2010)
Thank you for your response, Leo. I don't think it is unfortunate that I have to implement the jobs myself; on the contrary, I'm quite excited by the challenge. You did indeed describe the jobs, but what I was actually looking for was sample T-SQL for the job steps, for example to create the trigger on the ls_backupset table. I was considering creating an SSIS package that would be called by the trigger on the ls_backupset table. What do you think about that?
Once again thank you for your time and help.
You referred to "The first job is on the source server and will be started by the alert from a trigger on the table ls_backupset".
This is a simple trigger with basically one line of code: a RAISERROR statement. Please read the linked article; it gives you all the necessary trigger and alert info. http://www.sqlservercentral.com/articles/Administration/sqlserveralerts/1435/
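For reference, the trigger side could be as small as the sketch below; the message number, message text, and trigger name are my assumptions, and the SQL Server Agent alert on message 60001 (which actually starts the copy job) is set up as described in the linked article:

-- Run once: register a user-defined message that is written to the application log.
EXEC sp_addmessage @msgnum = 60001, @severity = 10,
                   @msgtext = N'New log backup recorded in ls_backupset.',
                   @with_log = 'true';
GO
-- The trigger itself is essentially one line: raise the message so the alert fires.
CREATE TRIGGER dbo.tr_ls_backupset_insert
ON dbo.ls_backupset
AFTER INSERT
AS
    RAISERROR (60001, 10, 1) WITH LOG;
GO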