February 8, 2021 at 2:51 pm
In my Always On Availability Group setup there are four nodes in the cluster across two datacenters, and each availability group has four replicas, one on each node. The two replicas in the primary datacenter use synchronous replication for HA, and the two replicas in the other datacenter are asynchronous for DR purposes.
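For reference, the layout is along these lines (the AG, database, and server names here are just placeholders, not the real ones):

CREATE AVAILABILITY GROUP [AG1]
FOR DATABASE [SomeDB]
REPLICA ON
    N'DC1-NODE1' WITH (ENDPOINT_URL = N'TCP://DC1-NODE1:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
    N'DC1-NODE2' WITH (ENDPOINT_URL = N'TCP://DC1-NODE2:5022',
        AVAILABILITY_MODE = SYNCHRONOUS_COMMIT, FAILOVER_MODE = AUTOMATIC),
    N'DC2-NODE1' WITH (ENDPOINT_URL = N'TCP://DC2-NODE1:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL),
    N'DC2-NODE2' WITH (ENDPOINT_URL = N'TCP://DC2-NODE2:5022',
        AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT, FAILOVER_MODE = MANUAL);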
My company wants to perform a disaster recovery test by simulating one datacenter going down from Friday night to Sunday afternoon.
When the primary datacenter goes down I can manually fail the AGs over from the primary replicas to the secondaries; we have good latency, so any data loss should be negligible.
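Since the DR replicas are asynchronous, the failover I'm planning is a forced failover on the replica that should become the new primary, along these lines (the AG name is just a placeholder):

-- Run on the DR replica that will become primary; a forced failover is
-- required because the DR replicas are in asynchronous-commit mode.
ALTER AVAILABILITY GROUP [AG1] FORCE_FAILOVER_ALLOW_DATA_LOSS;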
However, the one thing that concerns me is how to handle the logs on the replicas that are up and running in the DR site. They will grow and grow while the secondary replicas are down, because the log cannot be truncated (even by log backups) until those replicas catch up.
I had planned to just let the logs grow for that period (Friday night to Sunday afternoon), then let the databases sync up again when the secondary replicas are brought back up, and fail back. There is a decent amount of space on the disk drive for the log files.
I am not sure whether I can let this situation run for that long. The only other option I can see is removing the secondaries that are offline and then recreating and reseeding the databases when the primary site is back. I really don't want to have to do this, as there is a substantial number of databases and it would take quite a bit of time.
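If I did have to go the remove-and-rebuild route, it would be roughly this per AG (names are placeholders, and automatic seeding assumes SQL Server 2016 or later), which is why I would rather avoid it across so many databases:

-- On the new primary (DR site): drop the unreachable replica from the AG
ALTER AVAILABILITY GROUP [AG1] REMOVE REPLICA ON N'DC1-NODE1';

-- Later, when the primary site is back: add the replica again with automatic seeding
ALTER AVAILABILITY GROUP [AG1]
ADD REPLICA ON N'DC1-NODE1' WITH (
    ENDPOINT_URL = N'TCP://DC1-NODE1:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = AUTOMATIC,
    SEEDING_MODE = AUTOMATIC);

-- Then on DC1-NODE1 itself
ALTER AVAILABILITY GROUP [AG1] JOIN;
ALTER AVAILABILITY GROUP [AG1] GRANT CREATE ANY DATABASE;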
Is there any way I can handle it better than just letting the log files grow?
February 8, 2021 at 9:06 pm
Usually, failing back is part of a DR test. If you decide to remove the secondaries and rebuild them, will that be considered a successful DR test?
Are your concerns about the log files growing based upon baselines you have captured, or a best guess? Is the worry that you will run out of space, or that the logs will take a long time to catch up, making the fail back take a significant amount of time?
Also, in the DR test, is activity going to be at a "normal" level? Will the users be doing the same level of activity on the databases? I'm guessing that it will be far less, making the log growth also smaller.
As an example, I have 1.3 TB of databases in our production system. The "worst" database for log growth is between 80 and 120 GB per day, depending upon the day of the week.
We had to fail over to our DR site a few weeks back. Connectivity to the 2 servers in the primary DC was down for 2 days. When the servers came back online and the data started syncing again, it took less than 40 minutes for the data to be synced up. The disks that hold the log files were never in any danger of filling up.
So, my answer is to let the logs grow. That being said, this assumes you are fairly certain that they will not fill the disks.
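If it helps, these are the sort of checks I used to keep an eye on things (just a sketch, the names are generic). The first two show whether the AG is what's holding log truncation and how full the logs are; the last one shows the send and redo queues once the replicas reconnect and start catching up.

-- Why can't the log be truncated? AVAILABILITY_REPLICA means the AG is holding it
SELECT name, log_reuse_wait_desc FROM sys.databases;

-- Log size and percent used per database
DBCC SQLPERF(LOGSPACE);

-- Send/redo queue sizes (KB) per database and replica, to watch the catch-up
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.log_send_queue_size,
       drs.redo_queue_size,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON ar.replica_id = drs.replica_id
JOIN sys.availability_groups AS ag ON ag.group_id = drs.group_id;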
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/