May 25, 2016 at 9:34 pm
We have a SQL Server DB hosted on AWS. This DB contains two relatively large tables (currently ~10 million rows). We would like to copy the entire DB (but specifically these two large tables) to a duplicate DB for processing/backup purposes, and would like to do so without impacting the production DB's ability to handle incoming I/O in a timely fashion (which is critical to this application). What is the best practice for doing this sort of copy? For now we just want to do this once, although down the road it would be nice to maintain a frequently refreshed copy on which we can run heavy analysis without affecting production I/O timing.
May 25, 2016 at 10:49 pm
Backup/restore will be the best option in this case. It won't impact the incoming I/O on the production DB unless you host the duplicate DB on the same instance and do the heavy processing there.
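Something along these lines would do it (a rough sketch, assuming SQL Server on EC2 where you have file-system access rather than RDS; the database name ProdDB, the target name ProdDB_Copy, the logical file names, and the paths are all placeholders to swap for your own):

-- Copy-only backup so the production backup chain is not disturbed
BACKUP DATABASE [ProdDB]
TO DISK = N'D:\Backups\ProdDB_copy.bak'
WITH COPY_ONLY, COMPRESSION, STATS = 10;

-- Restore on the duplicate instance under a new name,
-- moving the data/log files to new physical paths
RESTORE DATABASE [ProdDB_Copy]
FROM DISK = N'D:\Backups\ProdDB_copy.bak'
WITH MOVE N'ProdDB' TO N'D:\Data\ProdDB_Copy.mdf',
     MOVE N'ProdDB_log' TO N'D:\Logs\ProdDB_Copy.ldf',
     STATS = 10;

The backup itself does add some I/O load on the source, so if the production workload is very latency-sensitive, run it during a quiet window.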
Down the road you can also consider an Always On availability group if your SQL Server edition is Enterprise. It allows read-only secondaries where you can do the heavy processing, but additional licensing is required for the read-only secondaries.
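If you go that route, opening a secondary replica to read-only connections is a single command (a sketch; the group name AG_Reporting and the replica name SECONDARY01 are placeholders):

ALTER AVAILABILITY GROUP [AG_Reporting]
MODIFY REPLICA ON N'SECONDARY01'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));

Clients then add ApplicationIntent=ReadOnly to their connection string so the secondary accepts their queries (routing read-only traffic automatically through the listener needs read-only routing configured separately).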