Read Replica Database for SQL Managed Instance

  • I want to lay out all the options for my app team for a read replica database in SQL MI. The requirement is to have a secondary database for read queries/reports.

    SQL MI to MI (transactional replication), the SQL MI read replica in the Business Critical tier, and connecting to the geo-secondary (when an auto-failover group is configured).

    Can anyone please confirm whether these are the only options available in SQL MI?

  • I think those are the only realistic options. We use all three of those for various reasons, but we also have some CDC and Data Factory in place to keep two instances in sync for a few hundred tables across multiple databases where we don't want (or can't have) the whole database.
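
    For the Business Critical read replica and the geo-secondary options, reads are routed by the client connection string rather than by a separate database. A minimal sketch, assuming placeholder server and database names (substitute your own instance host and, for a failover group, its listener name):

    ```
    -- Built-in read replica (Business Critical tier): same host, ReadOnly intent
    Server=tcp:<mi-name>.<dns-zone>.database.windows.net;Database=<MyDb>;ApplicationIntent=ReadOnly;

    -- Geo-secondary via auto-failover group: the read-only listener
    Server=tcp:<fog-name>.secondary.<dns-zone>.database.windows.net;Database=<MyDb>;ApplicationIntent=ReadOnly;
    ```

    Without `ApplicationIntent=ReadOnly`, connections land on the primary replica.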

  • Thanks for your reply. We are looking at a database size of over 500 GB and need all the tables to be replicated. I don't want to recommend transactional replication because it doesn't allow TRUNCATE/DROP/RENAME commands (which would require code changes), and snapshot creation is very slow (more than two days to create/apply the snapshot). Is Data Factory's performance/sync time better than replication's?

  • Data Factory essentially uses bulk copy, and it's all within Azure if you're using Managed Instances, so it's quite fast. However, you can't truncate a table that has CDC enabled on it, and there are other 'features': for example, you can't just add a column to the table, as CDC won't pick it up (which you can do with replication). There's also a cost for Data Factory data transfer that is effectively free with replication.

  • Thanks. Do you have any links for this setup of continuous sync from MI to MI using Data Factory?

  • Data Factory isn't really real-time or continuous; it runs on a schedule to transfer data from source to destination.

    Basically:

    • create a table holding the list of tables that you want to transfer
    • enable CDC on the source tables
    • In ADF:

      • create a lookup to get the list of tables
      • use a foreach to loop through the table list
      • copy the data from source to destination using the CDC tables where the changes are newer than the last time it ran
      • run a proc to merge the new data with the old
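
    The database side of those steps can be sketched in T-SQL. This is a minimal illustration, not the poster's actual setup: the table names (`dbo.Orders`, `stage.Orders`), the control schema `etl`, and the merge logic are all hypothetical placeholders, and real SQL Server CDC functions (`sys.sp_cdc_enable_table`, `sys.fn_cdc_get_max_lsn`, the generated `cdc.fn_cdc_get_net_changes_<capture_instance>`) are used:

    ```sql
    -- 1. Control table holding the list of tables to transfer,
    --    plus an LSN watermark per table (hypothetical schema)
    CREATE TABLE etl.SyncTables (
        SchemaName sysname      NOT NULL,
        TableName  sysname      NOT NULL,
        LastLsn    binary(10)   NULL   -- last LSN already copied
    );

    -- 2. Enable CDC on the source database and on each table
    EXEC sys.sp_cdc_enable_db;
    EXEC sys.sp_cdc_enable_table
         @source_schema        = N'dbo',
         @source_name          = N'Orders',
         @role_name            = NULL,
         @supports_net_changes = 1;

    -- 3. On each pipeline run, read only changes newer than the watermark
    DECLARE @from_lsn binary(10) =
        (SELECT LastLsn FROM etl.SyncTables
         WHERE SchemaName = N'dbo' AND TableName = N'Orders');
    DECLARE @to_lsn binary(10) = sys.fn_cdc_get_max_lsn();

    SELECT *
    FROM   cdc.fn_cdc_get_net_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');

    -- 4. After the copy activity lands the rows in a staging table on the
    --    destination, a proc merges new data with old (sketch)
    MERGE dbo.Orders AS tgt
    USING stage.Orders AS src
       ON tgt.OrderId = src.OrderId
    WHEN MATCHED THEN
        UPDATE SET tgt.Amount = src.Amount
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (OrderId, Amount) VALUES (src.OrderId, src.Amount);

    -- 5. Advance the watermark so the next run picks up where this one ended
    UPDATE etl.SyncTables SET LastLsn = @to_lsn
    WHERE  SchemaName = N'dbo' AND TableName = N'Orders';
    ```

    The ADF lookup reads `etl.SyncTables`, the foreach iterates its rows, and the copy activity parameterises the CDC query per table.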

    The following might help:

    Incremental Data Loading using Azure Data Factory

    How to Build Dynamic Azure Data Factory Pipelines

    However, you mentioned that you do table truncates, drops, and renames, so it may not be suitable for you.

     

