One or Many Databases

  • Comments posted to this topic are about the item One or Many Databases

  • For 15+ years my co-workers and I have supported a single-tenant system. It was originally designed for on-prem installation, which is why we could only have one client per database. As we moved into offering our software as SaaS, it was exceptionally helpful to be able to spin up a new SQL instance as growth required, or when we wanted to isolate a high-load customer. We went from dozens of SaaS customers to hundreds, and then to thousands, during my time here. It was also helpful to be able to restore a single customer's data (say, Acme's) to an earlier point in time without affecting any other customers.
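
    In that per-database model, a point-in-time restore really does touch only the one customer. A minimal sketch, assuming a FULL recovery model and hypothetical database and backup names:

      -- Restore only Acme's database to just before the bad transaction;
      -- every other customer's database stays untouched.
      RESTORE DATABASE AcmeDB
          FROM DISK = N'\\backups\AcmeDB_full.bak'
          WITH NORECOVERY, REPLACE;
      RESTORE LOG AcmeDB
          FROM DISK = N'\\backups\AcmeDB_log.trn'
          WITH STOPAT = N'2024-01-15T10:30:00', RECOVERY;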

    To address schema drift, all of our installation/upgrade scripts had to be written in an idempotent* manner so customers could upgrade at their own pace. This was great at first, but the complexity of writing such scripts consumed more and more development time and made each upgrade take longer and longer (the current upgrade script is about 40 MB). It has recently been taking up to two weeks to fully upgrade all databases.

    We've just started a rewrite of the software from the ground up, and we're writing it as multi-tenanted with a deltas-only upgrade process. We chose multi-tenancy primarily for the cost savings on SQL licenses and to eliminate the costly switchboard process that the UI has to use to connect to the correct database.
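
    A deltas-only process usually hinges on recording which scripts have already run, so each delta is applied exactly once. A minimal sketch of the idea, with a hypothetical version-ledger table (not the poster's actual design):

      -- Hypothetical ledger of applied schema deltas.
      IF OBJECT_ID(N'dbo.SchemaVersion', N'U') IS NULL
          CREATE TABLE dbo.SchemaVersion (
              Version   varchar(20) NOT NULL PRIMARY KEY,
              AppliedAt datetime2   NOT NULL DEFAULT SYSUTCDATETIME());

      -- Each delta runs exactly once per database.
      IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE Version = '3.5')
      BEGIN
          ALTER TABLE dbo.Customer ADD PreferredName nvarchar(100) NULL;  -- the 3.5 delta
          INSERT dbo.SchemaVersion (Version) VALUES ('3.5');
      END;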

    I would enjoy hearing from someone with the reverse story - a long-term single-tenanted system that is being rewritten as multi-tenanted.

    [edited for clarity & spelling]

    *idempotent = every execution of the same script results in the same outcome. If a new column was added in release 3.5, a second execution of the 3.5 script would see that the column already existed and bypass that step (thus avoiding a SQL error). The same script would also upgrade any database to 3.5, regardless of whether it started at version 2.3, 2.9, or 3.2.
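
    In T-SQL, that existence check typically looks something like the sketch below (the table and column names are invented for illustration, not taken from the poster's schema):

      -- One idempotent step from a cumulative upgrade script: safe to run
      -- against any starting version, and safe to run more than once.
      IF NOT EXISTS (SELECT 1
                     FROM sys.columns
                     WHERE object_id = OBJECT_ID(N'dbo.Orders')
                       AND name = N'ShippedDate')
      BEGIN
          ALTER TABLE dbo.Orders ADD ShippedDate datetime2 NULL;
      END;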

  • I am working with an application architected to use a single Azure SQL Database per client (~150), all with identical schemas. These run in Elastic Pools, one for each of the Azure Regions (~8) optimal for the client. One Region has two Elastic Pools, both to work around a limit we hit (Query Store memory issues beyond roughly 50 databases per pool for this application, whose EF/LINQ-generated statements are not optimally parameterized) and to resolve some "noisy neighbor" issues.

    Indexes are managed globally; if we make an index change anywhere, we make the same change everywhere. Releases are done using Azure DevOps, DACPACs, and PowerShell scripts. Generally, the DACPACs are set up to ignore unexpected objects (temporary backup tables created during data fixes, etc.), and such cleanups are done through idempotent post-deployment scripts. Very time-consuming index changes are sometimes excluded from the DACPACs and done through post-deployment scripts (with ONLINE = ON) as well, to avoid holding up the main deployment process.
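
    For reference, the kind of post-deployment index change described above might look like this sketch (the index and table names are invented):

      -- Long-running index build pulled out of the DACPAC and run as a
      -- post-deployment step; ONLINE = ON keeps the table available.
      IF NOT EXISTS (SELECT 1
                     FROM sys.indexes
                     WHERE name = N'IX_Invoice_ClientId_Date'
                       AND object_id = OBJECT_ID(N'dbo.Invoice'))
      BEGIN
          CREATE NONCLUSTERED INDEX IX_Invoice_ClientId_Date
              ON dbo.Invoice (ClientId, InvoiceDate)
              WITH (ONLINE = ON);
      END;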

    The quarterly releases to production have been going smoothly once any issues (usually variable changes needed in the PowerShell) are worked out in the test environments (at least one per Region). We have a scripted process to move a client from one Elastic Pool to another, either within or across Regions. We use separate Azure Active Directory groups and service accounts per client for database access to minimize security risks.
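
    Within a single logical server, such a move can be scripted in plain T-SQL (the database and pool names here are hypothetical; a cross-Region move needs more machinery, such as geo-replication):

      -- Move one client's database into a different elastic pool on the
      -- same logical server; the operation is online. Run from master.
      ALTER DATABASE Client042
          MODIFY (SERVICE_OBJECTIVE = ELASTIC_POOL(name = Pool_EU_02));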

    This is somewhat similar to the hybrid approach you mentioned. Other than losing the ability to tune indexes for individual databases to maximize performance (not every client's workload mix is the same), I don't see any significant problems with this approach.

    We treat Azure Automatic Tuning Recommendations as suggestions only. That lets us optimize index changes and implement them across all databases as part of a release, instead of letting Azure automatically add whatever it thinks is a good idea to each database individually, which would make it impossible to get globally predictive results from load testing. There is no feature that lets you tell Azure that a list of databases should be analyzed as a single workload.
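
    The suggestions themselves can still be pulled per database for manual review; a hedged sketch against the sys.dm_db_tuning_recommendations DMV (the JSON path follows its documented details payload, but verify against your own output):

      -- List Azure's proposed index creations without applying them.
      SELECT  name,
              reason,
              score,
              JSON_VALUE(details, '$.implementationDetails.script') AS proposed_script
      FROM    sys.dm_db_tuning_recommendations
      WHERE   type = N'CreateIndex';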

