August 9, 2013 at 2:04 am
Hi all,
I am looking for some feedback on a solution I've been tasked with providing.
Currently, we have a subsidiary company that provides a pretty lightweight application to their client base. The application is based on a SQL Server 2008 database with approximately 30 user tables (containing client-specific data) and around 70-90 system tables (containing unchanging system-related information about the product).
Their current method of deployment is one database per customer in a fully managed environment (the client has no access to the infrastructure or the db), resulting in a SQL Server instance with around 1,000 individual databases, ranging from just a few MB to a few hundred MB.
There are a number of recommendations I am going to make to them, but my main quandary is how best to provide a solution that makes the product more efficient. It seems to me that the one-db-per-customer approach is pretty inefficient in terms of storage, performance and maintenance.
My thoughts are to have one db containing all data for all customers, and to use schemas to separate each environment (with the obvious object-level security changes required for this approach). Further to this, I thought of suggesting one filegroup per customer and grouping all objects for each customer into that filegroup. This would provide for relatively easy per-customer backups/restores and better performance. I was also going to suggest having a separate db or schema for the system data.
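To make that concrete, something along these lines is roughly what I had in mind (the database name MultiTenant, the customer name Acme, the file paths and the example Orders table are all just placeholders):

-- One filegroup and one schema per customer, with that customer's objects on them
ALTER DATABASE MultiTenant ADD FILEGROUP FG_Acme;

ALTER DATABASE MultiTenant
ADD FILE (NAME = N'MultiTenant_Acme',
          FILENAME = N'D:\Data\MultiTenant_Acme.ndf',
          SIZE = 10MB, FILEGROWTH = 10MB)
TO FILEGROUP FG_Acme;
GO

CREATE SCHEMA Acme AUTHORIZATION dbo;
GO

-- Customer-specific tables live in that customer's schema and filegroup
CREATE TABLE Acme.Orders
(
    OrderID   int IDENTITY(1,1) PRIMARY KEY,
    OrderDate datetime NOT NULL
) ON FG_Acme;

-- Backups could then be taken per customer at the filegroup level
BACKUP DATABASE MultiTenant
FILEGROUP = 'FG_Acme'
TO DISK = N'D:\Backups\MultiTenant_FG_Acme.bak';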
I've checked the maximum capacity specs for SQL Server and it looks like it shouldn't be a problem, but I wondered if anyone had any experience of this scenario and/or any alternatives or better suggestions.
Thanks in advance,
Mike.
August 9, 2013 at 2:25 am
Is it necessary to duplicate all tables for each client, even if it is in the same database but in a different schema?
Need an answer? No, you need a question
My blog at https://sqlkover.com.
MCSE Business Intelligence - Microsoft Data Platform MVP
August 9, 2013 at 2:38 am
Hi,
It isn't necessary to duplicate all system tables for each client.
It isn't necessary to duplicate the user tables either, so all data could theoretically be held in the same 30 or so tables. However, we don't have licensing for Enterprise edition, so I think that may restrict the options for partitioning.
Cheers.
August 9, 2013 at 2:58 am
berniesprouster (8/9/2013)
Further to this, I thought of suggesting one filegroup per customer and grouping all objects for each customer into that filegroup. This would provide for relatively easy per-customer backups/restores and better performance.
Except that you'll never be able to restore a customer's data back to an earlier point in time if necessary.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 9, 2013 at 3:01 am
berniesprouster (8/9/2013)
However, we don't have licensing for Ent edition, so I think that may restrict the options of partitioning.
Partitioning for what?
1000 DBs is possible (the limit is 32,767), but that's really going to strain the limits of Standard edition. Database recovery on startup will take forever, there's going to be overhead in the cache, and Standard edition is limited on memory.
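If you want to see where that cache overhead ends up, a quick (approximate) per-database breakdown of the buffer pool looks something like this:

-- One row per 8 KB page currently in the buffer cache, summed up per database
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;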
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
August 9, 2013 at 6:28 am
Point taken about restores if all the data is stored in one set of tables.
Regarding partitioning, I thought you were referring to partition functions/schemes if all the data were in one set of tables rather than separated into schemas. If so, we wouldn't realistically be able to go down that route, because partitioning is only available in Enterprise edition.
It was just the number of databases which concerned me, and the relative size of them (pretty tiny).
August 9, 2013 at 6:52 am
berniesprouster (8/9/2013)
Point taken about restores if all the data is stored in one set of tables. Regarding partitioning, I thought you were referring to partition functions/schemes if all the data were in one set of tables rather than separated into schemas. If so, we wouldn't realistically be able to go down that route, because partitioning is only available in Enterprise edition.
It was just the number of databases which concerned me, and the relative size of them (pretty tiny).
Don't use 1,000 schemas to keep the customers' data separate; that's no further forward than 1,000 databases. Add a customer column to the tables and merge the data. There are tons of ways of restricting a customer to their own data. Think of an online business: I can't see your purchases and you can't see mine, but they're in the same table.
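As a rough sketch (the table, column and role names here are made up, not from your actual schema), the single-table version could look like this on 2008, with a view doing the per-customer filtering:

-- Add a customer key to each shared table
ALTER TABLE dbo.Orders ADD CustomerID int NOT NULL
    CONSTRAINT DF_Orders_CustomerID DEFAULT (0);
GO

-- Map each customer's login to its CustomerID
CREATE TABLE dbo.CustomerLogin
(
    CustomerID int     NOT NULL,
    LoginName  sysname NOT NULL PRIMARY KEY
);
GO

-- Customers query the view, never the base table
CREATE VIEW dbo.CustomerOrders
AS
SELECT o.*
FROM dbo.Orders AS o
JOIN dbo.CustomerLogin AS cl
  ON cl.CustomerID = o.CustomerID
WHERE cl.LoginName = SUSER_SNAME();
GO

GRANT SELECT ON dbo.CustomerOrders TO CustomerRole;  -- CustomerRole is a placeholder role the customer users would belong to

Since your environment is fully managed and the clients never touch the db directly, you could equally keep the filtering in the application or in stored procedures that take a CustomerID parameter, rather than per-login views.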
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
August 9, 2013 at 7:37 am
Thanks for the suggestion. With that though, would I not have an issue if I needed to restore a particular customer's data from a point in time?
August 9, 2013 at 7:44 am
berniesprouster (8/9/2013)
Thanks for the suggestion. With that though, would I not have an issue if I needed to restore a particular customer's data from a point in time?
Is this a realistic requirement?
To answer your question, it would certainly involve more steps than if the data were kept in separate tables - but not impossible.
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
August 9, 2013 at 7:50 am
Possibly not. I'd have to discuss it with the development guys in charge of the app.
If we did go down the route of one set of tables for all customers to use, what in your opinion would be the best method of providing for restorations of individual clients?
I'm not sure of the complexity of the db in terms of FKs etc., so I don't know whether simply scripting out the data, table by table, would work.
August 9, 2013 at 7:59 am
A method I've used in the past is to restore elsewhere, remove the unwanted data from the restored copy, refresh indexes, then use the restored copy to update live. With the relatively small number of tables and, in particular, the small volumes of data you are working with, this might be a viable approach, but others here with more experience may well come up with better ones.
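Very roughly, and with all the names, paths and the point in time being placeholders (FKs and identity columns would need handling per table), it would be something like:

-- 1. Restore the last full backup (plus log backups if you need a point in time) to a copy
RESTORE DATABASE MultiTenant_Restore
FROM DISK = N'D:\Backups\MultiTenant_Full.bak'
WITH MOVE N'MultiTenant'     TO N'D:\Restore\MultiTenant_Restore.mdf',
     MOVE N'MultiTenant_log' TO N'D:\Restore\MultiTenant_Restore.ldf',
     NORECOVERY;

RESTORE LOG MultiTenant_Restore
FROM DISK = N'D:\Backups\MultiTenant_Log.trn'
WITH STOPAT = '2013-08-09 07:00:00', RECOVERY;

-- 2. Put back just the affected customer's rows, table by table
DECLARE @CustomerID int;
SET @CustomerID = 123;  -- the customer being recovered

DELETE FROM MultiTenant.dbo.Orders
WHERE CustomerID = @CustomerID;

INSERT INTO MultiTenant.dbo.Orders (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM MultiTenant_Restore.dbo.Orders
WHERE CustomerID = @CustomerID;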
For fast, accurate and documented assistance in answering your questions, please read this article.
Understanding and using APPLY, (I) and (II) Paul White
Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden
August 12, 2013 at 4:50 am
Thanks for the suggestion. I'll raise it with the development guys and see if it's workable. Hopefully it will be - if so, I'm sure the performance will be significantly better than it is at the moment.
Thanks again,
Mike.
August 13, 2013 at 11:30 am
To piggyback on this question: for the DBAs who have dealt with a SQL Server 2008 R2 instance hosting a lot of databases, what would you say the maximum number of databases would be for a busy system on a beefy server with 48 GB of memory (meaning all of the databases will have a lot of inserts and selects run against them)? 200, 300, 500? I know I am being generic and to a degree it is all a case of 'it depends'. I ask partly because I saw the point in this thread about recovery upon a server restart: the more databases you have, and the busier they were at the time of the restart, the longer the window of database recovery will be.
The other reason I ask is that soon we will be hosting approximately 1,200 databases here.
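I'm assuming that after a restart something like this would show which databases are still working through recovery:

SELECT name, state_desc
FROM sys.databases
WHERE state_desc IN ('RECOVERING', 'RECOVERY_PENDING');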