June 26, 2007 at 10:45 am
Hi all. I am a software developer designing a new web application for my company. This application is an extension of our existing software applications. Our database server is SQL Server 2005.
The architecture of our current system isolates the data for each one of our 100+ "clients" (all clients reside on the same database server). So there are 100+ client databases and 1 code repository database where the stored procedures reside. The majority of the code in our hundreds of stored procedures is dynamic SQL.
I would really like to get away from the dynamic SQL; I think it would improve the style and readability of our code, make debugging easier, improve performance, etc.
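To give a feel for the pattern I want to escape, here's a rough sketch of what one of our repository procedures looks like (the names are made up, but the shape is representative):

-- A central-repository proc splices the target client database name
-- into a string and executes it with sp_executesql.
CREATE PROCEDURE dbo.GetClientOrders
    @ClientDbName sysname,
    @OrderID int
AS
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@ClientDbName)
             + N'.dbo.Orders WHERE OrderID = @OrderID';
    EXEC sp_executesql @sql, N'@OrderID int', @OrderID = @OrderID;
END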
So, as I see it, I have three options for the new architecture:
1) I can maintain the current clients and code repository architecture.
2) I can isolate the data and maintain the stored procedures on the client database level (this seems impossible given the number of client databases).
3) I can depart from the notion of isolated clients and maintain all clients' data and stored procedures in a single database.
The database would consist of a small number of tables. The two main informational tables would be populated by batch updates from the other system databases (so there would be infrequent batched inserts). I estimate that roughly 60,000 records would be inserted per month. These records would be updated by the user application, but there would be no deletes or manual (non-batch) inserts. We haven't defined a flow for when/if these records would be removed, so this database could potentially grow fairly large.
I understand that my description is incredibly high-level, but any recommendations would be appreciated. If I decide to implement anything but the first option, I will have to make a strong case for it, as I'll be going against the norm.
Thanks!
Heidi
June 26, 2007 at 8:01 pm
Couple thoughts. I like the separate databases, especially as you can easily move one client to another server if you need the capacity.
If you have a lot of standard stored procs and don't want to maintain each db, you can replicate the stored procs from a central location, same thing for tables of common data.
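For example, stored procedure definitions can be published as schema-only articles (a rough sketch; it assumes a transactional publication named CodePublication already exists on the central database, and the object names are placeholders):

-- Publish a procedure's definition so subscribing client databases
-- pick up schema changes automatically.
EXEC sp_addarticle
    @publication   = N'CodePublication',
    @article       = N'usp_CommonProc',
    @source_owner  = N'dbo',
    @source_object = N'usp_CommonProc',
    @type          = N'proc schema only';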
Think scalable if you can. Single database works well and is easier, but it limits you if there is growth.
July 22, 2007 at 7:04 pm
You can check out the Multi-Tenant Data Architecture whitepaper. It gives various architectural options.
July 24, 2007 at 1:05 pm
I have a broadly similar situation in that I have approximately 100 clients using the same base application, although the data is customised for each client and the functionality is sometimes customised as well. I have recently had to implement a version of our system for a client who has three discrete business areas that are treated as completely separate clients.
I have achieved the partitioning of client related data by adding a client key to each table and a function that authorises the client user to view, edit or create new client data. The function is used as a validator in all data selections. This allows the database structure and the stored procedures to be common to all clients. An advantage to this is that you don't have to use dynamic SQL to decide which database and which table to use.
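A rough sketch of the pattern (the table and function names here are illustrative, not our actual ones):

-- A mapping table ties database users to client keys, and a function
-- checks whether the current user may see a given client's rows.
CREATE TABLE dbo.ClientUser
(
    UserName  sysname NOT NULL,
    ClientKey int     NOT NULL,
    PRIMARY KEY (UserName, ClientKey)
);
GO
CREATE FUNCTION dbo.fn_IsAuthorisedForClient (@ClientKey int)
RETURNS bit
AS
BEGIN
    RETURN CASE WHEN EXISTS
        (SELECT 1 FROM dbo.ClientUser
          WHERE UserName = SUSER_SNAME() AND ClientKey = @ClientKey)
        THEN 1 ELSE 0 END;
END
GO
-- Every data selection then filters through the function:
SELECT *
FROM dbo.Orders AS o
WHERE dbo.fn_IsAuthorisedForClient(o.ClientKey) = 1;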
The biggest issue I have had is the fact that the system needs extensive testing to verify that the data is correctly partitioned.
From the sound of your databases, they are not too big and therefore the testing required would not be as extensive as it might otherwise be.
August 27, 2007 at 8:27 am
I've done something similar, except I use a view to select the rows each client can select, update, or delete. I also have a further complexity: I have multiple clients and multiple vendors within the same database, where one client can share data with multiple vendors and one vendor can share data with multiple clients. Vendors don't share with other vendors, and clients don't share with other clients. I don't know whether a function or a view is more efficient; you would need to play with both.
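Roughly, the view approach looks like this (the table and column names are illustrative):

-- Each client works through a view that exposes only its own rows,
-- so the base tables are never queried directly. Adding WITH CHECK
-- OPTION would also stop updates from moving a row outside the filter.
CREATE VIEW dbo.vw_ClientOrders
AS
SELECT o.*
FROM dbo.Orders AS o
JOIN dbo.ClientUser AS cu
    ON  cu.ClientKey = o.ClientKey
    AND cu.UserName  = SUSER_SNAME();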
April 29, 2008 at 10:15 am
Cutespn - That's a great link to Multi-Tenant Data Architecture. It is exactly what I'm looking for, as it discusses using Multiple Databases, Multiple Schemas, and Shared Schemas.
Thank you!
Follow up:
Here's what I'm thinking...
The basic concept is to use schemas to override the default behavior of a database for different users. A user maps to a certain default schema, which may contain a custom stored procedure/view to handle their unique situation. So we have dbo.myProc and client.myProc. When the user logs in and executes a section of code, it calls myProc (non-schema-qualified) but uses client.myProc, because that's the default schema for the user.
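A minimal sketch of the idea (assuming a login named ClientLogin; all names illustrative):

-- Unqualified procedure names resolve against the caller's default
-- schema first, then fall back to dbo.
CREATE SCHEMA client;
GO
CREATE PROCEDURE dbo.myProc AS SELECT 'standard behavior';
GO
CREATE PROCEDURE client.myProc AS SELECT 'client-specific behavior';
GO
CREATE USER ClientUser FOR LOGIN ClientLogin WITH DEFAULT_SCHEMA = client;
GO
-- Executed as ClientUser, the unqualified call binds to client.myProc:
EXEC myProc;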
For multiple clients and large data requirements, we could use Multiple Databases that use DDL triggers to ensure consistency of stored procedures, tables, views, etc. Then create a second schema inside the client databases named "client", which can be used for overriding the default application logic by setting the client login's default schema to "client".
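Something like this on the master database could capture the procedure changes to replay against each client database (a sketch only; it assumes a log table dbo.SchemaChangeLog exists, and the replay job itself is left to whatever change management you use):

-- Database-level DDL trigger that records every procedure change as XML.
CREATE TRIGGER trg_CaptureProcChanges
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
AS
BEGIN
    INSERT INTO dbo.SchemaChangeLog (EventTime, EventXml)
    VALUES (GETDATE(), EVENTDATA());
END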
Now that sounds good in theory, but here's what I've run into in practice.
1) You cannot map a default schema to a Windows Active Directory group... so now we are left to administer the users individually, right? Nope: if you log in using Windows Integrated Security, you end up in the dbo schema anyway!
2) Application roles... they sound like a good idea until you try to go cross-database/server, at which point you end up using the guest account. Connection pooling is going to be fun to implement, too.
3) What happens when we want to share data between two client schemas, but not all client schemas?
Anybody have some ideas?
May 1, 2008 at 8:32 am
Another idea...
Replicating Stored Procedures, UDFs, Views, etc...
Create a master database with the common data flow, i.e. commonly used stored procedures and views. Replicate this into each of the client databases using DDL triggers or whatever change management you prefer. Create a schema convention in the client databases for the override stored procedures, such as override.prcName. In the event of multiple schemas in your master database (and thus the client databases), put the override stored procedures in schemas such as overrideaccounting.prcName, etc. Then use a DDL trigger so that when you update overrideaccounting.prcName, the change is replicated to accounting.prcName.
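A very rough sketch of that trigger (the string handling is crude and purely illustrative; a real version would need to distinguish CREATE from ALTER and match the schema name far more carefully):

-- When a proc in an override* schema changes, re-deploy the same body
-- under the matching base schema (overrideaccounting -> accounting).
CREATE TRIGGER trg_PushOverride
ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE
AS
BEGIN
    DECLARE @data xml, @schema sysname, @sql nvarchar(max);
    SET @data = EVENTDATA();
    SET @schema = @data.value('(/EVENT_INSTANCE/SchemaName)[1]',
                              'nvarchar(128)');
    IF @schema LIKE 'override%'
    BEGIN
        SET @sql = @data.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]',
                               'nvarchar(max)');
        -- Swap the override schema for the base schema and re-execute.
        -- The base schema won't match 'override%', so the trigger
        -- doesn't recurse.
        SET @sql = REPLACE(@sql, @schema, SUBSTRING(@schema, 9, 128));
        EXEC (@sql);
    END
END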
Replicating Tables
Undoubtedly some tables should be replicated to all the databases with their data, not just their structure. Use the metadata available to us (extended properties, visible through right-click Properties on the table) to create a flag for replication with data.
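For example, an extended property could serve as that flag (the property name and table are just a convention I'm proposing):

-- Flag a table for "replicate with data":
EXEC sys.sp_addextendedproperty
    @name  = N'ReplicateWithData',
    @value = N'1',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'LookupCodes';
GO
-- A deployment script can then read the flag back:
SELECT objname, value
FROM sys.fn_listextendedproperty(N'ReplicateWithData',
    N'SCHEMA', N'dbo', N'TABLE', N'LookupCodes', NULL, NULL);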
Cross Database Dependencies
Suppose there is an override written in a client database, overrideaccounting.prcName, that we want to use in another client database, but we don't want to affect the master database. The same goes for table structure, table data, etc. We could create metadata on all objects to provide replication from a client database to other client database(s). Of course, replicating from one client's database to another's could open up security issues, so a word of caution. Note that the replication metadata would not pass with the object, preventing recursive replication (limited to one level in depth).
What am I missing? Any ideas?