Database Design question

  • Hi all,

We have a client for whom we've designed a web application with a corresponding database backend. The web app is written in .NET 4.5; the DB is SQL Server 2014 running on Windows Azure.

    The web app is your usual interface for managing data, with pages to display, access, and manipulate that data.

We now have another client who is interested in our product; however, they have their own specifications for the same general functionality, with some overlap.

    I'm in the process of trying to decide how best to approach this question from a design perspective. As I see it, I have two options:

A) The first option, of course, is to write a brand-new solution for the new client. I can take much of the existing code and adapt the remainder to fit, along with the database backend.

    B) The second option is the one that I'm here to ask about. Essentially, as I see it, there are three major modifications I need to make: to the dashboard display of the data, to the visualization of the full spectrum of data for a specific functionality, and to the page where this data can be modified. The way I'm considering doing this is as follows:

For the dashboard, I'll be using a stored proc with a SELECT statement. The WHERE filters will be the same; the only difference will be the columns returned in the SELECT list. So, I'm thinking of having a second table, which will just hold the column names and a visibility flag. In the .NET code, I'll loop through all the columns returned by the SELECT statement, compare them with the records returned by a query on that second table, and set the visibility accordingly.
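A minimal sketch of what that configuration table and its lookup proc might look like (all table, column, and proc names here are hypothetical, not from the actual system):

```sql
-- Hypothetical configuration table: one row per client per dashboard column.
CREATE TABLE dbo.DashboardColumnConfig (
    ClientId   INT     NOT NULL,
    ColumnName SYSNAME NOT NULL,
    IsVisible  BIT     NOT NULL DEFAULT (1),
    CONSTRAINT PK_DashboardColumnConfig PRIMARY KEY (ClientId, ColumnName)
);
GO
-- The app calls this once per page load and hides any returned column
-- whose IsVisible flag is 0 for the logged-in client.
CREATE PROCEDURE dbo.GetDashboardColumnConfig
    @ClientId INT
AS
BEGIN
    SET NOCOUNT ON;
    SELECT ColumnName, IsVisible
    FROM dbo.DashboardColumnConfig
    WHERE ClientId = @ClientId;
END;
```

The .NET loop would then match each column name from the data proc's result set against this list and toggle visibility accordingly.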

For the visualization, I'll have a similar setup, except instead of table columns being hidden, it'll be .NET controls. Same argument: for the given page, I'll have records in a table that say whether a given control is visible or not.

Finally, for the modification page, I'll use the same setup as for the visualization, except that the UPDATE procedure will have inputs for all the table columns, and only those relevant to the particular client's configuration will be used.
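One way to sketch that catch-all UPDATE proc (against a hypothetical Widget table, not the real one): every column is an optional parameter defaulting to NULL, and COALESCE keeps the stored value for any column the client's configuration doesn't pass.

```sql
-- Hypothetical update proc: unused parameters stay NULL, and COALESCE
-- leaves the corresponding stored value untouched.
CREATE PROCEDURE dbo.UpdateWidget
    @WidgetId INT,
    @Name     NVARCHAR(100) = NULL,
    @Price    DECIMAL(10,2) = NULL,
    @Notes    NVARCHAR(MAX) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE dbo.Widget
    SET Name  = COALESCE(@Name,  Name),
        Price = COALESCE(@Price, Price),
        Notes = COALESCE(@Notes, Notes)
    WHERE WidgetId = @WidgetId;
END;
```

One caveat of this pattern: it can't set a column back to NULL, so a column that is genuinely nullable would need a separate mechanism for explicit nulling.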

The last part of this configuration will be having separate CSS files for the visual styling, but that's not really too big of a concern.

The concern I have with this approach is whether it will scale as we add more clients with their own specifications. What I want to avoid is having a nightmare of a code mess to deal with in the future.

    For anyone who has had similar issues, have you any comments on my proposed idea? Any better ways of handling this?

That won't scale well: as you add clients, you'll end up with wide tables where many of the columns are null.

The other way to do this is to customize the codebase per client, and that's also tough, because you'll end up with lots of branches of code.

I guess the issue is really how different the client requirements are. Are there really different data fields, or do they just want to call things different names and have different labels?

The more you can data-drive this, the better, but I worry about having tables that have holes for some clients. A few nullable columns isn't bad, but if every client wants 3 new fields, it's a mess. Plus, it can cause issues with your indexing.

In Azure, I'm guessing you'll go with a database per client, which is best; at least that allows different indexing.

This is why SQL Server moved schemas from user-dependent to independent objects in 2005.

Create a schema per client (you could also move this to its own database, but the same principle applies); this avoids widening your tables each time a new client comes on board.

On the application side, all that's needed is some preprocessing to grab the schema information based on the client's login and adjust the application accordingly.
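As a rough sketch (schema, table, and login names are hypothetical), giving each client's database user a matching default schema means unqualified object names in the app resolve to that client's copies automatically:

```sql
-- One schema per client, holding that client's customized objects.
CREATE SCHEMA ClientA AUTHORIZATION dbo;
GO
CREATE TABLE ClientA.Dashboard (
    WidgetId INT PRIMARY KEY,
    Name     NVARCHAR(100) NOT NULL
);
GO
-- The client's user defaults to its own schema, so an unqualified
-- "SELECT ... FROM Dashboard" resolves to ClientA.Dashboard for this login.
CREATE USER ClientA_App FOR LOGIN ClientA_App
    WITH DEFAULT_SCHEMA = ClientA;
```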

  • This can turn into a complex question. The rule-of-thumb that I try to follow on things like this is: The more clever the solution has to be, the worse it is.

    A tricky, clever solution will be hard to document, hard to maintain, hard to scale, hard to refactor new functionality into.

    A simple solution will be easy to document, easy to maintain, easier to scale, and easier to refactor.

    For separate clients, I'd be inclined towards separate copies of the database, customized as needed. That means more than one code-base to maintain (which can get tricky), but that's something you can roll into maintenance and upgrade fees for the clients.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (6/15/2016)


    This can turn into a complex question. The rule-of-thumb that I try to follow on things like this is: The more clever the solution has to be, the worse it is.

    A tricky, clever solution will be hard to document, hard to maintain, hard to scale, hard to refactor new functionality into.

    A simple solution will be easy to document, easy to maintain, easier to scale, and easier to refactor.

    For separate clients, I'd be inclined towards separate copies of the database, customized as needed. That means more than one code-base to maintain (which can get tricky), but that's something you can roll into maintenance and upgrade fees for the clients.

    This is where I lean.

If you use separate schemas (noted above), this can work, but it's the same thing, just in one db. You still have multiple copies of your db. If you do this, be sure to abstract the connection strings so that if you need to move a schema to its own db, you can.

To me, the idea of multiple schemas doesn't really solve the problem - it still involves wide tables, just with particular clients only viewing part of them. While it might address the overhead of wide tables, it doesn't solve the problem of complexity.

I realised, though, that since my application just calls a stored proc and then deals with the results in the code, my databases don't actually need to have the same table schemas - all that matters is that the procedures have the same names and calling signatures. This means I can have different database structures without worrying too much about the impact on the application.
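For example (hypothetical proc and table names), the procedure name becomes the contract, while each client's database implements it against its own structure; one option is to keep the result shape stable too, so the app code doesn't need to special-case clients:

```sql
-- In ClientA's database: Discount is a real column.
CREATE PROCEDURE dbo.GetDashboard
AS
BEGIN
    SELECT WidgetId, Name, Price, Discount
    FROM dbo.Widget;
END;

-- In ClientB's database: same proc name and call signature, but the
-- table has no Discount column, so the proc supplies a placeholder.
CREATE PROCEDURE dbo.GetDashboard
AS
BEGIN
    SELECT WidgetId, Name, Price, CAST(NULL AS DECIMAL(10,2)) AS Discount
    FROM dbo.Widget;
END;
```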

The only issue is that I'll have to be careful when modifying my stored procs - I can't just execute the same change against each database; I'll have to take care to change only the parts that apply.

  • I'd use separate dbs rather than separate schemas. We are client-based as well, and some of our clients demand that their data is isolated from the data of all other clients. While you can make the argument that you can properly provide security for different schemas, it's much easier at the db level. ClientA's users simply can't access ClientB's db and vice-versa, and they have completely separate logins on top of that. [With one db, you might be more tempted to use a single login for both/all clients, with more risk of accidental cross-data viewing.]
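A sketch of that isolation (login and database names hypothetical): each client gets its own login, mapped to a user only in its own database, so there is simply nothing to grant in any other client's database.

```sql
-- One login per client.
CREATE LOGIN ClientA_App WITH PASSWORD = 'UseAStrongPasswordHere!1';
GO
-- Map it into ClientA's database only.
USE ClientA_DB;
CREATE USER ClientA_App FOR LOGIN ClientA_App;
ALTER ROLE db_datareader ADD MEMBER ClientA_App;
ALTER ROLE db_datawriter ADD MEMBER ClientA_App;
-- No user is ever created for ClientA_App in ClientB's database,
-- so that database is unreachable for this login.
```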

    SQL DBA,SQL Server MVP(07, 08, 09) "It's a dog-eat-dog world, and I'm wearing Milk-Bone underwear." "Norm", on "Cheers". Also from "Cheers", from "Carla": "You need to know 3 things about Tortelli men: Tortelli men draw women like flies; Tortelli men treat women like flies; Tortelli men's brains are in their flies".
