December 8, 2011 at 12:11 pm
Hi,
I am a programmer who has been assigned the task of building our database infrastructure. After reading a good share of information, best practices and opinions, I am still having difficulty deciding on the proper strategy to employ for our database. I have included some background on our project to (I hope) help the reader understand my questions.
Intro
We are porting a fat client application to the web. Currently, each of our customers (companies) has its own installation with a specific database. Each company may have multiple users, but the average is about 3 users. In the new application, our customers will all share the same database (hosted in SQL 2008 R2), which brings us to security issues. We'll be using high-safety database mirroring with a witness.
Security
The web application will connect to the database using a single login and will only be allowed to query objects in its own SCHEMA. When a new customer registers with us, the new company will be assigned a new master user (without login) and a new SCHEMA (owned by the new master user) containing all the database objects required to run the application. This is to ensure isolation between companies without the server performance hit (or $$$) of creating a separate database for each company. The application login will be granted the right to impersonate each master user in order to access only the objects in that company's SCHEMA. This process is done dynamically through a stored procedure when a new customer registers.
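Roughly, the provisioning boils down to something like this (a simplified sketch, not the actual procedure; Customer123 and AppUser are placeholder names):

    -- Master user with no login, owning the new company's schema
    CREATE USER Customer123_Owner WITHOUT LOGIN;
    GO
    CREATE SCHEMA Customer123 AUTHORIZATION Customer123_Owner;
    GO
    -- AppUser is the database user mapped to the single application login;
    -- it is allowed to switch context to this company's master user only
    GRANT IMPERSONATE ON USER::Customer123_Owner TO AppUser;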
FILEGROUP
My questions are:
- Should I create a separate FILEGROUP for each company (which might be useful for "piecemeal restores")?
- OR should I use one FILEGROUP for data and one FILEGROUP for indexes for all companies (I have read that this is a best practice)?
- OR should I use only one FILEGROUP for all the companies?
- Should I allocate multiple files for each FILEGROUP so that our SAN can distribute the I/O?
Considering that we are using a SAN, using multiple files might be beneficial I/O-wise. Is that right?
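To be concrete, by multiple files per FILEGROUP I mean something like this (just a sketch; the database, drive and file names are made up):

    -- One filegroup backed by two files on different SAN volumes
    ALTER DATABASE OurAppDb ADD FILEGROUP FG_Data;
    GO
    ALTER DATABASE OurAppDb
    ADD FILE
        (NAME = N'OurAppDb_Data1', FILENAME = N'E:\Data\OurAppDb_Data1.ndf', SIZE = 10GB),
        (NAME = N'OurAppDb_Data2', FILENAME = N'F:\Data\OurAppDb_Data2.ndf', SIZE = 10GB)
    TO FILEGROUP FG_Data;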
Thanks in advance for any answers, suggestions or constructive remarks!
Pierre Boucher
December 8, 2011 at 12:42 pm
In my opinion, this is not a good solution at all. Unless I had an extremely large number of customers, I would have a separate database for each customer, mainly to maintain security.
I hope you are aware of the following facts:
* Piecemeal restore is enterprise edition only
* If you restore the primary filegroup, ALL filegroups will be unavailable during the restore
If you have separate databases however, restore of customer databases will be completely independent from each other.
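Just so we are talking about the same thing, a piecemeal restore is a sequence roughly like this (a simplified sketch; database, filegroup and backup names are made up, and again this is Enterprise only):

    -- Restore the primary filegroup first, leaving the database in a partial state
    RESTORE DATABASE OurAppDb
        FILEGROUP = N'PRIMARY'
        FROM DISK = N'E:\Backup\OurAppDb_full.bak'
        WITH PARTIAL, NORECOVERY;
    -- Restore the filegroup you actually need
    RESTORE DATABASE OurAppDb
        FILEGROUP = N'FG_Customer123'
        FROM DISK = N'E:\Backup\OurAppDb_full.bak'
        WITH NORECOVERY;
    -- Then restore log backups to bring the restored filegroups current
    RESTORE LOG OurAppDb
        FROM DISK = N'E:\Backup\OurAppDb_log.trn'
        WITH RECOVERY;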
Edit:
Overlooked the part about mirroring. Mirroring a lot of databases does not work. If I recall correctly, in case of a restore you will have to break mirroring and re-establish it. So, if you have to restore a single customer, you will take down the HA solution for all other customers. So, my recommendation goes in the direction of a SQL cluster with a separate database for each customer.
December 8, 2011 at 1:23 pm
okbangas (12/8/2011)
In my opinion, this is not a good solution at all. Unless I had an extremely large number of customers, I would have a separate database for each customer, mainly to maintain security.
I would tend to agree
I hope you are aware of the following facts:
* Piecemeal restore is enterprise edition only
* If you restore the primary filegroup, ALL filegroups will be unavailable during the restore
Plus if you restore a filegroup it must be brought up to date with the rest of the DB. So if each customer has a filegroup, it won't be possible to restore that customer's data to last week.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
December 8, 2011 at 2:08 pm
GilaMonster (12/8/2011)
okbangas (12/8/2011)
In my opinion, this is not a good solution at all. Unless I had an extremely large number of customers, I would have a separate database for each customer, mainly to maintain security.
I would tend to agree
I hope you are aware of the following facts:
* Piecemeal restore is enterprise edition only
* If you restore the primary filegroup, ALL filegroups will be unavailable during the restore
Plus if you restore a filegroup it must be brought up to date with the rest of the DB. So if each customer has a filegroup, it won't be possible to restore that customer's data to last week.
I was aware of the filegroup and restore problem, but I was not aware that piecemeal restore is restricted to Enterprise Edition. So that advantage is irrelevant now.
Regarding the number of customers, based on our competitors, we are talking 10,000+, which makes for a whole lot of databases. Sorry for not mentioning that in my first post. This number is the main reason we would like to avoid 1 database per customer.
For the restore problem, we were thinking about a solution involving a complete restore on another server and then copying the data back for the requesting customer. I know it's not the best solution, but it should work with the type and number of customers we'll have.
Any alternative ideas are welcome!
Pierre Boucher
December 9, 2011 at 12:46 am
I am a programmer that have been assigned the task of building our database infrastructure.
I am sorry to say it's not your specialization & you are taking toooo many responsibilities on your shoulders.
Each company may have multiple users but the average is about 3 users. In the new application, our customers will all share the same database (hosted in SQL 2008 R2) which brings us to security issues.
The web application will connect to the database using a single login and will only be allowed to query objects in its own SCHEMA.
Regarding the number of customers, based on our competitors, we are talking in the 10,000+
IMO it's not the file storage you should worry about but the SECURITY of your data. You should seriously consider hiring a professional for DB design. It would be a smart expenditure to avoid many security issues in the future, which might cost you more than performance issues would.
December 9, 2011 at 2:25 am
Pierre Boucher (12/8/2011)
- Should I allocate multiple files for each FILEGROUP so that our SAN can distribute the I/O?
It's not a given, and in fact for user databases I wouldn't employ it unless you are specifically experiencing the issues this is designed to mitigate.
-----------------------------------------------------------------------------------------------------------
"Ya can't make an omelette without breaking just a few eggs" 😉
December 9, 2011 at 7:07 am
Dev (12/9/2011)
I am sorry to say it's not your specialization & you are taking toooo many responsibilities on your shoulders.
I know quite well it's not my specialization, but like many other small companies, we have to make do with what we have, and right now we don't have a professional DBA nor the resources to get one.
Dev (12/9/2011)
IMO it's not the file storage you should worry about but the SECURITY of your data. You should seriously consider hiring a professional for DB design. It would be a smart expenditure to avoid many security issues in the future, which might cost you more than performance issues would.
Being aware of the fact that it's not my specialization, I am trying to validate myself every step of the way. As an example: our instances are named in a way that is difficult to guess with a dictionary, they are hidden, and the SQL Browser service has been stopped; the SQL Server service is not listening on port 1433 and is running under a domain account with very limited rights; mirroring is not done through port 5022, and the communication goes through a private network reserved for the traffic between the 3 servers involved in mirroring; firewalls have been configured to only allow specific ports between known and trusted servers; communication between servers is encrypted; passwords are generated with a specialized tool; etc.
For the database itself, we'll be using one application user account (with an encrypted connection string and a strong password) that has limited access to the database. Once connected, the company's master account (USER with no LOGIN) is impersonated to access the company's SCHEMA. This USER is associated with a ROLE (one per company) that confines the USER to that specific SCHEMA. So it's a 3-part security model: the SCHEMA, the ROLE and the USER. All unnecessary rights have been REVOKED from the ROLE and the USER. The same type of security has been applied to the application user account, so it only sees the application SCHEMA and none of the companies' SCHEMAs.
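In code, the context switch looks roughly like this (a sketch with placeholder names, not our actual objects):

    -- The application connects as the single application login (database user AppUser),
    -- then runs each customer's queries under that customer's master user
    EXECUTE AS USER = N'Customer123_Owner';
    SELECT OrderId, OrderDate
    FROM Customer123.Orders;   -- only this company's SCHEMA is accessible here
    REVERT;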
I am not a professional DBA, but I can read, I can learn and I am open to ideas and comments.
Pierre Boucher
December 9, 2011 at 7:08 am
Perry Whittle (12/9/2011)
Pierre Boucher (12/8/2011)
- Should I allocate multiple files for each FILEGROUP so that our SAN can distribute the I/O?
It's not a given, and in fact for user databases I wouldn't employ it unless you are specifically experiencing the issues this is designed to mitigate.
Good point.
Thanks!
Pierre Boucher
December 9, 2011 at 7:18 am
Pierre Boucher (12/9/2011)
Dev (12/9/2011)
I am sorry to say it's not your specialization & you are taking toooo many responsibilities on your shoulders.
I know quite well it's not my specialization, but like many other small companies, we have to make do with what we have, and right now we don't have a professional DBA nor the resources to get one.
Dev (12/9/2011)
IMO it's not the file storage you should worry about but the SECURITY of your data. You should seriously consider hiring a professional for DB design. It would be a smart expenditure to avoid many security issues in the future, which might cost you more than performance issues would.
Being aware of the fact that it's not my specialization, I am trying to validate myself every step of the way. As an example: our instances are named in a way that is difficult to guess with a dictionary, they are hidden, and the SQL Browser service has been stopped; the SQL Server service is not listening on port 1433 and is running under a domain account with very limited rights; mirroring is not done through port 5022, and the communication goes through a private network reserved for the traffic between the 3 servers involved in mirroring; firewalls have been configured to only allow specific ports between known and trusted servers; communication between servers is encrypted; passwords are generated with a specialized tool; etc.
For the database itself, we'll be using one application user account (with an encrypted connection string and a strong password) that has limited access to the database. Once connected, the company's master account (USER with no LOGIN) is impersonated to access the company's SCHEMA. This USER is associated with a ROLE (one per company) that confines the USER to that specific SCHEMA. So it's a 3-part security model: the SCHEMA, the ROLE and the USER. All unnecessary rights have been REVOKED from the ROLE and the USER. The same type of security has been applied to the application user account, so it only sees the application SCHEMA and none of the companies' SCHEMAs.
I am not a professional DBA, but I can read, I can learn and I am open to ideas and comments.
Pierre Boucher
I believe you got me wrong. I just wanted to warn you, not to underestimate you (or your skills).
Reading your explanation, I happily take my words back. As a developer, you have configured user access very well. Keep it up! 🙂
December 9, 2011 at 7:39 am
Dev (12/9/2011)
I believe you got me wrong. I just wanted to warn you, not to underestimate you (or your skills).
Reading your explanation, I happily take my words back. As a developer, you have configured user access very well. Keep it up! 🙂
Sorry for the frustration and misinterpretation, but I was expecting more constructive comments than I have received so far.
Thank you for your last comments.
December 9, 2011 at 8:06 am
For the database itself, we'll be using one application user account (with an encrypted connection string and a strong password) that has limited access to the database. Once connected, the company's master account (USER with no LOGIN) is impersonated to access the company's SCHEMA. This USER is associated with a ROLE (one per company) that confines the USER to that specific SCHEMA. So it's a 3-part security model: the SCHEMA, the ROLE and the USER. All unnecessary rights have been REVOKED from the ROLE and the USER. The same type of security has been applied to the application user account, so it only sees the application SCHEMA and none of the companies' SCHEMAs.
So if I understand your requirements correctly, you are planning 10000+ DBs or schemas (1 per customer).
DBs: N (1 per customer)
Or
Schemas: N
Roles: 3 * N
Users: 3 * N
It sounds overcomplicated.
I assume the table structures of all customers would be the same. Can you add a Customer Identifier to the tables? Then create logical databases for your customers (with views filtered by Customer Identifier). Create SPs that take the Customer Identifier, or create SPs in customer schemas. With this you should be able to manage User Access with less effort. (Just trying to make you happy coz I don’t like to hurt anyone, even unintentionally ;-))
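Something like this, purely as a rough illustration (the table, schema and column names are made up):

    -- One shared table keyed by customer
    CREATE TABLE dbo.Orders (
        CustomerId INT      NOT NULL,
        OrderId    INT      NOT NULL,
        OrderDate  DATETIME NOT NULL,
        CONSTRAINT PK_Orders PRIMARY KEY (CustomerId, OrderId)
    );
    GO
    -- A per-customer "logical database": a view in that customer's schema
    -- exposing only that customer's rows (assumes schema Customer123 already exists)
    CREATE VIEW Customer123.Orders
    AS
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = 123;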
With the above design you can satisfy all of your customers with one database, so decide on filegroups based on performance criteria (not on individual customer needs). Also consider partitioning your data.
December 9, 2011 at 8:54 am
Dev (12/9/2011)
So if I understand your requirements correctly, you are planning 10000+ DBs or schemas (1 per customer).
DBs: N (1 per customer)
Or
Schemas: N
Roles: 3 * N
Users: 3 * N
It sounds overcomplicated.
10,000+ schemas in 1 DB, yes.
Customers will be able to register themselves in our application. It will automatically create a ROLE, a master USER and a SCHEMA for the new company. This was already tested with a stored procedure and it works fine. A customer could also decide to close his account after a while or after a trial period. This is also covered by another sp that deletes the USER, the SCHEMA and the ROLE associated with that company.
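For what it's worth, the cleanup part boils down to something like this (a simplified sketch with placeholder names; in reality every object in the schema has to be dropped first):

    -- Remove the role membership, then drop the (now empty) schema, the role and the user
    EXEC sp_droprolemember N'Customer123_Role', N'Customer123_Owner';
    DROP SCHEMA Customer123;
    DROP ROLE Customer123_Role;
    DROP USER Customer123_Owner;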
Dev (12/9/2011)
I assume the table structures of all customers would be the same.
With a new account, yes, they will be the same. Over time, our customers may require user defined fields that will be added to the common structure. Having all the tables for one customer in one schema gives us more flexibility with new requirements. Because we cannot go with the 1 database per customer solution ($$$), the 1 schema per customer approach sounded like a good solution.
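For example, a customer-specific field would end up as something like this (hypothetical names):

    -- Added only to this customer's copy of the table, leaving other companies untouched
    ALTER TABLE Customer123.Orders
        ADD InternalRefNo VARCHAR(50) NULL;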
Dev (12/9/2011)
Can you add a Customer Identifier to the tables? Then create logical databases for your customers (with views filtered by Customer Identifier). Create SPs that take the Customer Identifier, or create SPs in customer schemas. With this you should be able to manage User Access with less effort.
I guess that would be the standard way of doing it. I was not comfortable with this solution because of the lack of isolation between customers' data. This would be "logical isolation" versus the "physical isolation" of separate schemas and tables.
Dev (12/9/2011)
(Just trying to make you happy coz I don’t like to hurt anyone, even unintentionally ;-))
Don't worry, I wasn't hurt 🙂
Dev (12/9/2011)
With the above design you can satisfy all of your customers with one database, so decide on filegroups based on performance criteria (not on individual customer needs). Also consider partitioning your data.
Good point about performance vs customer needs. Like someone else mentioned earlier, we should address performance when needed.
Can you elaborate on "partitioning your data"?
Pierre Boucher
December 9, 2011 at 8:59 am
Can you elaborate on "partitioning your data"?
I am emotional but I am also professional :-P. I can help you more but it would be a paid service... 😀