Does this sound at all familiar? A user submits a request insisting they must have elevated access to a database. When you try to gather more information about the request, the justification boils down to this: so-and-so has elevated access, and they are curious about the database.
The conversation doesn’t go quite as smoothly as that. In fact, it may take quite a lot of effort to squeeze that tiny bit of real information from the request.
After assembling that tidbit of information, you may find yourself choking back a scoff or three. “You need access because you want to explore the database?” Yeah, I can hear the incredulity in your thoughts from here. At no point is this kind of “need” a legitimate reason to grant access to a database.
“I need elevated access!” Oh No you don’t!
Beyond the back and forth of the user demanding access just to play around in the data and the DBA resisting that access in the best interest of the data and the company, there may be a middle ground. I am not talking about creating a dummy database with obfuscated data. Nor am I talking about a dummy database with fake data.
Every now and then, this kind of request may cross your desk and all the requester is really trying to do is find some means of becoming familiar with the database. This can be ferreted out with some extra warnings and more discussions with the requester. Maybe the request really has nothing to do with the data and everything to do with just finding a way to get to know the schema because there is no documentation.
Given the plausibility of this need, there are many ways to grant the request – some may require more resources than others. Let’s suppose the user has absolutely no understanding of TSQL. With that minimal knowledge, you know there are likely to be many potential issues that could cause the database to be brought down and left unusable. That is terrible if the affected database is in production, and it is still very bad if the database is a non-prod database. Given this potential, you know you don’t want to grant any permission that implies read or write permissions in the database (if there is data present).
What if we could create a database without the data? Would that work? Absolutely. This would in essence be a sandbox for the user, and all responsibility for database issues could be directed back to that specific user. It is a fabulous solution if you have adequate resources for more databases/instances/servers to be created. There are multiple ways to create an empty database, but to ensure the best end-user experience, considerations for database naming, resources, and potential impact to other databases must be acknowledged.
Personally, I prefer to maintain the same database name regardless of tier (dev, test, prod, etc.). This also applies to schema-only types of databases where limited access would be granted. Changing database names on the users only breeds confusion and complicates code promotions. If you need to provide the same sort of solution for 30-40 or more databases, you can imagine what kind of confusion that would cause on a server where the data-filled databases actually exist (think dbcc clonedatabase types of solutions here, where the cloned database must be named differently).
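As a concrete illustration of that naming constraint, DBCC CLONEDATABASE produces a schema-and-statistics-only copy that must live under a new name on the same instance (the database names here are hypothetical):

```sql
-- Create a schema/statistics-only clone of AdventureWorks (no user data is copied).
-- The clone must use a different name because both databases share the instance.
DBCC CLONEDATABASE (AdventureWorks, AdventureWorks_Clone);
GO
```

The clone also arrives in read-only mode, so even this approach needs extra steps before a user can poke around freely.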
What to do? What to do?
Thankfully, there is still at least one more entirely viable option. We can use the power of security to give the user some semblance of what they want while still preventing them from viewing the data or causing performance issues.
Limiting access for prying eyes is fairly easy while still granting the opportunity to explore the database design. In SQL Server we have a permission called “View Definition” which will help us accomplish exactly this requirement. In preparation for this, one must determine the scope of the need. Generally speaking, this should be done on a per-database approach. There is a method to grant this for all databases on the server but that is a topic for another day. Once the scope is determined, it is time to proceed with the setup.
USE DingleBerry;
GO
CREATE ROLE [BusinessViewDef];
GO
GRANT VIEW DEFINITION TO [BusinessViewDef];
GO
I prefer to take advantage of Roles within the database. This simplifies the future maintenance and management of the permissions. If a new principal needs the same permissions as Joe from accounting, then I just add the new user to the same role(s) as Joe – piece of cake.
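For completeness, the test principal used below has to exist before it can be added to the role; a minimal sketch, assuming a SQL login named testvwdef (the password is a placeholder):

```sql
USE [master];
GO
-- Hypothetical SQL login for testing; substitute a policy-compliant password.
CREATE LOGIN [testvwdef] WITH PASSWORD = N'ChangeMe!ToSomethingStrong1';
GO
USE DingleBerry;
GO
-- Map the login to a database user so it can be added to the database role.
CREATE USER [testvwdef] FOR LOGIN [testvwdef];
GO
```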
ALTER ROLE BusinessViewDef ADD MEMBER testvwdef;
With the Role created, user added to the role, and the “View Definition” permission granted to the role, we are ready to test.
First order of business is to connect with that new login we created (a SQL login was created and is used in this example, but the effects are the same with a Windows login) as shown in the preceding image. After connecting, let’s try some things.
The first test I want to perform is to confirm the user has no read access to data to ensure this permission does not grant extra access along the way.
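A sketch of that test, assuming the LoginAudit table that appears in the later query (any user table in the database would do):

```sql
-- Connected as testvwdef, attempt to read actual data.
SELECT TOP (10) *
FROM dbo.LoginAudit;
-- Expected failure along these lines:
-- Msg 229: The SELECT permission was denied on the object 'LoginAudit', ...
```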
Great! Select access to objects from this permission alone does not work. Now, let’s test the ability to view object definitions. First, let’s try via the object designer.
First we see a warning about not having enough permissions to save changes, which is good. After clicking OK on that warning, a new message pops up to remind us that the table is “read only”. Perfect! The permission we gave this user does not allow them to alter objects – just view them (as we would expect).
Seeing these results should also confirm that the user can browse the table definition directly in SSMS. Let’s take a gander from a TSQL approach.
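The screenshot of that TSQL approach is not reproduced here; sp_help is one such system stored procedure that surfaces an object’s definition (the table name is assumed from the later example):

```sql
-- Connected as testvwdef: view column, index, and constraint metadata
-- for a single table via a system stored procedure.
EXEC sp_help N'dbo.LoginAudit';
GO
```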
Great, we can explore the table definitions with TSQL. That is, of course, a system stored procedure, so let’s try one more method.
USE DBA;
GO

DECLARE @tablename sysname = 'LoginAudit' --'Store' /* NULL for all tables, or a table name for a specific table */
	, @isMSShipped TINYINT = NULL /* NULL for all, 0 for user objects, 1 for ms objects */
	, @ColName sysname = NULL /* NULL for all, or a column name to search all tables for a specific column if @tablename is NULL */;

SELECT SCHEMA_NAME(t.schema_id) AS TabSchema
	, t.name AS TableName
	, c.name AS ColumnName
	, c.column_id AS ColOrdPosit
	, dt.name AS DataType
	, c.max_length AS ColLength
	, c.precision AS Precision
	, c.scale AS Scale
	, dc.type_desc
	, dc.definition
FROM sys.tables t
	INNER JOIN sys.columns c
		ON c.object_id = t.object_id
	INNER JOIN sys.types dt
		ON c.user_type_id = dt.user_type_id --For system data types, user_type_id = system_type_id
	LEFT OUTER JOIN sys.default_constraints dc
		ON c.object_id = dc.parent_object_id
		AND c.column_id = dc.parent_column_id
WHERE t.is_ms_shipped = ISNULL(@isMSShipped, t.is_ms_shipped)
	AND t.name = ISNULL(@tablename, t.name)
	AND c.name = ISNULL(@ColName, c.name)
ORDER BY TabSchema, TableName, ColumnName;
Running this query, I will see results similar to this.
Knowing that we can run custom queries to explore the schema is almost comforting. It does confirm that we are able to query system objects (yes, a select works here, but this is read-only object definition data). But does this mean the user can also see system data? Let’s try!
Excellent! So it seems we have a plausible option to keep the footprint small and minimize access to the data.
The Wrap
This article has explored several options for reducing data access for users who “need” it. With a little planning and care, we can find ways to fulfill user requests without compromising the environment.
Every now and then, it will require extra effort to get the pertinent details from the requester. That said, with that extra effort you will find satisfaction and calm, knowing your environment is secure and that you are providing quality service to your customers.
This has been another post in the back to basics series. Other topics in the series include (but are not limited to): Backups, backup history and user logins.