September 2, 2013 at 11:01 pm
Comments posted to this topic are about the item Variable Default Values to a Stored Procedure (SQL Spackle)
My thought question: Have you ever been told that your query runs too fast?
My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.
Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
September 3, 2013 at 2:08 am
The parameters for a stored procedure have to be declared at design time and, presumably, you need to know how those parameters will be used within the procedure. So, if you know which parameters your stored proc needs, why would you not have one strongly-typed column in your control table which corresponds to each SP parameter and do away with all the nasty string-splitting business?
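Something like this, in other words, for the article's example procedure (a sketch only - the table name and the int types are my assumptions, not the article's):
CREATE TABLE dbo.MySPDefaults
(EnterpriseID varchar(40) NOT NULL PRIMARY KEY
,MySPParm1 int NULL -- default for @MySPParm1
,MySPParm2 int NULL -- default for @MySPParm2
,MySPParm3 int NULL -- default for @MySPParm3
);
-- Inside the procedure, the defaults are applied with one strongly-typed read:
SELECT @MySPParm1 = COALESCE(@MySPParm1, MySPParm1),
       @MySPParm2 = COALESCE(@MySPParm2, MySPParm2),
       @MySPParm3 = COALESCE(@MySPParm3, MySPParm3)
FROM dbo.MySPDefaults
WHERE EnterpriseID = @EnterpriseID;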
September 3, 2013 at 4:24 am
The basic idea is nice, but I have my doubts concerning the implementation.
First of all - why is there no primary key defined on the table? And why are all columns nullable?
Second - why not use a properly normalized table?
CREATE TABLE dbo.Configurations
(EnterpriseID varchar(40) NOT NULL
,ConfigName varchar(40) NOT NULL
,ParamName varchar(40) NOT NULL
,ConfigValue varchar(8000) NULL
,CONSTRAINT pk_Configurations
PRIMARY KEY(EnterpriseID, ConfigName, ParamName)
);
INSERT INTO dbo.Configurations (EnterpriseID, ConfigName, ParamName, ConfigValue)
VALUES ('ACME', 'MySP', '@MySPParm1', '1'),
('ACME', 'MySP', '@MySPParm2', '2'),
('ACME', 'MySP', '@MySPParm3', '3'),
('ACE', 'MySP', '@MySPParm1', '4'),
('ACE', 'MySP', '@MySPParm2', '5'),
('ACE', 'MySP', '@MySPParm3', '6'),
('ALLIED', 'MySP', '@MySPParm1', '7'),
('ALLIED', 'MySP', '@MySPParm2', '8'),
('ALLIED', 'MySP', '@MySPParm3', '9');
SELECT *
FROM dbo.Configurations;
That would also make the rest of the code a lot simpler. No need anymore for the string splitter.
Of course, the table is still the EAV design, with all problems associated with it. But I guess that for a relatively small table such as this, one can live with it. (Just make sure never to use the values directly in a query, always assign them to a variable, of the correct datatype, first. Implicit conversions in plans can cause major performance issues).
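For example (a sketch against the table above; the hard-coded key values are just for illustration):
DECLARE @MySPParm1 int;
SELECT @MySPParm1 = CONVERT(int, ConfigValue)
FROM dbo.Configurations
WHERE EnterpriseID = 'ACME'
  AND ConfigName = 'MySP'
  AND ParamName = '@MySPParm1';
-- @MySPParm1 now has the correct type; queries that use it won't
-- suffer implicit conversions from the varchar ConfigValue column.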
September 3, 2013 at 4:28 am
Hugo, that's why I suggested having one column per parameter in my previous post - so that they can be strongly typed rather than stored as strings as in the original post and your EAV approach 🙂
September 3, 2013 at 4:46 am
pierre-702284 (9/3/2013)
Hugo, that's why I suggested having one column per parameter in my previous post - so that they can be strongly typed rather than stored as strings as in the original post and your EAV approach 🙂
I was interrupted while typing my reply. I actually started it before you posted, so I had not seen it.
That would work, but it would introduce a new problem. What if you want to do this for seventy stored procedures? Some of them have two parameters; others may have up to six. How many columns will your table eventually have? (And most of them will be NULL - so I guess this would be a good use case for SPARSE columns.)
On the other hand, you can avoid some of the duplication by using a standard naming convention for the parameters. If multiple stored procedures use a parameter @CustomerID, they could all use the same column from the Configurations table.
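A rough sketch of what that combination might look like (table and column names invented for illustration; SPARSE keeps the many NULLs from consuming storage):
CREATE TABLE dbo.ProcDefaults
(EnterpriseID varchar(40) NOT NULL PRIMARY KEY
,CustomerID int SPARSE NULL -- shared by every procedure that takes @CustomerID
,StartDate date SPARSE NULL -- hypothetical shared parameter
,BatchSize int SPARSE NULL -- hypothetical shared parameter
);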
September 3, 2013 at 4:48 am
I'm echoing the responses of the others but, if I were working on a database-only solution and didn't have the benefits of a front-end app to manage and supply parameters to the procedures, I'd be inclined to make a basic table for each procedure, in a 'configs' schema.
Each table would be keyed (or at least indexed) off of the organisational unit (be it user-, department- or even client-based), and would provide a strongly-typed column-to-parameter mapping, e.g.
CREATE TABLE configs.the_procedure (
usr_id INT PRIMARY KEY,
param1 INT,
param2 VARCHAR(20),
param3 DATETIME,
CONSTRAINT fk_the_procedure_usr_id FOREIGN KEY (usr_id)
REFERENCES users (usr_id)
);
The procedure could then use your technique (sans the splitting/pivoting) to fetch the parameter values at the top of the run:
SELECT
@param1 = COALESCE(@param1, param1),
@param2 = COALESCE(@param2, param2),
@param3 = COALESCE(@param3, param3)
FROM configs.the_procedure
WHERE usr_id = @usr_id;
Once you know your basic structure, you can even codegen the config table definitions from the sys.parameters view (or INFORMATION_SCHEMA.PARAMETERS) fairly easily.
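Something along these lines generates the column list for one procedure (a rough sketch - 'dbo.MySP' is a hypothetical proc name, and decimal precision/scale handling is left out):
SELECT ',' + STUFF(p.name, 1, 1, '') + ' '
     + TYPE_NAME(p.user_type_id)
     + CASE WHEN TYPE_NAME(p.user_type_id) IN ('char', 'varchar', 'varbinary')
            THEN '(' + CASE p.max_length WHEN -1 THEN 'max'
                       ELSE CONVERT(varchar(5), p.max_length) END + ')'
            WHEN TYPE_NAME(p.user_type_id) IN ('nchar', 'nvarchar')
            THEN '(' + CASE p.max_length WHEN -1 THEN 'max'
                       ELSE CONVERT(varchar(5), p.max_length / 2) END + ')'
            ELSE '' END
     + ' NULL' -- one nullable, correctly-typed column per parameter
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID('dbo.MySP')
  AND p.parameter_id > 0
ORDER BY p.parameter_id;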
September 3, 2013 at 4:57 am
The article said there was a whole lot of configuration management in the front-end app, so why not keep it there? Make wrapper sprocs for each set of default variables, and store which sproc is called for which enterprise alongside the rest of the configuration management. Otherwise you introduce the likelihood that someone forgets to update this control table, or that deployments leave a significant window, between when the table updates are applied and when the code is applied, during which something won't work.
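Something like this per enterprise, in other words (proc names and values invented for the sake of the sketch):
CREATE PROCEDURE dbo.MySP_ACME
AS
BEGIN
    -- ACME's defaults are baked into the wrapper; the front end's
    -- configuration management just records which wrapper to call.
    EXEC dbo.MySP @MySPParm1 = 1, @MySPParm2 = 2, @MySPParm3 = 3;
END;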
September 3, 2013 at 4:59 am
I might also add that, for FUNCTIONs, given that I like to use inline table functions as much as is reasonable, I'd probably go with the following:
WITH params AS (
    SELECT TOP 1 a.* FROM ( --> I'm being lazy here, expand this if SCHEMABINDING is required
        SELECT TOP 1
            param1 = COALESCE(@param1, param1),
            param2 = COALESCE(@param2, param2),
            param3 = COALESCE(@param3, param3)
        FROM configs.the_function
        WHERE usr_id = @usr_id
        UNION ALL
        SELECT
            param1 = @param1,
            param2 = @param2,
            param3 = @param3
    ) a -- first branch: persisted config (if any); second: the raw parameters as fallback
)
SELECT b.foo
FROM bar b, params p
WHERE b.baz = p.param1
...but I haven't done any real testing to see whether the inlining performs significantly better than a simple multi-statement function, so YMMV.
[edited to handle the case where a user-specific config has not been persisted]
September 3, 2013 at 5:42 am
wbrianwhite 33048 (9/3/2013)
Otherwise you introduce the likelihood that...there is a significant time in deployments between when table updates are applied and code is applied during which something won't work.
This is a particularly good point, especially if you have to add parameters to a procedure.
With all centralised-configuration approaches, I find you either become particularly resistant to signature changes, or heavily reliant on appropriate default values.
This, of course, leads to another concern:
Eventually, your config tables essentially become code, with all the versioning concerns that brings. Be prepared to script the data in such tables out to INSERT statements and keep those scripts in version control (maybe even as a nightly job). Either that, or be prepared to write some painful version/SCD logic into the tables (which will add significantly to the per-procedure boilerplate).
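As a sketch of the nightly-scripting idea, run against Hugo's Configurations table from earlier in the thread (the REPLACE calls double any embedded quotes):
SELECT 'INSERT INTO dbo.Configurations'
     + ' (EnterpriseID, ConfigName, ParamName, ConfigValue)'
     + ' VALUES (''' + REPLACE(EnterpriseID, '''', '''''') + ''''
     + ', ''' + REPLACE(ConfigName, '''', '''''') + ''''
     + ', ''' + REPLACE(ParamName, '''', '''''') + ''''
     + ', ' + ISNULL('''' + REPLACE(ConfigValue, '''', '''''') + '''', 'NULL')
     + ');'
FROM dbo.Configurations
ORDER BY EnterpriseID, ConfigName, ParamName;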
September 3, 2013 at 7:29 am
In order to bridge the gap between Mr. Hugo's excellent comments and other comments about multiple columns, I've seen systems that used a type-specific column in their normalized settings table to handle strongly typing the parameters (or settings). That way a 2 is typed as Int (or BigInt, etc) and not as a string to be inlined into a dynamic query setting.
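The shape I've seen is roughly this (a sketch; the names and the set of type columns are illustrative):
CREATE TABLE dbo.TypedConfigurations
(EnterpriseID varchar(40) NOT NULL
,ParamName varchar(40) NOT NULL
,ValueType char(1) NOT NULL -- 'I' = IntValue, 'S' = StringValue, 'D' = DateValue
,IntValue int NULL
,StringValue varchar(8000) NULL
,DateValue datetime NULL
,CONSTRAINT pk_TypedConfigurations PRIMARY KEY (EnterpriseID, ParamName)
);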
September 3, 2013 at 9:27 am
Great article.
We have had the same challenge. We overcame it in a similar way but more normalized. We have a table that contains all the options that can be configured. Then another table that contains the configured value for each option. One row per option. If there is no entry in the value table then the system falls back to the default value in the options table.
We also were confronted with the strongly-typed issue. We thought about having a value column for each SQL-supported data type and a column that indicated which one to use. We decided that while that might be closer to pure, it had bad consequences too. The short answer here is that a fixed-width column consumes the same amount of space even if it is NULL. So we swallowed hard and covered this another way.
We went further and put options in groups and we have a table for that. We can even assign configurations to specific users. We tightly control who gets to change what.
It is complicated but it saves us in so many other ways.
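The two-table layout boils down to something like this (column names here are invented for the sketch; our real schema is more elaborate):
CREATE TABLE dbo.Options
(OptionID int NOT NULL PRIMARY KEY
,OptionName varchar(40) NOT NULL
,DefaultValue varchar(8000) NULL
);
CREATE TABLE dbo.ConfiguredValues
(OptionID int NOT NULL PRIMARY KEY
    REFERENCES dbo.Options (OptionID)
,ConfigValue varchar(8000) NULL
);
-- Fall back to the default when no value has been configured:
SELECT o.OptionName,
       EffectiveValue = COALESCE(cv.ConfigValue, o.DefaultValue)
FROM dbo.Options o
LEFT JOIN dbo.ConfiguredValues cv
  ON cv.OptionID = o.OptionID;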
ATB, Charles Kincaid
September 3, 2013 at 2:29 pm
venoym (9/3/2013)
In order to bridge the gap between Mr. Hugo's excellent comments and other comments about multiple columns, I've seen systems that used a type-specific column in their normalized settings table to handle strongly typing the parameters (or settings). That way a 2 is typed as Int (or BigInt, etc) and not as a string to be inlined into a dynamic query setting.
I'd use SQL_Variant for such a thing. It's still "loosely" typed in that it accepts whatever data you put in, but each value is treated as the type it was stored with. You can even interrogate the SQL_Variant column with the SQL_VARIANT_PROPERTY(SomeSQLVariantColumnOrVariable,'BaseType') function.
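A quick sketch of what that looks like:
DECLARE @v sql_variant = CONVERT(int, 2);
SELECT StoredValue = @v,
       BaseType = SQL_VARIANT_PROPERTY(@v, 'BaseType'); -- returns 'int'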
--Jeff Moden
Change is inevitable... Change for the better is not.
September 3, 2013 at 7:32 pm
Hugo Kornelis (9/3/2013)
The basic idea is nice, but I have my doubts concerning the implementation. First of all - why is there no primary key defined on the table? And why are all columns nullable?
Answer to that is simple: sloppiness on my part! Indeed, my application's configuration table has a primary key. The columns in my case are nullable because there are actually three that can hold configuration values (unfortunately all of the same type), so some (never all) may very well be NULL. In my actual case, I had about 10 parameterized values to pack into those 3 columns, and I found I could do it this way with just one.
Hugo Kornelis (9/3/2013)
Second - why not use a properly normalized table?
CREATE TABLE dbo.Configurations
(EnterpriseID varchar(40) NOT NULL
,ConfigName varchar(40) NOT NULL
,ParamName varchar(40) NOT NULL
,ConfigValue varchar(8000) NULL
,CONSTRAINT pk_Configurations
PRIMARY KEY(EnterpriseID, ConfigName, ParamName)
);
INSERT INTO dbo.Configurations (EnterpriseID, ConfigName, ParamName, ConfigValue)
VALUES ('ACME', 'MySP', '@MySPParm1', '1'),
('ACME', 'MySP', '@MySPParm2', '2'),
('ACME', 'MySP', '@MySPParm3', '3'),
('ACE', 'MySP', '@MySPParm1', '4'),
('ACE', 'MySP', '@MySPParm2', '5'),
('ACE', 'MySP', '@MySPParm3', '6'),
('ALLIED', 'MySP', '@MySPParm1', '7'),
('ALLIED', 'MySP', '@MySPParm2', '8'),
('ALLIED', 'MySP', '@MySPParm3', '9');
SELECT *
FROM dbo.Configurations;
That would also make the rest of the code a lot simpler. No need anymore for the string splitter.
Of course, the table is still the EAV design, with all problems associated with it. But I guess that for a relatively small table such as this, one can live with it. (Just make sure never to use the values directly in a query, always assign them to a variable, of the correct datatype, first. Implicit conversions in plans can cause major performance issues).
I agree with you on this also. Normalization would have been a good way to reduce complexity (and be a good database designer). The irritating thing is that I see forum posters all the time coming back with an answer like "I can't change that because that's the design I'm forced to work within," and now I must use that same lame excuse myself. This was in a 10-year-old legacy application that literally has thousands of these configurations scattered throughout forms, SPs, FUNCTIONs and god knows what else. It would have been a daunting task to retrofit all of that.
Of course, I could have simply created a new Enterprise Configs table and started fresh to move into the future, but I chose not to do that. Still, for the article, I probably could have proposed the better way.
Thanks Hugo, for taking the time and interest to suggest this alternative.
September 3, 2013 at 7:34 pm
Jeff Moden (9/3/2013)
venoym (9/3/2013)
In order to bridge the gap between Mr. Hugo's excellent comments and other comments about multiple columns, I've seen systems that used a type-specific column in their normalized settings table to handle strongly typing the parameters (or settings). That way a 2 is typed as Int (or BigInt, etc) and not as a string to be inlined into a dynamic query setting.
I'd use SQL_Variant for such a thing. It's still "loosely" typed in that it accepts whatever data you put in, but each value is treated as the type it was stored with. You can even interrogate the SQL_Variant column with the SQL_VARIANT_PROPERTY(SomeSQLVariantColumnOrVariable,'BaseType') function.
Interesting that you should suggest that Jeff. As I was reading through Hugo's response, I was thinking along the same lines, showing that a Padawan Apprentice eventually learns to think like his Master.
September 3, 2013 at 7:36 pm
And a general thanks to everyone who took the time to read and/or post a comment on this article. I'm seeing some great suggestions here, and probably all of them are workable in one form or another.
I'm hoping that the idea itself was good enough to at least partially overcome the noted deficiencies in the table design.