September 18, 2015 at 7:00 am
GilaMonster (9/16/2015)
<snip>Yup, that's one of the solutions in the simple-talk article.
Which I just forwarded to the manager of the dev team. They are planning a code review of the more frequently used Uberprocs.
Thank you for writing it!
September 18, 2015 at 8:41 am
You're welcome
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
September 22, 2015 at 6:51 am
kim / data detection group (9/17/2015)
Of course, SQL Server could certainly benefit by implementing the Package concept that Oracle provides. You can bundle all your related procedures and functions into a single Package for portability and maintenance, but internally they are all individual modules.
This avoids the ugliness and confusion of trying to do lots of different, unrelated things in a single procedure, yet you have all the source code in a single source file (ok, usually two with separate Package definition and body files).
There are LOTS of times when I need to use a simple local function within a stored procedure, but have to create and maintain a separate external source file to implement the function.
If you are using proper source control this is hardly an issue.
September 24, 2015 at 7:01 pm
Manic Star (9/22/2015)
If you are using proper source control this is hardly an issue.
Source code control is important, but beside the point.
Most higher-level languages have some form of encapsulation. It is annoying to have to create, publish, and catalog a new public stored procedure when I only need to use it in two or more places within a single procedure or function.
With Oracle, if I need to apply a simple algorithm in two places in a procedure, I can quickly define a local embedded function and call it twice.
procedure/function MyOuterProcedure
    function MyInnerFunction ...
    ...
    set @a = MyInnerFunction(...);
    set @b = MyInnerFunction(...);
end;
With SQL Server, I have to either create a separate public function (write all the procedural script, check it into source control, and make sure it is built and delivered with the end package) or copy/paste duplicate code (shudder!), even if the function will never be used anywhere else.
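To illustrate, the closest T-SQL equivalent I know of needs a separate, database-scoped function (a rough sketch, with made-up names):

CREATE FUNCTION dbo.MyInnerFunction (@x int)
RETURNS int
AS
BEGIN
    -- Placeholder for the "simple algorithm"
    RETURN @x * 2;
END;
GO

CREATE PROCEDURE dbo.MyOuterProcedure
AS
BEGIN
    DECLARE @a int, @b int;
    SET @a = dbo.MyInnerFunction(1);
    SET @b = dbo.MyInnerFunction(2);
END;
GO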
Further, if I have a collection of "local" procedures and functions, I can wrap them in a Package, and only expose those internal procedures/functions that I choose. The others are private to the Package.
Just a couple of features I wish Microsoft would add to SQL Server...
September 25, 2015 at 2:04 am
Manic Star (9/22/2015)
kim / data detection group (9/17/2015)
There are LOTS of times when I need to use a simple local function within a stored procedure, but have to create and maintain a separate external source file to implement the function.
If you are using proper source control this is hardly an issue.
Anyway, this local code is implemented as a global DB object, the same as the calling procedure.
September 28, 2015 at 7:00 am
For an in-house system with just over 10,000 records in the main entity table, I created a set of stored procedures of the following pattern:
For each entity (a table with an identity-column primary key), there are two stored procedures, covering the following related data operations:
1. read/query
2. create/update/delete (or pseudo-delete), which has more restricted permissions
Each type of stored procedure uses an overload parameter model.
For example, if procedure 2 is not passed an ID parameter, the call is treated as an INSERT; otherwise, if its delete BIT parameter is set, as a (pseudo-)DELETE; else as an UPDATE. This gives three (simple) queries in one procedure.
If procedure 1 is passed an ID parameter, it selects one row; otherwise it selects multiple rows filtered by the other parameters. Some instances of the procedure 1 pattern include a selectAll parameter to cope with populating Reporting Services dropdown parameters with an All/NULL label/value first record. Parent entity parameters are also included so that cascading parameters in reports are supported. We implemented a parameter pattern for title-column LIKE matching for SharePoint Business Connectivity Services user interface purposes.
However, in each case the recordsets returned by procedure 1 have identical columns and are produced by a single set-based query.
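As a rough sketch of the procedure 2 pattern (table and column names here are made up for illustration):

CREATE PROCEDURE dbo.Asset_Write
    @AssetID int           = NULL OUTPUT,
    @Title   nvarchar(100) = NULL,
    @Delete  bit           = 0
AS
BEGIN
    SET NOCOUNT ON;

    IF @AssetID IS NULL
    BEGIN
        -- No ID supplied: treat the call as an INSERT and pass the new key back.
        INSERT INTO dbo.Asset (Title) VALUES (@Title);
        SET @AssetID = SCOPE_IDENTITY();
    END
    ELSE IF @Delete = 1
        -- Pseudo-delete: flag the row rather than physically removing it.
        UPDATE dbo.Asset SET IsDeleted = 1 WHERE AssetID = @AssetID;
    ELSE
        UPDATE dbo.Asset SET Title = @Title WHERE AssetID = @AssetID;
END;
GO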
I am not sure from your examples if such overload cases are covered, but we have not encountered any noticeable performance problems with our production data operations.
One reason I brought in this approach is that there were often many stored procedures in older systems doing much the same thing, with some not being bug-fixed or optimized, and there was a maintenance and quality challenge.
Perhaps there is a happy medium?
September 28, 2015 at 7:30 am
With just over 10,000 rows, most things should run fast. As the table keeps growing, that might become an issue. You shouldn't create code based on entities, you should create code based on processes.
September 28, 2015 at 7:31 am
kim / data detection group (9/17/2015)
Of course, SQL Server could certainly benefit by implementing the Package concept that Oracle provides. You can bundle all your related procedures and functions into a single Package for portability and maintenance, but internally they are all individual modules.
This avoids the ugliness and confusion of trying to do lots of different, unrelated things in a single procedure, yet you have all the source code in a single source file (ok, usually two with separate Package definition and body files).
There are LOTS of times when I need to use a simple local function within a stored procedure, but have to create and maintain a separate external source file to implement the function.
If an application has a lot of stored procedures, I sometimes maintain the DDL for multiple related stored procedures in the same .sql script. That's similar to Oracle's concept of a package.
I'm thinking it probably has to do with code re-use, which is an area where T-SQL could improve. It makes me wonder if T-SQL could benefit from something like an ASP-style #include directive, so developers can stub in code blocks for things like error handling and variable declaration.
For example:
<!-- #include "errorhandler.sql" -->
If code reuse is what's motivating this, then analyze the code for one of the uber-procedures and see if leveraging common views across multiple procedures would be a better solution.
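For example, something roughly like this (made-up object names) keeps the shared query in one view and lets several thin procedures reuse it, all maintained in the same .sql script:

CREATE VIEW dbo.ActiveOrders
AS
SELECT OrderID, CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE IsCancelled = 0;
GO

CREATE PROCEDURE dbo.GetOrdersByCustomer @CustomerID int
AS
SELECT OrderID, OrderDate, TotalDue
FROM dbo.ActiveOrders
WHERE CustomerID = @CustomerID;
GO

CREATE PROCEDURE dbo.GetOrdersByDate @From date, @To date
AS
SELECT OrderID, CustomerID, TotalDue
FROM dbo.ActiveOrders
WHERE OrderDate >= @From AND OrderDate < @To;
GO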
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
September 28, 2015 at 7:36 am
Tavis Reddick (9/28/2015)
For an in-house system with just over 10,000 records in the main entity table, I created a set of stored procedures of the following pattern: For each entity (a table with an identity-column primary key), there are two stored procedures, covering the following related data operations:
1. read/query
2. create/update/delete (or pseudo-delete), which has more restricted permissions
Each type of stored procedure uses an overload parameter model.
For example, if procedure 2 is not passed an ID parameter, the call is treated as an INSERT; otherwise, if its delete BIT parameter is set, as a (pseudo-)DELETE; else as an UPDATE. This gives three (simple) queries in one procedure.
If procedure 1 is passed an ID parameter, it selects one row; otherwise it selects multiple rows filtered by the other parameters. Some instances of the procedure 1 pattern include a selectAll parameter to cope with populating Reporting Services dropdown parameters with an All/NULL label/value first record. Parent entity parameters are also included so that cascading parameters in reports are supported. We implemented a parameter pattern for title-column LIKE matching for SharePoint Business Connectivity Services user interface purposes.
However, in each case the recordsets returned by procedure 1 have identical columns and are produced by a single set-based query.
I am not sure from your examples if such overload cases are covered, but we have not encountered any noticeable performance problems with our production data operations.
One reason I brought in this approach is that there were often many stored procedures in older systems doing much the same thing, with some not being bug-fixed or optimized, and there was a maintenance and quality challenge.
Perhaps there is a happy medium?
Is there a reason to have those kinds of stored procedures at the DB level if they're just generic insert/update/delete procedures, instead of passing that off to the application?
September 28, 2015 at 7:50 am
Luis Cazares (9/28/2015)
With just over 10,000 rows, most things should run fast. As the table keeps growing, that might become an issue.
I agree, but since each row matches a physical asset in our organization and we are limited to a geographical region, I expect the table to grow roughly arithmetically, by a predictable amount each year, and even if the range of asset types expands, we are unlikely to be looking at millions of rows. If that did happen, we would likely need to change our approach.
You shouldn't create code based on entities, you should create code based on processes.
Could you expand on this please? We consider the stored procedures as an abstraction layer which represents the data operations on entities, which are typically physically represented as tables.
ZZartin (9/28/2015)
Is there a reason to have those kind of stored procedures at the DB level if they're just generic insert/update/delete procedures instead of passing that off to the application?
Partly for security (principle of least privilege), it is a lot simpler and more secure to limit access to a stored procedure which does only what is required. And for reasons like using pseudo-delete, not a real delete. And partly for things like passing back identifiers as return parameters on INSERT. And generally for encapsulation reasons. Performance, maybe. And the stored procedures will be used by more than one application (SharePoint, Reporting Services, InfoPath and a forthcoming batch operation, in our case).
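As a rough sketch of the least-privilege point (the role and object names are made up):

CREATE ROLE AssetApp;
GRANT EXECUTE ON OBJECT::dbo.Asset_Read  TO AssetApp;
GRANT EXECUTE ON OBJECT::dbo.Asset_Write TO AssetApp;
DENY INSERT, UPDATE, DELETE ON OBJECT::dbo.Asset TO AssetApp;
-- With same-owner ownership chaining, the procedures can still modify dbo.Asset
-- even though the application role has no direct DML rights on the table.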
September 28, 2015 at 8:40 am
Tavis Reddick (9/28/2015)
Luis Cazares (9/28/2015)
You shouldn't create code based on entities, you should create code based on processes.
Could you expand on this please? We consider the stored procedures as an abstraction layer which represents the data operations on entities, which are typically physically represented as tables.
For example, when inserting a customer, you're not only affecting the customer entity; you might also need to insert an address, phone numbers and contacts. You need to control everything within a transaction to be sure that you're not missing something in your process. Or when you insert a payment, you also affect the invoices or the customer's balance, which also means affecting several entities. Most tables will have parent-child (or header-detail) relationships, which need to be coded adequately.
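As a rough sketch of what I mean (the schema here is made up):

CREATE PROCEDURE dbo.Customer_Create
    @Name       nvarchar(100),
    @Address    nvarchar(200),
    @Phone      varchar(20),
    @CustomerID int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;

        INSERT INTO dbo.Customer (Name) VALUES (@Name);
        SET @CustomerID = SCOPE_IDENTITY();

        INSERT INTO dbo.CustomerAddress (CustomerID, Address) VALUES (@CustomerID, @Address);
        INSERT INTO dbo.CustomerPhone   (CustomerID, Phone)   VALUES (@CustomerID, @Phone);

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        -- Re-raise so the caller knows nothing was half-inserted.
        THROW;
    END CATCH;
END;
GO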
September 28, 2015 at 9:37 am
Luis Cazares (9/28/2015)
Tavis Reddick (9/28/2015)
Luis Cazares (9/28/2015)
You shouldn't create code based on entities, you should create code based on processes.
Could you expand on this please? We consider the stored procedures as an abstraction layer which represents the data operations on entities, which are typically physically represented as tables.
For example, when inserting a customer, you're not only affecting the customer entity; you might also need to insert an address, phone numbers and contacts. You need to control everything within a transaction to be sure that you're not missing something in your process. Or when you insert a payment, you also affect the invoices or the customer's balance, which also means affecting several entities. Most tables will have parent-child (or header-detail) relationships, which need to be coded adequately.
Thanks, Luis. Yes, I take your point.
However, these are just the base procedures, and we initially used them to support SharePoint External Lists (one list for the main Asset entity, many others for the lookup properties). These lookups are also present in InfoPath forms and Reporting Services dropdown parameter lists.
There is no corresponding case where, for example, we could create a new property value at the same time as we create a new instance of the main Asset entity. So the closest thing would be, if there was an Asset to be added to a new room that wasn't on our system, the room would have to be added first (and potentially a new parent building and campus), then picked from a list when inserting the Asset.
Even if that were not the case, I wonder if separate INSERT procedures per entity could still be appropriate in some cases, batched within a larger transaction.
September 28, 2015 at 9:52 am
In a system for which I was one of the principal architects, we code-generated stored procedures for each and every table that implemented full parameter validation and auditing, then added business layer procedures that grouped the "process"-related tables into business object transactions.
With this approach, we virtually eliminated errors at the database level - they were caught in the business layers, or at worst at the individual table procedure level and fully logged and reported up through the layers. We still had people making logical errors in the presentation layers, but the data in the database was consistent and correct.
We also enforced (with a serious threat of being fired) that no one was allowed to directly execute any SQL Insert, Update, or Delete against any table. All transactions that could alter data had to go through the appropriate stored procedures.
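As a rough sketch of that layering (all names are made up; the real procedures were code-generated and far more thorough):

-- Per-table procedure: validates parameters and writes an audit row.
CREATE PROCEDURE dbo.Order_Insert
    @CustomerID int,
    @OrderDate  date,
    @OrderID    int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    IF @CustomerID IS NULL OR @OrderDate IS NULL
        THROW 50001, 'Order_Insert: a required parameter is missing.', 1;

    INSERT INTO dbo.Orders (CustomerID, OrderDate) VALUES (@CustomerID, @OrderDate);
    SET @OrderID = SCOPE_IDENTITY();

    INSERT INTO dbo.AuditLog (TableName, Action, KeyValue)
    VALUES ('Orders', 'INSERT', @OrderID);
END;
GO

-- Business-layer procedure: groups the related table procedures into one transaction.
CREATE PROCEDURE dbo.SubmitOrder
    @CustomerID int,
    @OrderDate  date
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRY
        BEGIN TRANSACTION;
        DECLARE @OrderID int;
        EXEC dbo.Order_Insert @CustomerID, @OrderDate, @OrderID OUTPUT;
        -- ...further table procedures (order lines, balances) would go here...
        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;
    END CATCH;
END;
GO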
January 26, 2016 at 11:44 am
So we had a development. An SQL MVP from back in the .NET days told our devs that their beloved UberQueries were OK since the QA cached the plans at the statement level, completely ignoring the other parameter-sniffing variables they introduce.
*headdesk*
January 26, 2016 at 12:23 pm
Manic Star (1/26/2016)
So we had a development. An SQL MVP from back in the .NET days told our devs that their beloved UberQueries were OK since the QA cached the plans at the statement level, completely ignoring the other parameter-sniffing variables they introduce. *headdesk*
MVP does not always mean wisdom or knowledge.