February 24, 2004 at 9:36 am
February 24, 2004 at 11:43 am
Here is what another SQL site recommends:
http://www.sql-server-performance.com/stored_procedures.asp
I was the 'new DBA' appearing to revoke rights from all-knowing developers too. I do not envy you.
Michelle
February 24, 2004 at 2:23 pm
Aaack... the pains of a DBA. I'm pulling for you. I haven't read the article yet. A big issue is recompiles and compiles; that is an avenue you can take. I am sure I can find some resources on the performance issues there. You might also want to think about the network packets needed to send the query text to the database engine.
Other points for stored procs include manageable security, faster access to the SQL for the DBA when it is time to tune, etc.
I'm pulling for you on this!!!
"Keep Your Stick On the Ice" ..Red Green
February 24, 2004 at 2:35 pm
Okay, I'll jump in with my standard link on dynamic SQL. SQL Server MVP Erland Sommarskog has written some fine stuff on that at http://www.sommarskog.se/dynamic_sql.html .
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 24, 2004 at 2:38 pm
Was wondering when Frank would find this thread
"Keep Your Stick On the Ice" ..Red Green
February 24, 2004 at 2:42 pm
Just before it's time to go to bed and hopefully get some sleep.
As for big-name quotes, here are three from Joe Celko (am I overstressing him?). Three posts, because I don't know how long a single post can be.
>> It's bad to see you 'kludge' code also. You should be using modular
coding techniques via stored procedures and no longer embedding SQL -
it's a maintenance nightmare - I know, I make a good living going round
clients trying to debug/enhance the performance of applications that
have been written with embedded SQL. <<
I have a few publication credits in the area of modular coding
techniques. I used to write a column in the trade press back in the late
1970s on Software Engineering while I was at AIRMICS, setting up coding
standards for the US Army.
Dynamic SQL is a total violation of modular coding. Look up coupling and
cohesion. Weak cohesion: the functionality of the dynamic SQL can be
changed by the invoking module. Strong coupling: the invoking module must
know exactly what data is coming back, since it can change with each call.
A module is defined by what (preferably single) function it performs,
not by what language(s) it is written in. For maintaining code, I found
it was nice to be able to see the host language and the SQL all in one
place. Yes, there is a little mental shifting when you read two
languages intermixed, but if you really get stuck, get one guy who is
strong on the host language and one who is strong on the SQL.
Using "EXEC SQL CALL SomeProc();" is embedded SQL, but if the procedure
call is simply to a block of code with only a FETCH NEXT statement, why
not use "EXEC SQL FETCH NEXT SomeCursor ...;" and show the poor guy
maintaining the code what is happening?
T-SQL and the other vendor 4GLs were not meant to be applications
development languages and SQL is supposed to be a data retrieval
language, period. You have to use embedded SQL to do anything. (Oops!
Exception: Informix 4GL was meant to be a development language and a lot
of mainframe packages are written in it. I am not sure if Oracle's
PL/SQL is really being used that way. )
When structured programming first hit as a fad, we used to get
over-modularized code -- every few statements were cut and pasted into a
procedure body without much sense to them. My favorite was the "rule"
that a module had to be (n = (50, 100, whatever)) lines of code, so the
programmer would spend all his time trying to re-arrange code to fit
into the magic page size.
>> You are still dynamically creating the SQL, in VB/C++ for example you
still dynamically build the SQL inside the application by using string
concatenation and then execute the SQL against the database you are
using. <<
Unh? Embedded SQL is compiled. You put the source code thru a
pre-processor, which looks for EXEC SQL flags, etc. and converts it into
API calls. Then you send that output to the regular host language
compiler and link it to the API library. That is why embedded SQL is so
many times faster than dynamic SQL and so much safer -- the users are
running an executable, not an interpreter. There is more to life than VB
and C++ -- COBOL is still the king, even if it gets no respect.
When I write a stored procedure, I try to keep it as close to pure SQL
as I can. Avoid IF-THEN-ELSE, WHILE loops, local variables, temp tables,
proprietary code, etc. I try for a BEGIN-END block of code with some
parameter checking at the top, pure SQL in the middle, some error
handling at the end. Basically, I want a McCabe number of one.
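A minimal sketch of the kind of procedure described above -- parameter check at the top, pure SQL in the middle, error handling at the end. The Orders table, its columns and the procedure name are made up for illustration:
-------------------------------
CREATE PROCEDURE GetOrdersForCustomer
    @customer_id INT
AS
BEGIN
    -- parameter checking at the top
    IF @customer_id IS NULL
    BEGIN
        RAISERROR('customer_id is required', 16, 1)
        RETURN 1
    END

    -- pure SQL in the middle
    SELECT order_id, order_date, total_amount
    FROM Orders
    WHERE customer_id = @customer_id

    -- error handling at the end
    IF @@ERROR <> 0
        RETURN 2

    RETURN 0
END
-------------------------------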
--CELKO--
===========================
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 24, 2004 at 2:43 pm
#2
>> a generic function that concatenates results of a
query into a string ...<<
In a client/server architecture, the display is done by the front end,
not on the database side. SQL is a strongly typed language and it does not
like generic functions. The idea is to know what you are doing
*before* run time, instead of letting the users, present and any future
ones, make up functionality on the fly.
>> I am able to create the function but I cannot determine how to define
a variable table in the select. <<
The short answer is to use slow, proprietary dynamic SQL to kludge a query
together on the fly with your table name in the FROM clause.
The right answer is never pass a table name as a parameter.
1) This is dangerous because some user can insert pretty much whatever
they wish -- consider the string 'Foobar; DELETE FROM Foobar; SELECT *
FROM Floob' in your statement string (a short sketch of this appears at the end of this post).
2) It says that you have no idea what you are doing, so you are giving
control of the application to any user, present or future. Remember the
basics of Software Engineering? Modules need weak coupling and strong
cohesion, etc.
3) If you have tables with the same structure which represent the same
kind of entities, then your schema is not orthogonal. Look up what
Chris Date has to say about this design flaw.
Go back to basics. What is a table? A model of a set of entities or
relationships. EACH TABLE SHOULD BE A DIFFERENT KIND OF ENTITY.
What a generic procedure that works equally on automobiles, octopi, or
Britney Spears's discography is saying is that your application is a disaster of
design, because you have:
1) failed to put all items of the same kind into one table. Chris Date
calls this orthogonal design, and I call it attribute splitting. Common
example: 12 identical tables, one for each month, with the same
information in them instead of a single table with a temporal attribute.
2) failed to tell the difference between data and meta-data. The SQL
engine has routines for that stuff and applications do not work at that
level, if you want to have any data integrity.
Yes, you can write a program with dynamic SQL to kludge something like
this. It will last about a year in production and then your data
integrity is shot. Please stop writing code like this before you
destroy your database.
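A minimal sketch of the injection risk in point 1 versus a parameterized procedure; the Foobar table, its id column and the procedure name are assumptions for illustration:
-------------------------------
-- Dynamic SQL: whatever lands in @tbl is executed as code, so
-- 'Foobar; DELETE FROM Foobar; SELECT * FROM Floob' runs as written
DECLARE @tbl VARCHAR(200), @sql VARCHAR(4000)
SET @tbl = 'Foobar; DELETE FROM Foobar; SELECT * FROM Floob'
SET @sql = 'SELECT * FROM ' + @tbl
EXEC (@sql)

-- Stored procedure: the typed parameter is only ever treated as a value
CREATE PROCEDURE GetFoobarRow
    @id INT
AS
SELECT * FROM Foobar WHERE id = @id
-------------------------------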
--CELKO--
===========================
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 24, 2004 at 2:45 pm
#3
The short answer is to use slow, proprietary dynamic SQL to kludge a query
together on the fly with your table name in the FROM clause.
The right answer is never pass a table name as a parameter. You need to
understand the basic idea of a data model and what a table means in
implementing a data model.
1) This is dangerous because some user can insert pretty much whatever
they wish -- consider the string 'Foobar; DELETE FROM Foobar; SELECT *
FROM Floob' in your statement string.
2) It says that you have no idea what you are doing, so you are giving
control of the application to any user, present or future. Remember the
basics of Software Engineering? Modules need weak coupling and strong
cohesion, etc.
3) If you have tables with the same structure which represent the same
kind of entities, then your schema is not orthogonal. Look up what
Chris Date has to say about this design flaw.
Go back to basics. What is a table? A model of a set of entities or
relationships. EACH TABLE SHOULD BE A DIFFERENT KIND OF ENTITY.
What a generic procedure that works equally on automobiles, octopi, or
Britney Spears's discography is saying is that your application is a disaster of
design, because you have:
1) failed to put all items of the same kind into one table. Chris Date
calls this orthogonal design, and I call it attribute splitting. Common
example: 12 identical tables, one for each month, with the same
information in them instead of a single table with a temporal attribute.
2) failed to tell the difference between data and meta-data. The SQL
engine has routines for that stuff and applications do not work at that
level, if you want to have any data integrity.
Yes, you can write a program with dynamic SQL to kludge something like
this. It will last about a year in production and then your data
integrity is shot.
--CELKO--
===========================
Please post DDL, so that people do not have to guess what the keys,
constraints, Declarative Referential Integrity, datatypes, etc. in your
schema are.
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 24, 2004 at 3:35 pm
After Frank's diatribe, I'm not sure my input is needed, but here are a couple of thoughts.
1. In SQL 2000, dynamic SQL can be cached as well, so there is slightly less of an argument. However, not all queries are cached, so there can still be a benefit to a stored procedure (see the sketch below).
2. Consolidation - Does your dev manager disallow functions and subroutines from being called? Doesn't he consolidate stuff into a function? Whether or not it will be used in another place, it still gets wrapped. That's what a stored procedure does.
3. Efficiency - You can split the workload and let the DBA optimize SQL in one place, and it takes effect for all calls. You are not depending on the developer to check his SQL, which he will most likely not do. It also prevents stupid mistakes, since the DBA can syntax check, verify results, catch cross joins, etc.
There are other reasons depending on what you build. Compiled app? You can change the SQL without a build.
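On point 1, a small sketch of the caching difference (the Customers table and its columns are made up): an ad hoc string built with a different literal each time tends to compile its own plan, while a parameterized call through sp_executesql -- like a stored procedure -- can reuse one plan:
-------------------------------
-- Ad hoc: each distinct literal can end up with its own cached plan
DECLARE @sql NVARCHAR(4000)
SET @sql = N'SELECT name FROM Customers WHERE customer_id = 42'
EXEC (@sql)

-- Parameterized: one plan, reused for every value of @id
EXEC sp_executesql
    N'SELECT name FROM Customers WHERE customer_id = @id',
    N'@id INT',
    @id = 42
-------------------------------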
February 24, 2004 at 6:46 pm
First, a simple demonstration of SP security versus the inline method should help.
Inline
-------------------------------
Create a table with several data elements.
Create a user.
Give that user read/write access to the table created.
Change the data in all the fields.
SP
-------------------------------
Using the table from Inline, create two procedures:
1 - one to select data from the table, and only specific columns
2 - one that updates only one column, based on the id field
Create a user with execute rights on these.
Demonstrate that you cannot update (or read) the table directly,
then read with the 1st SP and update with the 2nd.
This is the highest power of an SP: if a user got hold of the account and password in the first case, they could severely damage the data or see things that they should not, whereas in the second case you limit them to a much better extent.
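A rough T-SQL sketch of the two demonstrations above, using SQL Server 2000 syntax (sp_addlogin / sp_grantdbaccess); the table, login and procedure names are invented:
-------------------------------
-- Inline case: inline_user needs direct table rights, so it can read and
-- change every column
CREATE TABLE Employees (emp_id INT PRIMARY KEY, emp_name VARCHAR(50), salary MONEY)
EXEC sp_addlogin 'inline_user', 'StrongPassword1'
EXEC sp_grantdbaccess 'inline_user'
GRANT SELECT, INSERT, UPDATE ON Employees TO inline_user
GO

-- SP case: sp_user gets EXECUTE only; the table itself stays locked down
EXEC sp_addlogin 'sp_user', 'StrongPassword2'
EXEC sp_grantdbaccess 'sp_user'
GO
CREATE PROCEDURE GetEmployeeName @emp_id INT
AS
SELECT emp_name FROM Employees WHERE emp_id = @emp_id
GO
CREATE PROCEDURE UpdateEmployeeName @emp_id INT, @emp_name VARCHAR(50)
AS
UPDATE Employees SET emp_name = @emp_name WHERE emp_id = @emp_id
GO
GRANT EXECUTE ON GetEmployeeName TO sp_user
GRANT EXECUTE ON UpdateEmployeeName TO sp_user
-- sp_user has no rights on Employees, so direct SELECT/UPDATE fail,
-- but the two procedures work (ownership chaining covers the table access)
-------------------------------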
Second, demonstrate the powers of SPs for communication (note: you need to use ADO with the Command object and Parameters to get the full effect); a sketch follows this list.
Multiple steps wrapped together logically and in a contained transaction.
Output parameters can be used to return limited data and avoid the need to return a recordset at all.
Multiple recordsets in a simple-to-call package; using NextRecordset on the ADO Recordset object you can move through them all, without multiple calls to SQL.
Do insert, update, delete, and/or select all in one pass, no need to code each step.
There are other things you could do.
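A rough sketch of what that can look like in one procedure (the Orders/Customers tables, their columns, and the ADO side are assumptions; error handling is abbreviated): an insert and an update wrapped in one transaction, an OUTPUT parameter for the new key, and two result sets in a single round trip that ADO can walk with NextRecordset:
-------------------------------
CREATE PROCEDURE ProcessOrder
    @customer_id INT,
    @amount MONEY,
    @new_order_id INT OUTPUT
AS
BEGIN
    BEGIN TRAN

    -- step 1: insert the order and capture its key for the OUTPUT parameter
    INSERT INTO Orders (customer_id, amount) VALUES (@customer_id, @amount)
    SET @new_order_id = SCOPE_IDENTITY()

    -- step 2: update the customer in the same transaction
    UPDATE Customers SET last_order_date = GETDATE()
    WHERE customer_id = @customer_id

    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRAN
        RETURN 1
    END

    COMMIT TRAN

    -- two result sets returned in one call; ADO reads the second via NextRecordset
    SELECT order_id, amount FROM Orders WHERE order_id = @new_order_id
    SELECT customer_id, last_order_date FROM Customers WHERE customer_id = @customer_id

    RETURN 0
END
-------------------------------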
Also, using Profiler, run a trace for cache misses with inline SQL and with SPs to compare.
Third, facts you can use.
If an application is of the compiled variety and something is found wrong, inline code requires a rebuild and redistribution; an SP can be fixed without this.
Execution plans are saved with SPs, where inline SQL has to build a plan at execution time; this gives a boost in time.
Lastly, he may try it and find no difference in performance from his end, so security will be a top point. However, this is generally because the programming method may be outdated (using DAO as opposed to ADO, straight command text instead of a stored proc as the CommandType in the Command object, not using parameters properly, etc.).
I had other stuff earlier but closed Notepad without saving. (Oops)
February 25, 2004 at 12:57 am
After Frank's diatribe, I'm not sure my input is needed, but here are a couple of thoughts.
It's just to help the original poster to collect some ammunition for his meeting
--
Frank Kalis
Microsoft SQL Server MVP
Webmaster: http://www.insidesql.org/blogs
My blog: http://www.insidesql.org/blogs/frankkalis/
February 25, 2004 at 4:48 am
February 25, 2004 at 6:59 am
Wow, I cannot believe that a so-called development manager would argue against the benefits of stored procedures. Sounds to me like they are just being lazy and don't want to do the work to convert the code.
The best suggestion I saw here was to compare the stored procedure to a function within code. If the program called for parsing a string in the same way several times within the program, would he have his developers do it inline every time, or would he have them write a function? What if a change comes up? If it's inline, it has to be corrected everywhere.
Even when I first started learning about SQL, as a developer, I saw the benefit of not having to recompile a program every time there was a data-related business rule change. Good luck, let us know how it goes.
February 25, 2004 at 7:10 am
Personally, I think it is more of a control issue. By using SPs, the DBA potentially takes their hand out of the pie, and they don't like it.
February 25, 2004 at 7:15 am
It could be, but then that person is not really a good fit for the company, in my opinion. In business it has to be "What is best for the company?" not "What is best to keep my hand in the cookie jar?". Amazing how diverse human beings are in their thoughts. As a developer, I would rather have the DBA looking at my code and making sure it is optimal, but I have worked with others who hate it precisely for the reason you mentioned.