Using Views

  • I agree with Gary's first comment above; views are not a patch on stored procedures as a mechanism for protecting the schema from becoming frozen/unchangeable because the apps depend on knowing its exact structure. Stored procedures do a much better job of this. Anyone who has read my other posts on defending the schema from being frozen by the app is already aware of my views on this.

    Someone else pointed out that it's impossible to update views that involve more than one table. That's an implementation restriction; there's nothing in relational theory that prevents updates through views involving multiple tables, provided it's computationally possible to determine exactly which rows in which tables are involved in the update. However, the general problem "is this update on this view possible?" is Turing-undecidable (proved by Buff in 1986), so Codd's original Rule 6 had to be modified: instead of requiring that all possible updates through views be valid, it requires the system to decide which updates are valid, and all the implementations are very lazy about this - for example, T-SQL only allows an update through a view to touch a single base table. That means any update involving two tables has to be done by the app as two separate updates if views are used, unless INSTEAD OF triggers are used to get round the problem (a sketch follows this paragraph). In other words, using views to separate the schema from the app forces you either to use triggers or to have the app issue multiple queries to do the update, each of which is undesirable. Using stored procedures to separate the schema from the app doesn't cause this problem. The delete case is even worse than the update case - almost no deletions on multi-table views are possible. So using views is usually going to cause performance problems unless you use triggers, and it's at least as easy to write a set of stored procedures as it is to write a set of triggers.
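    Roughly, the INSTEAD OF workaround looks like this - a minimal sketch with made-up Customer/CustomerAddress names, just to show the shape of it, not code from any real system:

    CREATE VIEW dbo.CustomerDetail
    AS
    SELECT c.CustomerID, c.Name, a.City
    FROM dbo.Customer AS c
    JOIN dbo.CustomerAddress AS a ON a.CustomerID = c.CustomerID;
    GO
    -- Without this trigger, an UPDATE through the view that sets both Name and City
    -- fails, because it would touch two base tables.
    CREATE TRIGGER dbo.trg_CustomerDetail_Update
    ON dbo.CustomerDetail
    INSTEAD OF UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        UPDATE c SET c.Name = i.Name
        FROM dbo.Customer AS c
        JOIN inserted AS i ON i.CustomerID = c.CustomerID;

        UPDATE a SET a.City = i.City
        FROM dbo.CustomerAddress AS a
        JOIN inserted AS i ON i.CustomerID = a.CustomerID;
    END;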

    Performance can be an issue too, and stored procedures tend to beat views there. They also win on security: if apps have access only to stored procedures it's very easy to make sure that parameters are not misused, so that injection attacks are impossible. Using views doesn't help with that at all.

    Of course it can still be useful to have some views in the database - indexed views can sometimes give a big performance boost - but it's not a good idea to let the app have direct access to them.
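    For anyone who hasn't used indexed views, the shape is something like this - a minimal sketch with invented Orders names; SCHEMABINDING and the unique clustered index are what materialise it, and the SUM assumes Amount is declared NOT NULL:

    CREATE VIEW dbo.OrderTotals
    WITH SCHEMABINDING
    AS
    SELECT o.CustomerID,
           COUNT_BIG(*)  AS OrderCount,
           SUM(o.Amount) AS TotalAmount
    FROM dbo.Orders AS o
    GROUP BY o.CustomerID;
    GO
    -- The unique clustered index is what materialises the aggregate.
    CREATE UNIQUE CLUSTERED INDEX IX_OrderTotals ON dbo.OrderTotals (CustomerID);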

    Tom

  • The one that I've seen our development team doing, and it worked rather nicely: they would never delete a column, they would just add new ones, then copy the data from the old column(s) and turn on the new app. If something went wrong it was just a matter of rolling back to the old format, pointing at the original column (a rough sketch of the pattern follows).
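    A minimal sketch of that expand-only pattern, with made-up names (Phone being replaced by PhoneNumber) purely for illustration:

    -- Add the new column alongside the old one; never drop the old column yet.
    ALTER TABLE dbo.Customer ADD PhoneNumber varchar(20) NULL;

    -- Backfill from the old column before switching the app over.
    UPDATE dbo.Customer
    SET PhoneNumber = Phone
    WHERE PhoneNumber IS NULL;

    -- Rolling back is just pointing the app at the original Phone column again.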



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • Agree 100% in terms of stored procedures - they are ultimately the "last line of defense" 🙂

    You would not scare me with Access linked tables to SQL Server - apparently that is the current "state of the art" of legacy systems that have sprung up like mushrooms after rain since the mid 90s. Been there, done that (no free T-shirt, unfortunately).

    I usually use views to retrieve data in SPs (the R part of CRUD), and I would use them for reporting and data export during integration. I normally even add TOP 100 PERCENT to enforce sorting in views.

    I usually shy away from using ANY triggers - IMHO, unless you are implementing a certain "JOB CREATION SCHEME". My usage of triggers has been limited to populating historical tables when you need "before and after" snapshots.

    Create, Update and Delete SPs would work against the tables, and I prefer it this way because I can operate on a set of records (bulk insert, update and delete via TVP) rather than struggle with the RBAR anti-pattern introduced by INSTEAD OF triggers (a TVP sketch follows).
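    Something along these lines is what the TVP approach looks like - a minimal sketch with invented type, table and procedure names:

    -- Table-valued parameter type carrying the whole set of rows to change.
    CREATE TYPE dbo.CustomerUpdate AS TABLE
    (
        CustomerID int NOT NULL PRIMARY KEY,
        Name       varchar(100) NOT NULL
    );
    GO
    CREATE PROCEDURE dbo.Customer_UpdateSet
        @rows dbo.CustomerUpdate READONLY
    AS
    BEGIN
        SET NOCOUNT ON;
        -- One set-based statement instead of a row-by-row loop.
        UPDATE c
        SET c.Name = r.Name
        FROM dbo.Customer AS c
        JOIN @rows AS r ON r.CustomerID = c.CustomerID;
    END;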

  • fregatepllada (8/1/2014)


    I usually shy away from using ANY triggers - IMHO, unless you are implementing a certain "JOB CREATION SCHEME". My usage of triggers has been limited to populating historical tables when you need "before and after" snapshots.

    I agree with that generally, but there are also times when a trigger is a very useful thing to use to undo some of the damage done by someone's earlier "job creation scheme".

    Way back when, I had the problem of converting a heap of rubbish into a viable system. One major component was a vast quantity of C++ which was full of embedded SQL, often with that SQL generated dynamically using C++ string manipulation. One particular problem was a table that grew like Topsy, although it should have been very small. This table was declared like this (names changed to protect the guilty):

    CREATE TABLE sometable (
          ID int IDENTITY PRIMARY KEY
        , thingID int REFERENCES thing(ID)
        , otherthingID varchar(8) REFERENCES otherthing(ID)
        -- , and several more columns, irrelevant to this comment
        ) ;

    It turned out that the C++ part was doing a lot of inserts. Although thingID and otherthingID weren't declared NOT NULL, they were never observed to be NULL, and the logic of what the application was supposed to present to end users meant that (thingID, otherthingID) was a natural key, despite there being no unique constraint. The ID column couldn't easily be eliminated, because the last value inserted was carried about and used in the C++ code (read by a "select top 1 ID from sometable order by ID desc" tacked onto the end of the embedded SQL string whenever the C++ thought it was creating a new row - unsurprising, given the quality of the rest of the embedded SQL), and rewriting the app was going to take a long time, longer than we could live with this problem.

    So the short-term problem was how to prevent the table from growing too big and causing performance problems, without eliminating the ID column. The natural thing to do was to try adding NOT NULL restrictions to the two columns of the natural key plus a unique constraint and see what happened. What happened was chaos: the unique constraint caused the application to fail in a number of ways (it felt like about a million different ways, but of course it wasn't). The NOT NULL constraints did no harm, so we tried using them plus an "instead of insert" trigger which deleted all rows having the same thingID and otherthingID as the row to be inserted, before inserting it (a sketch of such a trigger follows). That worked - the app didn't complain, and about a dozen known functional errors disappeared from the outstanding bug list (the inserts should have been updates, but we couldn't do that because the highest ID was used in generating some more embedded SQL in that dreadful C++).
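    The trigger was along these lines - a rough sketch against the sometable definition above, not the exact original, and with the other columns omitted for brevity:

    CREATE TRIGGER trg_sometable_dedup
    ON sometable
    INSTEAD OF INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Remove any existing rows sharing the natural key of the incoming rows...
        DELETE s
        FROM sometable AS s
        JOIN inserted AS i
          ON i.thingID = s.thingID
         AND i.otherthingID = s.otherthingID;
        -- ...then do the insert the application asked for; the IDENTITY still generates ID.
        INSERT INTO sometable (thingID, otherthingID)
        SELECT thingID, otherthingID
        FROM inserted;
    END;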

    Of course we did rewrite that C++ eventually; but first we changed the rest of the app (mostly written in JS, and with no string manipulation generating SQL) so that it didn't use the C++ component except where essential (the previous developers' job creation project had decided to route all calls to this database through the C++ component, which was incredibly stupid but would certainly have guaranteed them lots of work if I hadn't been pulled in to troubleshoot the db stuff), so there was a lot less functionality needed in C++.

    Tom

  • fregatepllada (8/1/2014)


    ... Create, Update and Delete SPs would work against the tables, and I prefer it this way because I can operate on a set of records (bulk insert, update and delete via TVP) rather than struggle with the RBAR anti-pattern introduced by INSTEAD OF triggers.

    Why do you think that INSTEAD OF triggers introduce a RBAR pattern? If the triggers are properly written they are set-oriented.

    Tom

  • @tom - You answered your question: "if triggers are properly written".

    I had to deal with nested loops instead of set-oriented implementation in triggers 🙂

  • I've been in a project until recently where we had to develop a new database interacting with several applications.

    At the beginning it was just me advocating against the plan "just do it like we did a similar db project 10 years ago" (that project started with SQL7 and then moved to SS2K; the code had been modified only minimally, to deal with deprecated features like ORDER BY in views).

    So I suggested several rules:

    Each app will get its own schema where all the app-related code will be placed.

    SELECTS from an app will only be against views.

    INSERT/UPDATE/DELETE will only be allowed through stored procedures.

    Permissions are granted at the schema level (GRANT EXECUTE and SELECT), ... - see the sketch after this list.
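    The permission part looks roughly like this - a minimal sketch with an invented app1 schema and role:

    -- One schema per application holds only that app's views and procedures.
    CREATE SCHEMA app1 AUTHORIZATION dbo;
    GO
    CREATE ROLE app1_role;
    GO
    -- The app reads through views and modifies data through procedures in its own schema;
    -- it gets no permissions at all on the base tables in dbo.
    GRANT SELECT  ON SCHEMA::app1 TO app1_role;
    GRANT EXECUTE ON SCHEMA::app1 TO app1_role;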

    The decision was to "give it a try".

    During the project several adjustments needed to be made:

    Examples:

    Tables had to be separated (split vertically) to improve performance and reduce blocking -> the app devs didn't even notice that we had to do it, nor when we did it (a sketch of how a view hides such a split follows).
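    A rough sketch of how a view hides such a split - the names are invented; the point is that the app keeps selecting from the same view name:

    -- dbo.Customer was split into two narrower tables; the app never notices.
    CREATE VIEW app1.Customer
    AS
    SELECT c.CustomerID, c.Name, e.Notes
    FROM dbo.Customer_Core  AS c
    JOIN dbo.Customer_Extra AS e ON e.CustomerID = c.CustomerID;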

    Requests for new stored procedures/SELECTs: the app dev would request a new function, and we would define the "interface" (name, parameter list, output format) and create the sproc/view with dummy data matching the expected output (a stub sketch follows this paragraph). Now the app dev and we could work in parallel: we wrote the code to return real data while the app dev worked on their side. Did we have to bother about security? Not really, since all we had to do was add the sproc/view to the related schema and everything simply worked. This allowed us to reduce the time-to-deliver significantly.
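    A stub like this is all it takes - a minimal sketch with an invented procedure name; once the real query exists, only the body changes and the contract stays fixed:

    CREATE PROCEDURE app1.GetCustomerOrders
        @CustomerID int
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Dummy data matching the agreed output format, so the app dev can code
        -- against the interface before the real query exists.
        SELECT CAST(1 AS int)               AS OrderID,
               CAST('2014-08-01' AS date)   AS OrderDate,
               CAST(99.95 AS decimal(9,2))  AS Amount;
    END;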

    When we had performance issues we could very easily identify whether the source was the database or the app, and we could start to solve such issues immediately.

    But the major change was the ability to quickly answer the question: "Who needs access to data stored in column c1 of table t1?" We've been able to answer it instantly (since we didn't add a view/sproc to a schema without a proper request and documentation, and provided only the columns needed). It's been slightly more difficult for the folks dealing with the 10-year-old project...

    Guess what concept will be used for the next project coming down the road...



    Lutz
    A pessimist is an optimist with experience.


  • I really hope that I work with DBAs like Lutz on my next project 🙂

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • call.copse (8/1/2014)


    In light of the QOTD, is anyone using synonyms for such purposes yet, or would they not work for that? I don't know much about them.

    Views have been soured for me by a particular project (where they were done very wrong), but I guess they could have a place if done correctly - I might consider them for a project I have coming up.

    Using all stored procedures to define the interface is fine, but it can put an awful lot of business logic into fairly impenetrable code. It's got some advantages, but the lack of ability to do things in a very well structured way puts me off that course of action (as a developer, without any access to DBA services). Of course they always have a place in my book for some jobs - just not simple CRUD stuff.

    Nice win for our lads at the Amex on another note Gary!

    We use synonyms here and there, and have never had any issues with them "not working".

    We use SPs over SQL code in our application whenever possible. In some organizations it might shift a lot of load to the DBAs or make the developers have to wait on DBAs to get them changed if needed, but it also allows changes to be made "on the fly" a lot easier in many cases without the need to do a code release. It just depends on the organization/environment you're working in.

    The Redneck DBA

  • Gary Varga (8/1/2014)


    This editorial highlights a key reason why I believe in using stored procedures. All I want from a stored procedure is to answer some question (no, not query) or to store a set of data (no, not a data set). I don't care how these happen and that gives power to those either side of the stored procedure interface definition. Anything can change on either side as long as the contract remains unbroken.

    As a systems developer that often has a RDBMS at the back end, I want to give as much leeway to DBAs to do performance tuning as well as allowing for multiple system access to a single database.

    Right on! If all data was CRUDed via stored procedures, the world would be a better place.

    Be still, and know that I am God - Psalm 46:10

  • TomThomson (8/2/2014)


    fregatepllada (8/1/2014)


    I usually shy away from using ANY triggers - IMHO, unless you are implementing a certain "JOB CREATION SCHEME". My usage of triggers has been limited to populating historical tables when you need "before and after" snapshots.

    I agree with that generally, but there are also times when a trigger is a very useful thing to use to undo some of the damage done by someone's earlier "job creation scheme"...

    We use triggers for populating audit tables and little else. I agree that they can be useful, but they can also be poorly used and problematic.

    Stored procedures have been standard fare for years. They are preferred for both the database issues mentioned as well as the security issues for web applications that face malware and potential SQL injection etc. Some may feel that there is additional work to build a stored procedure and the code to execute it, but the mitigated risk and flexibility is well worth it.

    Not all gray hairs are Dinosaurs!

  • Miles Neale (8/5/2014)


    ...

    Some may feel that there is additional work to build a stored procedure and the code to execute it, but the mitigated risk and flexibility is well worth it.

    ...more effort to debug embedded SQL statements. It is unnatural for them to exist. No real chance for tool support.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Gary Varga (8/5/2014)


    Miles Neale (8/5/2014)


    ...

    Some may feel that there is additional work to build a stored procedure and the code to execute it, but the mitigated risk and flexibility is well worth it.

    ...more effort to debug embedded SQL statements. It is unnatural for them to exist. No real chance for tool support.

    Gaz - I agree. But some still feel that having to interrupt the flow of their code and logic to go to another tool and language is too much. It is easier for them, for some reason, to do it all in one place. I do not hold that view, nor will I defend it. But I have heard them speak of this approach as if they are right.

    As to it being unnatural for embedded SQL statements to exist, it is not, but I wish it were.

    M.

    Not all gray hairs are Dinosaurs!

  • Bill Talada (8/1/2014)


    Views might work for a small subset of DDL changes since views are rarely update-able when based on more than one table. How can you handle a case where a denormalized child table column gets moved up into a parent table?

    Views are updatable with more than one table in their definition. You just can't update more than one table at a time through a view.

  • Jason Shadonix (8/3/2014)


    We use synonyms here and there, and have never had any issues with them "not working".

    We use SPs over SQL code in our application whenever possible. In some organizations it might shift a lot of load to the DBAs or make the developers have to wait on DBAs to get them changed if needed, but it also allows changes to be made "on the fly" a lot easier in many cases without the need to do a code release. It just depends on the organization/environment you're working in.

    Want to write about how you use synonyms? Always looking for descriptions of real world implementations.
