April 20, 2015 at 9:03 am
ddriver (4/20/2015)
So the database is the center of the universe just because turning it off brings the system down? You could say the same thing about a router, a switch, a network cable, a DNS server, a domain controller, your ISP or many other links in the very complex chain that is an enterprise system.

I used to be a big believer in using sprocs as an API for all applications. I still think that is a valid approach for some applications, and I have written an application that way recently. But on the whole, sprocs represent units of functionality that are difficult to plug into a unit test framework in a programmer's IDE. Anything that raises the barrier to implementing unit/integration/load/simulation testing is something that should be avoided until you really prove that you need to use it.
Don't be dogmatic, be smart enough to use the right approach for the problem.
I just went through not being dogmatic. We had a major performance issue and it was the ORM that was making a huge mistake. We couldn't just change a stored procedure to fix it; we had to go through a full release of front-end code because of all the changes the fix required there. There was a huge delay in actually fixing the problem because of all that. It only took 30 minutes to write and test the stored procedure for the fix.
I agree that ORMs are the best thing since sliced bread and round wheels for simple CRUD. They're not so good at anything much more complex than that, especially when you look at the code they generate for it.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 20, 2015 at 9:06 am
ddriver (4/20/2015)
So the database is the center of the universe just because turning it off brings the system down? You could say the same thing about a router, a switch, a network cable, a DNS server, a domain controller, your ISP or many other links in the very complex chain that is an enterprise system.

I used to be a big believer in using sprocs as an API for all applications. I still think that is a valid approach for some applications, and I have written an application that way recently. But on the whole, sprocs represent units of functionality that are difficult to plug into a unit test framework in a programmer's IDE. Anything that raises the barrier to implementing unit/integration/load/simulation testing is something that should be avoided until you really prove that you need to use it.
Don't be dogmatic, be smart enough to use the right approach for the problem.
Hopefully (and there usually is) there's some redundancy in the network to cover such hardware failures. Hopefully there's some redundancy where the database servers are concerned as well. If you turn off a switch or two, there shouldn't be an outage. If the database can't be reached, though, all of that is for naught.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 20, 2015 at 9:13 am
Here is a good article written by Adam Machanic back in 2006 that advocates the use of stored procedures using the concepts of decoupling, cohesion, and encapsulation. Funny how, after several subsequent releases of .NET and SQL Server, things on this front haven't changed that much in almost 10 years.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 20, 2015 at 9:15 am
bdenning (4/20/2015)
I started to read this article and didn't get past the first paragraph. Why technical people resort to street language to express themselves is beyond me. I expect that an intelligent person should be able to intelligently articulate their thoughts without the use of such language. I am an avid subscriber to SQL Server Central and have learned a lot from the posts, and would prefer that SQL Server Central NOT reference articles with such language. We are data professionals, and the use of such language is NOT professional. That's my personal opinion, and I suspect that there are others out there that agree with me. I will continue to bypass articles with such language.
I'm right there with you on thinking that's just wrong, but I tolerate them because, when someone resorts to such language, it's usually because they're really angry about something, and a lot can be learned from an angry person. The only thing (IMHO) that's worse than that is when someone resorts to passive-aggressive ad hominem attacks instead of sticking to the point.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 20, 2015 at 9:18 am
RonKyle (4/20/2015)
...But the number crunching belongs somewhere else. I have seen first-hand the improvements in performance that come from moving business logic into the middle tier, and in my view it is irresponsible to advocate this view. This issue was settled a long time ago.
I think this really depends on what's being crunched and how. There are some logical operations in SQL Server that are very efficient, and certainly you may want some calculations to be consistent no matter how the data is accessed.
Also, plenty of systems don't have middle tiers. We could argue about whether that's a failing of the design or architecture, but I'd say that many applications don't need this complexity.
April 20, 2015 at 9:27 am
Jeff Moden (4/20/2015)
bdenning (4/20/2015)
I started to read this article and didn't get past the first paragraph. Why technical people resort to street language to express themselves is beyond me. I expect that an intelligent person should be able to intelligently articulate their thoughts without the use of such language. I am an avid subscriber to SQL Server Central and have learned a lot from the posts, and would prefer that SQL Server Central NOT reference articles with such language. We are data professionals, and the use of such language is NOT professional. That's my personal opinion, and I suspect that there are others out there that agree with me. I will continue to bypass articles with such language.

I'm right there with you on thinking that's just wrong, but I tolerate them because, when someone resorts to such language, it's usually because they're really angry about something, and a lot can be learned from an angry person. The only thing (IMHO) that's worse than that is when someone resorts to passive-aggressive ad hominem attacks instead of sticking to the point.
I debated about this. I did add the NSFW tag because I wanted people to be aware.
There are (perhaps) better ways to express your feelings, but some people struggle. I certainly have gotten upset about the issue in the past, and I thought the language helped express the frustration with people who become dogmatic about not using the feature.
April 20, 2015 at 9:32 am
RonKyle (4/20/2015)
I agree with the previous comment regarding language. We should be able to express ourselves more professionally. I'm not above swearing, but I don't swear at others or in writing. Wouldn't cross my mind.

As for the topic, in my universe, it isn't the developers who don't want the business logic in stored procedures. It is I who don't want business logic in the stored procedures, a DBA who used to be a programmer. Applications should be scalable, and the more business logic that is in the stored procedures, the less scalable the application(s) become. The database is a chokepoint. If the logic is in the middle tier, where it should be, and the number of users grows, I can add more app servers. These servers will do the work and send the results to the database. I'm all for using stored procedures for the data retrieval, updates, and deletes. And I will never count on code to maintain my data integrity. That's what referential integrity constraints are for. But the number crunching belongs somewhere else. I have seen first-hand the improvements in performance that come from moving business logic into the middle tier, and in my view it is irresponsible to advocate this view. This issue was settled a long time ago.
Edited for word style change.
Well said but I'm frequently torn there. It really does "depend".
In the vein of which you speak, we have an application that requires some pretty good security. So much so that it checks for privs every time someone does a pull-down, tabs to a new field, or changes screens. It was killing the database server with lookups, even though the stored procedure that did them was written very well, because it was being called 40,000 times per hour. I recognized that the underlying tables would only suffer changes maybe 2 or 3 times a day so, to fix the problem, I worked with the development team to cache the results of the proc on the webservers and to only update that cache once per hour. It was truly amazing how much of a load that simple act took off the server and how little impact it had on the webservers.
By the same token, we had another performance problem where the ORM requested a whole bunch of data and the app would summarize it. It was a real pipe clogger from the server all the way out to the users and was of a nature where cached data wouldn't fit the bill because all the requests were different. We converted that to a stored procedure and knocked out more than 50 percent of the traffic that started at the database server.
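Just to give a feel for the shape of that kind of fix, here's a minimal sketch of pushing the summarization into a stored procedure so only the aggregated rows cross the wire. The table, column, and procedure names (dbo.SalesDetail and so on) are made up for illustration; they're not from the actual system.

-- Hypothetical example: summarize on the server instead of shipping raw rows
-- to the app and letting the ORM aggregate them client-side.
CREATE PROCEDURE dbo.GetDailySalesSummary
    @StartDate date,
    @EndDate   date
AS
BEGIN
    SET NOCOUNT ON;

    SELECT  CAST(sd.SaleDateTime AS date) AS SaleDate,
            sd.ProductID,
            COUNT(*)       AS SaleCount,
            SUM(sd.Amount) AS TotalAmount
    FROM    dbo.SalesDetail AS sd
    WHERE   sd.SaleDateTime >= @StartDate
      AND   sd.SaleDateTime <  DATEADD(DAY, 1, @EndDate)
    GROUP BY CAST(sd.SaleDateTime AS date), sd.ProductID;
END;

The app then gets back a few hundred summary rows instead of every detail row, which is where the traffic savings come from.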
The point that I'm trying to make is that there's a huge amount of dogma on both sides of the fence and that front-end Developers and DBAs need to work together (I sit right in the middle of the Dev team so quick swivel-chair conversations are possible and encouraged) to make the intelligent decisions to support accuracy, performance, and scalability. There are no "ivory towers" where I work and it's an absolute joy to work up close and personal with the Developers. I've learned a whole lot more than I would have if I sat elsewhere and they've learned a whole lot more because I'm an enabler rather than a blocker. If I don't want them to do something, we talk about it and I either see it their way or they see it my way and then we hammer out a solution to get the job done.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 20, 2015 at 9:38 am
There are some logical operations in SQL Server that are very efficient, and certainly you may want some calculations to be consistent no matter how the data is accessed.
Could you provide some examples of functions that would work better in T-SQL vs. .NET code and that you think are sufficient to justify moving the logic to the data tier? If there are some, it would be good to know what they are.
many applications don't need this complexity.
The only reason this matters is scalability. If the application is certain never to need to scale, then it doesn't matter. But if the database shares a server with other databases that have to scale, then what that particular application needs isn't the final determiner.
April 20, 2015 at 10:06 am
I very much appreciated the connection and feedback about Rob's article. I have felt that way at times, too, but never really had the means to discuss my REALLY strong concerns about current design patterns.
I'm not as good at discussing why a technical approach is best, especially to non-technical audiences, so I'm going to sign up for Rob's class on PluralSight (http://www.pluralsight.com/courses/speaking-fundamentals) and hope to get better at this.
The amount of code written because of Entity Framework or JSON is staggering. A lot of that came about because, in the legacy app, everything went in procedures and no one ever took a step back and said, "this logic is really complex, maybe we should look at the table design." Instead we got a boatload of UNION statements and more sub-SELECTs and UDFs than you can imagine.
I gave the guys a framework for the new database and application system:
🙂 If you need a UNION, it could be that your data is stored in more than one place, and you can look at combining that data
🙂 If you have a sub SELECT, let's look at how the JOINs are happening
🙂 Denormalization is not a crime, especially for very static data. When was the last time we added a new US state or ZIP code?
🙂 BIGINTs for all identity columns and every table gets them. No "natural" keys ever, though we did allow for UNIQUE indexes
🙂 Every table gets a ModifiedDate, set via a trigger (a sketch of this follows the list). Can't say how often they recreate data warehouses and reporting data stores because they can't tell what data changed. It's cheap and easy and obvious.
🙂 Triggers will always be simple, stupid, and only affect the table they're on.
🙂 Views can't be more than two deep, and the lower view has to be obvious and contain no "logic"
So we did things like pre-calculate the Sunday of a given date into a new column (again, why would it change?) and other things that made reporting a lot easier.
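For what it's worth, here's roughly what two of those rules look like in T-SQL. The table and column names (dbo.CaseFact, EventDate, and so on) are hypothetical stand-ins, not the real schema; it's just a sketch of the pattern, assuming a bigint identity key per the rule above.

-- Hypothetical table: every table gets a ModifiedDate. The default covers
-- inserts; a simple, single-table trigger covers updates (with the default
-- RECURSIVE_TRIGGERS OFF, the trigger's own UPDATE won't re-fire it).
ALTER TABLE dbo.CaseFact
    ADD ModifiedDate datetime2(3) NOT NULL
        CONSTRAINT DF_CaseFact_ModifiedDate DEFAULT (SYSDATETIME());
GO
CREATE TRIGGER dbo.trg_CaseFact_ModifiedDate
ON dbo.CaseFact
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    UPDATE cf
       SET ModifiedDate = SYSDATETIME()
      FROM dbo.CaseFact AS cf
      JOIN inserted     AS i ON i.CaseFactID = cf.CaseFactID;
END;
GO
-- Pre-calculated "Sunday of the week" for a date column. 1900-01-07 was a
-- Sunday, which keeps the expression independent of DATEFIRST; persisting or
-- indexing it would need a deterministic CONVERT with an explicit style.
ALTER TABLE dbo.CaseFact
    ADD WeekStartSunday AS
        DATEADD(DAY, -(DATEDIFF(DAY, '19000107', EventDate) % 7), EventDate);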
That's about it. It's saved an incredible amount of pain and solved problems that hadn't even been considered, but I'm always fond of saying that no one compliments a well-paved road.
Okay, I'm off my preaching pedestal now.
Andre
April 20, 2015 at 10:07 am
Encapsulating the database through stored procedures does require a degree of constant communication and collaboration between the application and database teams. For example, I've seen cases where app or BI developers have misused a stored procedure for something other than its intended purpose, rather than going through the process of requesting a new stored procedure be written for the new task.
Years ago, there was this 'Patient Information' stored procedure that would accept a PatientID and then return multiple resultsets: everything from demographics, address, phone, physicians, and dates of clinic admission. It was originally developed to return all the information required for populating a complete patient view within a dashboard.
What happened was that someone needed to produce a patient contact report, a paper backup for the call center to reference in case the system was down. The requirements were fairly simple, something like a list of all patients with active cases and their primary phone number. It would have required joining three or four tables. However, what the developer did was leverage LINQ to loop through every patient (about 22,000 total) and for each one make a separate call to the 'Patient Information' procedure described above. They were actually only using a subset of columns from only one of the multiple resultsets returned.
Well, needless to say, it totally slammed the call center database, so badly that it initially looked like a denial of service attack.
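For contrast, the report itself only needed something like the following set-based query: one call instead of 22,000. The schema here (dbo.Patient, dbo.PatientCase, dbo.PatientPhone) is invented for the sketch; the real tables and column names will differ.

-- Hypothetical set-based version of the patient contact report.
SELECT  p.PatientID,
        p.LastName,
        p.FirstName,
        ph.PhoneNumber
FROM    dbo.Patient      AS p
        JOIN dbo.PatientCase  AS c  ON c.PatientID  = p.PatientID
                                   AND c.IsActive   = 1
        JOIN dbo.PatientPhone AS ph ON ph.PatientID = p.PatientID
                                   AND ph.IsPrimary = 1
ORDER BY p.LastName, p.FirstName;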
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
April 20, 2015 at 10:12 am
There are no "ivory towers" where I work
There are no ivory towers where I work either. But there is the expectation that everyone knows and understands best practices and, when they believe a deviation is warranted, can explain why the best practice won't work in that case. I don't always follow the best practice, but in each case I can tell someone what the best practice is and why a deviation was necessary.
I'm not saying this applies in your case, but too often quotes like those above are really an attempt to get around the discipline necessary to create the best applications possible in the time available.
April 20, 2015 at 10:58 am
For example, I've seen cases where app or BI developers have mis-used a stored procedure for something other than it's intended purpose, rather than going through the process of requesting a new stored procedure be written for the new task.
Let me say that:
"Against stupidity the gods themselves contend in vain." (Friedrich Schiller, also quoted by Isaac Asimov)
I have enough experience reviewing other people's work (and making enemies) to say that perversions are quite common in both approaches.
Code should be:
1. Correct
2. Fast
3. Able to scale
4. Maintainable
Someone argued that stored procedures are not able to scale.
It could be ...
But in my experience, badly written (SQL) code is the main culprit.
Me, I always check performance (in a DW environment) with 10^3, 10^4, 10^5, and 10^6 records (a rough test harness for this is sketched after the list below). If the execution time doesn't grow at roughly the same pace, there could be a problem:
1. Query badly written
2. Missing indexes
3. Bad parallelism
4. Wrong statistics
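As a rough sketch of that kind of check, something like the following generates a test set of a given size and times the query under test. The #TestData columns and the sample GROUP BY are placeholders; the row count is changed between runs and the timings compared.

-- Build 10^3 .. 10^6 test rows (change @Rows between runs) and compare timings.
DECLARE @Rows int = 1000000;

IF OBJECT_ID('tempdb..#TestData') IS NOT NULL
    DROP TABLE #TestData;

WITH n AS
(
    SELECT TOP (@Rows)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM   sys.all_columns AS a
           CROSS JOIN sys.all_columns AS b
)
SELECT rn                                                AS ID,
       DATEADD(DAY, CAST(rn % 3650 AS int), '20100101')  AS EventDate,
       CAST(rn % 500 AS int)                             AS CustomerID
INTO   #TestData
FROM   n;

SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- the query under test goes here; this GROUP BY is just a placeholder
SELECT CustomerID, COUNT(*) AS Cnt
FROM   #TestData
GROUP BY CustomerID;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;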
April 20, 2015 at 10:59 am
RonKyle (4/20/2015)
There are some logical operations in SQL Server that are very efficient, and certainly you may want some calculations to be consistent no matter how the data is accessed.
Could you provide some examples of functions that would work better in T-SQL vs. .NET code and that you think are sufficient to justify moving the logic to the data tier? If there are some, it would be good to know what they are.
Row level security
Island/gap/interval calculations across large sets of data - the data transfer can overwhelm any CPU/resource savings (see the sketch below).
Probably more, but these are off the top of my head.
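To make the islands example concrete, here's a minimal sketch of the classic row-number-difference technique; dbo.MeterReading and its columns are hypothetical. Collapsing millions of rows into a handful of intervals on the server is exactly the case where shipping the raw data to the middle tier costs more than it saves.

-- Hypothetical islands query: find contiguous runs of daily readings.
WITH numbered AS
(
    SELECT  MeterID,
            ReadingDate,
            DATEADD(DAY,
                    -CAST(ROW_NUMBER() OVER (PARTITION BY MeterID
                                             ORDER BY ReadingDate) AS int),
                    ReadingDate) AS grp   -- constant within a contiguous run
    FROM    dbo.MeterReading
)
SELECT  MeterID,
        MIN(ReadingDate) AS IslandStart,
        MAX(ReadingDate) AS IslandEnd
FROM    numbered
GROUP BY MeterID, grp
ORDER BY MeterID, IslandStart;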
April 20, 2015 at 11:07 am
Someone argued that stored procedures are not able to scale.
It could be
If stored procedures could be placed on separate servers apart from the database server, then scaling would not be an issue. To understand scaling with stored procedures vs. scaling with WCF services, what's important to understand is that if the heavy lifting is done in the middle tier, it is always possible to expand the number of middle-tier servers to spread the load. That is not possible at the data layer. If the heavy lifting is done at the database level, then there will come a point where the number of requests exceeds its capabilities. This is why stored procedures can't scale. It's not about correctly written vs. incorrectly written code. It's about understanding conceptually why the middle tier is better. It has nothing to do with the object itself.
April 20, 2015 at 11:15 am
RonKyle (4/20/2015)
There are no "ivory towers" where I work
There are no ivory towers where I work either. But there is the expectation that everyone knows and understands best practices and, when they believe a deviation is warranted, can explain why the best practice won't work in that case. I don't always follow the best practice, but in each case I can tell someone what the best practice is and why a deviation was necessary.
I'm not saying this applies in your case, but too often quotes like those above are really an attempt to get around the discipline necessary to create the best applications possible in the time available.
You've taken the quote incorrectly then because I absolutely agree with you on all the points you've mentioned above about "Best Practices". My quote was meant to say that no one gets away with a Draconian "Because I said so" in our company. Folks may be compelled to do something they don't agree with, but at least they know the reasons why so that they can support it as if it were their own.
--Jeff Moden
Change is inevitable... Change for the better is not.