August 30, 2016 at 5:06 pm
What Lynn said 🙂
August 31, 2016 at 12:34 am
Eric M Russell (8/30/2016)
...If the app developers and QA want to take care of the CRUD screens and leave the heavy data access tasks to those of us who specialize in database development and T-SQL performance optimization, then that's great...
In hindsight (and at the time) my favourite team setup was one where the .NET developers wrote the stored procedures (SPs), to a production level of quality. They were then reviewed by the SQL Server DBA, who decided whether the SP was good enough after considering performance and many other aspects. If not, the original SP was treated as a specification and the SP was rewritten. Once an SP was considered at least good enough, it was passed to the Oracle DBA, who converted it for the Oracle schema.
As long as the SP was originally written to coding standards, there was no shame attached. There was a feedback loop to advance the developers' skills, and there was kudos to be earned when an SP with complex business and technical requirements was accepted as-is (an occurrence that became more and more frequent).
Done as a collaborative effort with positive feedback loops, this not only allowed for rapid development (the first-cut SPs worked, even if they were non-performant) but also provided an environment that encouraged a higher quality of code by targeting the efforts of the most time-constrained resource (or, rather, person :Whistling:).
Gaz
-- Stop your grinnin' and drop your linen...they're everywhere!!!
August 31, 2016 at 2:41 am
Lynn Pettis (8/30/2016)
Indianrock (8/30/2016)
Erland's article may not be current. In discussing TVPs and the ability to use them with Entity Framework, one of our developers pointed out the article below. However, we make almost no use of stored procedures, so that would have to be worked around. Erland Sommarskog, SQL Server MVP. Latest revision: 2016-08-21.
http://www.c-sharpcorner.com/UploadFile/78607b/using-table-valued-parameters-in-entity-framework/
Also see Fodsuk's EntityFrameworkExtras package, which is quite a neat wrapper and also allows getting output parameters within EF:
https://www.nuget.org/packages/EntityFrameworkExtras.EF6/
Not sure if there is a real article on using it, but there is a decent Stack Overflow answer.
http://stackoverflow.com/questions/8157345/entity-framework-stored-procedure-table-value-parameter
August 31, 2016 at 6:05 am
Possibly off-topic, but here is an exchange I had yesterday with a developer on ORMs, TVPs, abuse of the "IN" operator, huge amounts of statistics updates, and poor cardinality estimates (skewed parallelism).
Very well stated, Himanshu. And to be more specific, it appears that with a large number of values behind an “IN” operator, even if the statistics are updated, SQL Server either over- or under-estimates the rows to be returned by a specific “leg” of the query plan. So the estimated and actual rows have a wide variance, leading to poor performance. I’m thinking that for many years we have aggressively updated statistics for hours every day (at full scan), in some cases to no avail.
“I used Paul Randal's query to find multi-threaded queries with CXPACKET waits. After finding one, the execution plan showed an index seek with a wide variance between estimated and actual rows: 55k estimated versus 2.3 million actual. So I updated that index's statistic with FULLSCAN, removed that plan from cache, and re-ran the query. It still shows the same wide variance in the plan.”
From: Himanshu Swami
Sent: Tuesday, August 30, 2016 10:21 PM
To: Subject: RE: Entity framework and LINQ don't support table-valued parameters
OK now I get it. So correct me if I am wrong but as I understand it, the problem stems from the fact that we have a very complex query that uses an inefficient IN clause and would perform better if a TVP was used in place of the data list in the IN clause.
There is a very good reason why Microsoft hasn’t come up with an implementation to support TVPs in LINQ queries even though they support them from within stored procedures. The reason has to do with the way the .NET framework code gets written (in any .NET language – C# or VB.NET or anything else for that matter) that EF then has to translate to SQL. For those inclined to know about the internals, the Contains extension method is what gets translated into an IN clause today in EF – the same happens in NHibernate.
The problem Microsoft is facing is one of library design, which to some extent trumps feature parity. The challenge is: what extension method can Microsoft add to the framework that would have the exact same signature and semantics as the Contains method but would indicate to the LINQ-to-SQL translator to do something different? In other words, how can they, as the authors of the .NET framework, indicate that extension method A should translate to SQL construct X while extension method B should translate to SQL construct Y – two methods with different names that seem to do the exact same thing semantically, have the same method signatures, and possibly identical implementations, but translate to different SQL constructs upon query execution? That runs afoul of Microsoft’s own library and framework design guidelines (there is an excellent book published by MS Press on that topic). Now, they could introduce extension methods specifically for EF for this use case, but let’s talk about NHibernate first.
NHibernate has the same problem for the same reason – see https://stackoverflow.com/questions/3701364/is-it-possible-to-use-sqldbtype-structured-to-pass-table-valued-parameters-in. So this is less about a capability that the current crop of ORM frameworks lack … it is more about competing priorities and which should take precedence for one particular use case.
The other reason both EF and NHibernate don’t consider this seeming feature gap a priority is that the use case is limited to inline SQL written using LINQ – both frameworks support the use of TVPs with stored procs (EF has it built in, while NHibernate can be extended easily – see the link in the previous paragraph). So our particular use case must be a very narrow one for it to NOT get much attention from the ORM builders’ perspective.
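To make the stored-proc route concrete, here is a minimal T-SQL sketch of replacing a huge IN list with a table-valued parameter (the type, proc, table, and column names are hypothetical, not from the thread):

```sql
-- A user-defined table type gives the optimizer a real table to estimate
-- against, instead of a plan bloated by thousands of literal IN values.
CREATE TYPE dbo.IdList AS TABLE (Id int PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetOrdersByIds
    @Ids dbo.IdList READONLY  -- TVPs must be passed READONLY
AS
BEGIN
    SET NOCOUNT ON;
    -- A join against the TVP replaces WHERE o.OrderId IN (1, 2, 3, ...)
    SELECT o.OrderId, o.CustomerId, o.OrderDate
    FROM dbo.Orders AS o
    INNER JOIN @Ids AS i
        ON i.Id = o.OrderId;
END;
```

From .NET, the client fills a DataTable and passes it as a parameter of SqlDbType.Structured, which both ADO.NET and (via stored procs) EF support.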
August 31, 2016 at 7:16 am
I'm partial to correctly written stored procedures in the database over ORM and other code of that ilk on the application side. We've proven it to our management, time after time. It doesn't matter which database engine or which programming language.
August 31, 2016 at 7:30 am
I try and get the developers to see the sprocs as an interface, much like a RESTful API. They're easy to set up testing or mocks against, and the developers don't need to worry about the implementation if they don't want to.
The benefit even for simple CRUD is that the business logic can be kept away from the data logic. So if I need to add auditing, archive data to slower disk, or make more unusual performance tweaks, I can do so without the risk of asking the developers to change any of the business logic.
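As a sketch of that interface idea (all object names here are hypothetical), the proc's signature is the contract; what happens behind it can change without the application knowing:

```sql
-- The application only knows this signature; the body is the DBA's concern.
CREATE PROCEDURE dbo.Customer_Update
    @CustomerId int,
    @Name       nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE dbo.Customer
    SET Name = @Name
    WHERE CustomerId = @CustomerId;

    -- An audit INSERT, an archive step, or a performance tweak can be
    -- added here later without any change to the calling code.
END;
```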
There are of course those who seem to believe that ORMs are the only way to do things and then quote Facebook or Google as the example: it must be okay if they use it. We are of course not the size of Google/Facebook and are not leading edge.
I don't tar all developers with this brush, but I do see a certain number of myopic, lazy coders who seem to think that it is all just C# code and that SQL is bad and too difficult to learn. As a senior developer pointed out, without the data, what is the point of the application?
August 31, 2016 at 7:45 am
I don't tar all developers with this brush, but I do see a certain number of myopic, lazy coders who seem to think that it is all just C# code and that SQL is bad and too difficult to learn. As a senior developer pointed out, without the data, what is the point of the application?
I'd rather deal with SQL code than C# code in many cases.
And I'm beginning to prefer F# over C#, since it's more declarative like SQL. (Now if Don Syme and MS would just fix some of the rough edges around dependencies and tooling, I'd be blissful.)
August 31, 2016 at 8:02 am
I strongly suspect that developer antipathy to stored procedures is rooted in organisation structure rather than pure technical arguments.
- Stored procs being seen as a DBA thing taking control and flexibility away from the developers.
- ORMs being seen as a developer thing taking control and flexibility away from DBAs.
Then you have the mis-education piece where a surviving brain donor with a big mouth has heard the argument for their camp and the argument against the alternative. They then preach a flawed gospel VERY loudly and sadly convincingly.
The reality is that an open-minded, multi-skilled team will quickly determine that some database interaction is ORM friendly whereas other stuff is not.
They will gather that calling getGoldCustomers is preferable to a horrendous piece of C#/Java/<insert latest fashionable language here>.
An ORM is code that writes code, just as Hive is SQL that writes MapReduce code in Java.
ORMs can call stored procs anyway so it is the genius of AND rather than the tyranny of OR.
Maybe you have complicated security requirements. Data security may be easiest to enact with security groups/database roles. Occasionally a more complicated infrastructure based approach may be more desirable.
All professions have tools that in the wrong hands can be dangerous. The problem is the wrong hands, not the tool.
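To illustrate the role-based security point above, here is a minimal T-SQL sketch (the role name is hypothetical): the application role is granted execute on procs but denied direct DML, and ownership chaining lets same-owner procs touch the tables anyway:

```sql
-- Application logins are members of this role.
CREATE ROLE AppUsers;

-- The role may call any proc in the dbo schema...
GRANT EXECUTE ON SCHEMA::dbo TO AppUsers;

-- ...but has no direct access to the tables. Because the procs and
-- tables share an owner (dbo), ownership chaining means the procs
-- still work; ad-hoc DML from the app does not.
DENY SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO AppUsers;
```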
August 31, 2016 at 8:04 am
Is this relevant? Hardware may be cheaper than the experienced developers and DBAs who could actually develop the sophisticated procs that handle complicated business logic.
In many of these highly relational databases, you can't get the job done with a join between two tables. In our case we have a large QA/DEV/Release group along with Accurev and now JIRA for code storage/management. A large move to procs would require people to manage and maintain all of that.
Sure, that may be the way it was done 20 years ago, but it would be an expensive proposition to move back in that direction.
August 31, 2016 at 8:04 am
Yet Another DBA (8/31/2016)
I try and get the developers to see the sprocs as an interface, much like a RESTful API. They're easy to set up testing or mocks against, and the developers don't need to worry about the implementation if they don't want to. The benefit even for simple CRUD is that the business logic can be kept away from the data logic. So if I need to add auditing, archive data to slower disk, or make more unusual performance tweaks, I can do so without the risk of asking the developers to change any of the business logic.
There are of course those who seem to believe that ORMs are the only way to do things and then quote Facebook or Google as the example: it must be okay if they use it. We are of course not the size of Google/Facebook and are not leading edge.
I don't tar all developers with this brush, but I do see a certain number of myopic, lazy coders who seem to think that it is all just C# code and that SQL is bad and too difficult to learn. As a senior developer pointed out, without the data, what is the point of the application?
I think this is because database-developer-type people are seldom included in early design/requirement phases, and databases end up being seen as just some place to put data. This means the actual potential for database integration as a legitimate part of project development is either ignored or not designed/implemented properly if it is considered (EAV data would never exist otherwise).
Stored procedures certainly end up as a victim of that, particularly when it comes to things SQL is good at, like bulk data manipulation or interacting with objects that might not be part of whatever ORM they are using.
August 31, 2016 at 8:55 am
Eric M Russell (8/30/2016)
Personally, I dislike coding CRUD procedures, because it turns into busy work and becomes a distraction, with all the numerous tasks that have to be tracked and coordination with QA. If the app developers and QA want to take care of the CRUD screens and leave the heavy data access tasks to those of us who specialize in database development and T-SQL performance optimization, then that's great. I'm not going to impose my will where it's not really needed. It seems to me that ORM is the database development equivalent of sheet rock walls in building construction; it definitely has its proper place and can greatly reduce the cost and completion time for a project... when its limits are understood.
If you are still writing these manually then, yes, I agree that is awful. I wrote a utility close to 15 years ago to create basic CRUD procedures for any table. Writing these manually sounds truly horrendous.
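The thread doesn't show that utility, but as a hedged sketch, such generators typically start from the catalog views: pull the column list for a table (the table name below is hypothetical) and assemble the INSERT/UPDATE/SELECT proc text as strings from it:

```sql
-- The raw material for a CRUD generator: every column of a table, its
-- type, and whether it's an identity (and so excluded from INSERTs).
SELECT  c.name,
        t.name AS type_name,
        c.max_length,
        c.is_identity,
        c.is_nullable
FROM sys.columns AS c
INNER JOIN sys.types AS t
    ON t.user_type_id = c.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.Customer')
ORDER BY c.column_id;
```

From that result set, a script loops over the rows to build the parameter list and column list of each CRUD proc.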
_______________________________________________________________
Need help? Help us help you.
Read the article at http://www.sqlservercentral.com/articles/Best+Practices/61537/ for best practices on asking questions.
Need to split a string? Try Jeff Modens splitter http://www.sqlservercentral.com/articles/Tally+Table/72993/.
Cross Tabs and Pivots, Part 1 – Converting Rows to Columns - http://www.sqlservercentral.com/articles/T-SQL/63681/
Cross Tabs and Pivots, Part 2 - Dynamic Cross Tabs - http://www.sqlservercentral.com/articles/Crosstab/65048/
Understanding and Using APPLY (Part 1) - http://www.sqlservercentral.com/articles/APPLY/69953/
Understanding and Using APPLY (Part 2) - http://www.sqlservercentral.com/articles/APPLY/69954/
August 31, 2016 at 9:32 am
Sean Lange (8/31/2016)
Eric M Russell (8/30/2016)
Personally, I dislike coding CRUD procedures, because it turns into busy work and becomes a distraction, with all the numerous tasks that have to be tracked and coordination with QA. If the app developers and QA want to take care of the CRUD screens and leave the heavy data access tasks to those of us who specialize in database development and T-SQL performance optimization, then that's great. I'm not going to impose my will where it's not really needed. It seems to me that ORM is the database development equivalent of sheet rock walls in building construction; it definitely has its proper place and can greatly reduce the cost and completion time for a project... when its limits are understood. If you are still writing these manually then, yes, I agree that is awful. I wrote a utility close to 15 years ago to create basic CRUD procedures for any table. Writing these manually sounds truly horrendous.
No, I'll leave it to others, because I'd rather not get mired in the muck of constant table changes during development, changing philosophies regarding input/output parameters and error handling, QA process, standup meetings, etc.
But if I were in a position to do it myself, there are plenty of scripts.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
August 31, 2016 at 9:48 am
David.Poole (8/31/2016)
I strongly suspect that developer antipathy to stored procedures is rooted in organisation structure rather than pure technical arguments.
- Stored procs being seen as a DBA thing taking control and flexibility away from the developers.
- ORMs being seen as a developer thing taking control and flexibility away from DBAs.
Then you have the mis-education piece where a surviving brain donor with a big mouth has heard the argument for their camp and the argument against the alternative. They then preach a flawed gospel VERY loudly and sadly convincingly.
The reality is that an open-minded, multi-skilled team will quickly determine that some database interaction is ORM friendly whereas other stuff is not.
They will gather that calling getGoldCustomers is preferable to a horrendous piece of C#/Java/<insert latest fashionable language here>.
An ORM is code that writes code, just as Hive is SQL that writes MapReduce code in Java.
ORMs can call stored procs anyway so it is the genius of AND rather than the tyranny of OR.
Maybe you have complicated security requirements. Data security may be easiest to enact with security groups/database roles. Occasionally a more complicated infrastructure based approach may be more desirable.
All professions have tools that in the wrong hands can be dangerous. The problem is the wrong hands, not the tool.
There aren't enough agreement points in the world that I could give you on this. Lots of people are excited about the notion of "DevOps" but few are willing to give up their ivory towers to make it work the way it should have always worked even before the newest buzz-word of "DevOps".
The key is and always has been in the code itself. What works best for a given situation is what every company and the individual teams in a company should strive for.
I also wish that people on both sides of the fence would realize two things... 1) they don't know everything about all the tools, so they're going to gravitate towards what they know, and that could actually be a mistake, so they need to work with others who do know, and 2) when all you're trying to do is drive nails, you actually should use a hammer, and it needs to be the right kind of hammer. I'm always thrilled when someone invents a new sledge hammer to drive thumb tacks and then a world of minions insists that's the best thing to do. I also love it when they develop new and improved two-headed hammers that are guaranteed to leave a nasty mark in the surface because they're only trying to drive one nail. Then there are those that don't even know what a hammer is and insist on using a new and improved Sawzall to drive the thumb tacks.
The good part of all that is that there will always be performance and resource issues to solve and the old hammers and skills to use them properly will never fall out of demand because of that. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
August 31, 2016 at 9:56 am
Indianrock (8/31/2016)
Is this relevant? Hardware may be cheaper than the experienced developers and DBAs who could actually develop the sophisticated procs that handle complicated business logic. In many of these highly relational databases, you can't get the job done with a join between two tables. In our case we have a large QA/DEV/Release group along with Accurev and now JIRA for code storage/management. A large move to procs would require people to manage and maintain all of that.
Sure, that may be the way it was done 20 years ago, but it would be an expensive proposition to move back in that direction.
I would not propose using procs for an app like JIRA; come to think of it, I think JIRA is one of those apps for which a document DB may be appropriate.
Then again, why bother installing JIRA at all given that it is available as SaaS? In fact the entire Atlassian toolsuite is available as SaaS.
Hardware is cheaper right up until your data centre is full, then your next server costs millions.
Cloud hardware is cheap provided you design to scale out on low-end kit and don't need it up 24/7/365. Scaling out on RDBMSs becomes complicated very quickly.
If your demand grows more slowly than both your hardware refresh cycle and hardware improvements, then fine. If you find yourself jumping up a class of server, watch out, because the licensing costs jump too.
August 31, 2016 at 11:02 am
David.Poole (8/31/2016)
Indianrock (8/31/2016)
Is this relevant? Hardware may be cheaper than the experienced developers and DBAs who could actually develop the sophisticated procs that handle complicated business logic. In many of these highly relational databases, you can't get the job done with a join between two tables. In our case we have a large QA/DEV/Release group along with Accurev and now JIRA for code storage/management. A large move to procs would require people to manage and maintain all of that.
Sure, that may be the way it was done 20 years ago, but it would be an expensive proposition to move back in that direction.
I would not propose using procs for an app like JIRA, come to think of it I think JIRA is one of those apps for which a document DB may be appropriate.
Then again, why bother installing JIRA at all given that it is available as SaaS? In fact the entire Atlassian toolsuite is available as SaaS.
Hardware is cheaper right up until your data centre is full, then your next server costs millions.
Cloud hardware is cheap provided you design to scale out on low-end kit and don't need it up 24/7/365. Scaling out on RDBMSs becomes complicated very quickly.
If your demand grows more slowly than both your hardware refresh cycle and hardware improvements, then fine. If you find yourself jumping up a class of server, watch out, because the licensing costs jump too.
Hardware is also not always the solution. Even if the hardware were magically capable of doubling the performance and halving the resource usage, something that is 10X out of spec will still be 5X out of spec with the potential of not seeing any improvement because the code won't allow improvement by hardware. I've seen that a whole lot in the last 20 years and have actually seen some hardware improvements cause more issues because of contention caused by things running "a little bit faster".
I've also seen where a company spent a half million dollars on all new servers, etc, and, not only was there no improvement in the things that mattered most, the performance actually got a bit worse.
Fixing the code really isn't that big an issue and normally doesn't take a rocket scientist to do it. The real key is, like anything else, that 90% of the fix is in the correct identification of the problem. While it is true that you need someone with good knowledge of the DMVs, or a tool, to find problems nobody yet knows about, that's normally not the case. Normally, people KNOW where the problems exist but simply won't spend the bit of time necessary to identify and fix the code, which is what really needs to be done. Sometimes, it may be as simple as adding an index (or, sometimes, getting rid of one). Other times, a section of code may need to be modified to use SARGable predicates (usually an incredibly easy fix requiring no changes in the logic of the code itself). And, yeah... other times, the code was written by folks who weren't so good at it and the code needs to be rewritten.
In any case, the paltry 1X or 2X "improvements" bought by spending potentially tens of thousands of dollars (or much more, as David Poole pointed out) could instead be 10X, 100X, or even 1,000,000X improvements (my personal record) for a whole lot cheaper by having someone on the team look at the code and fix it. It's normally not the whole app that's bad. It's normally just a couple of things where the ROI of fixing the code itself would be incredibly high.
It also doesn't normally require "sophisticated" code. Like I said, it usually just requires someone that knows the basics well enough to identify something like non-SARGable code whether it's being generated by an ORM, something embedded in managed code, or something in stored procedure.
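As a minimal sketch of the kind of easy SARGability fix described above (table and column names are hypothetical), the same logic can be written two ways with very different plans:

```sql
-- Non-SARGable: wrapping the column in a function hides it from the
-- optimizer, forcing a scan even when OrderDate is indexed.
SELECT OrderId
FROM dbo.Orders
WHERE YEAR(OrderDate) = 2016;

-- SARGable: the same logic as a bare range predicate on the column
-- allows an index seek on OrderDate. No business logic changed.
SELECT OrderId
FROM dbo.Orders
WHERE OrderDate >= '20160101'
  AND OrderDate <  '20170101';
```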
And, sometimes it's going to require someone to understand that the tool being used, whatever it is, is going to need to be abandoned. That goes for all 4 sides of the house (hardware folks, front end folks, database folks, and management controlling the purse strings). If it's broke, it's gotta be fixed. As a bit of a side bar, it's also amazing how many people have gone through this type of thing and still balk at the idea of doing it right the first time, which takes almost no extra time and can save thousands of hours of research, rework, and retest not to mention avoiding black-eyes in the face of customers. Heh... as a reminder to managers, if you want it real bad, that's usually the way you'll get it. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.