April 26, 2012 at 8:13 pm
You would still need one CASE statement for the ASC sort and one for the DESC sort for a particular column.
You could use those same two CASE statements to sort a different column of the same type. For instance, if you were also sorting on an employment start date, you would not need two more CASE statements; just add a new WHEN to the ASC and DESC cases, as sketched below.
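A minimal sketch of that layout (the table, column, and parameter names here are illustrative, not from the thread):
DECLARE @SortColumn VARCHAR(20) = 'StartDate',
        @SortDir    VARCHAR(4)  = 'ASC';

SELECT LastName, HireDate, StartDate
FROM dbo.Employees
ORDER BY
    -- One CASE per direction; every DATETIME column shares the same pair.
    CASE WHEN @SortDir = 'ASC' THEN
        CASE @SortColumn
            WHEN 'HireDate'  THEN HireDate
            WHEN 'StartDate' THEN StartDate -- a new column is just another WHEN
        END
    END ASC,
    CASE WHEN @SortDir = 'DESC' THEN
        CASE @SortColumn
            WHEN 'HireDate'  THEN HireDate
            WHEN 'StartDate' THEN StartDate
        END
    END DESC;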
Chuck Hoffman
------------
I'm not sure why we're here, but I am sure that while
we're here we're supposed to help each other
------------
April 26, 2012 at 11:23 pm
Wouldn't something like this do the trick?
DECLARE @OrderbyProcessId int, @OrderByActionKindId int, @OrderByActionDetailId int;
SET @OrderbyProcessId      =  1  -- ASC
SET @OrderByActionKindId   =  0  -- No order
SET @OrderByActionDetailId = -1  -- DESC

SELECT ProcessId, ActionKindId, ActionDetailId -- all items are ints
FROM Actions
ORDER BY
    -- A flag of 0 (no order) matches neither branch, so both CASEs
    -- yield the constant 0 and that column drops out of the sort.
    CASE @OrderbyProcessId      WHEN  1 THEN ProcessId      ELSE 0 END ASC,
    CASE @OrderbyProcessId      WHEN -1 THEN ProcessId      ELSE 0 END DESC,
    CASE @OrderByActionKindId   WHEN  1 THEN ActionKindId   ELSE 0 END ASC,
    CASE @OrderByActionKindId   WHEN -1 THEN ActionKindId   ELSE 0 END DESC,
    CASE @OrderByActionDetailId WHEN  1 THEN ActionDetailId ELSE 0 END ASC,
    CASE @OrderByActionDetailId WHEN -1 THEN ActionDetailId ELSE 0 END DESC;
April 26, 2012 at 11:59 pm
ben.kimball (4/26/2012)
I can sorta barely make sense of that, but I'm not a CF guy, so I'm taking your word for how it communicates with the back-end SQL Server. 🙂 My statement was more directed at people who dynamically generate SQL inside the SQL stored proc itself. Drives me insane, as do the procedural guys who write all their sproc code using cursors and loops. Arrrgghh!! Yes, I once was one of them, and it's a really difficult concept to explain why you shouldn't do that and how you should do it differently. I was stuck there for a few years and aggravated several people in the meantime.
Then one day, click, you have that epiphany and the better way finally makes sense. Until then, you see code that's written correctly and cooperates with the compiler/query-plan cache, and wonder why people write stuff that way, as it seems more trouble than it's worth. It's also hard to let go of thinking you need 'full control' of what SQL Server does (as if you ever did) and finally accept that SQL is a language for precisely describing the data you want; just trust SQL Server to determine the best way to get you that data and let it do its thing. (In most cases, anyway; and usually when it does something unexpected, it turns out that it was you, not it, approaching the task incorrectly.)
To me, there's basically not much difference between writing dynamic SQL in a sproc and writing embedded T-SQL, except that the execution plan for the dynamic SQL stands a better chance of being reused, and there are actually some pretty good methods to prevent SQL injection in dynamic SQL. I also worry a lot when I learn that a well-known and widely used product like Linq does implicit conversions to NVARCHAR, killing any chance of actually using an index in a simple database lookup if you used VARCHAR in your tables. Yes, I've heard there's a way around that, but it gives you absolutely no warning.
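A minimal repro of that conversion problem (the table and index names here are hypothetical, just to show the effect):
CREATE TABLE dbo.Customers
(
    CustomerId   INT IDENTITY PRIMARY KEY,
    CustomerCode VARCHAR(20) NOT NULL
);
CREATE INDEX IX_Customers_Code ON dbo.Customers (CustomerCode);

-- A .NET string arrives as NVARCHAR; the VARCHAR column gets implicitly
-- converted (NVARCHAR has higher data-type precedence), so the optimizer
-- cannot do a straight seek on IX_Customers_Code.
DECLARE @Code NVARCHAR(20) = N'ABC123';
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @Code;

-- Typed to match the column, the same query seeks the index.
DECLARE @Code2 VARCHAR(20) = 'ABC123';
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = @Code2;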
It certainly takes a little longer to write, but if you want really fast database code, then write database code in the database. If you want to quickly build an app and you don't need to worry about future scalability or performance, and you don't have to look up more than one row at a time, then do it all in the front end or with ORMs.
Since our group has to write apps that handle all of that, we've learned to be flexible and use the right tool for the right job.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 27, 2012 at 2:03 am
mtassin (4/26/2012)
Dynamically creating that query will also prevent plan reuse, since the IN clause will be different every time. In that case, if I were using SQL 2008 I'd pass a table-valued parameter to the stored proc with a list of values and join onto it, or use it in an EXISTS test. With SQL 2005 or earlier I'd pass in a comma-separated list of values, use a tally table to break it up into a temp table or table variable, and use that for the EXISTS test or as a join.
You could also pass in an XML document with a list of values in it and shred those out; done properly, that's a very fast solution as well.
Not quite, but I get what you mean. Here, every list element is translated into an additional query parameter (under the hood). This means that at most you get one parameterized version of the query plan for each distinct number of elements passed in the list, times every permutation of the other dynamic parts of the query. In this case it won't be more than a few plans, and doing it differently is really not worth the effort. SQL Server reuses the plans themselves whenever it can, as long as the structure of the code and the number/types of arguments are identical.
This business with lists is really a theme of passing a variable number of arguments, either by this construct or by looping yourself and passing a single ID at a time as a parameter. When there is a lot more variability in the number of items in the list, it pays to send the list as a single argument (varchar/xml), split it on the database side into a temp table/table variable, and use the values from that table in the query. I agree with that; see the sketch below.
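For reference, a sketch of the table-valued-parameter route mentioned above (SQL 2008+; the type, proc, and column names are made up for illustration):
CREATE TYPE dbo.IdList AS TABLE (Id INT NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetActionsByIds
    @Ids dbo.IdList READONLY
AS
BEGIN
    -- One stable, parameterized plan no matter how many ids are passed.
    SELECT a.ProcessId, a.ActionKindId, a.ActionDetailId
    FROM dbo.Actions AS a
    JOIN @Ids AS i
      ON i.Id = a.ProcessId;
END;
GO
DECLARE @Ids dbo.IdList;
INSERT INTO @Ids (Id) VALUES (1), (2), (5);
EXEC dbo.GetActionsByIds @Ids = @Ids;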
April 27, 2012 at 5:26 am
Henry B. Stinson (4/26/2012)
I saw a very clever use of ROW_NUMBER() OVER (ORDER BY CASE WHEN ...) AS RowNum in a CTE WITH clause in a stored proc, where the CASE used an input param to determine which column to order by and whether ASC or DESC.
You can follow this up with a final ORDER BY RowNum. You could also use the RANK() or DENSE_RANK() functions if that offered value.
Yeah, when I started reading the article, the first thing that came to mind was why you'd ever want to do this at all; wouldn't you always order by something in the SELECT clause? Perhaps that's just me! And I was thinking of some kind of CTE / windowed function / CASE combo.
A quick example of sorting ASC or DESC based on the value of a parameter:
SELECT 2 AS col
INTO #ConditionalSortTest
UNION ALL
SELECT 1
UNION ALL
SELECT 3;

DECLARE @a INT;
SELECT @a = 2; -- 2 = ASC, 1 = DESC

SELECT
    col,
    CASE WHEN @a = 1 THEN ROW_NUMBER() OVER (ORDER BY col DESC)
         WHEN @a = 2 THEN ROW_NUMBER() OVER (ORDER BY col ASC)
    END AS Sort
FROM #ConditionalSortTest
ORDER BY Sort;
April 27, 2012 at 7:40 am
Jeff Moden (4/26/2012)
It certainly takes a little longer to write, but if you want really fast database code, then write database code in the database. If you want to quickly build an app and you don't need to worry about future scalability or performance, and you don't have to look up more than one row at a time, then do it all in the front end or with ORMs.
Since our group has learned that stuff that starts out as simple row-by-row processing, coverable by an ORM, tends to grow, I've pretty much mandated stored procs for all new work.
At first there was a little disagreement and grumbling. Now even the grumblers admit that the policy makes sense.
We're dealing with a grown system that runs the business for 1000 users, and there are a lot of practices that obviously date from when the business ran on Dbase3 applications instead of SQL. So a lot of our bugs come from misunderstandings of the database by all of us. Having the data and data access easily updatable is a blessing when what we thought was the rule for how the data behaved 100% of the time turns out to cover only 92% of it.
April 27, 2012 at 8:02 am
The funny thing is that, personally, I moved away from doing all data access and manipulation by means of stored procedures. For me, the change has been a blessing, as flexibility has improved by leaps and bounds. Where I work we can do this because we build and host the applications ourselves and have access to both the application code and the database. Equally important is that fixing a query in the application code does not require a recompile or redeployment, or generate downtime of any kind, with the systems we use. In compiled application environments like .NET, not being so flexible in the application layer would probably make it a real pain and undo nearly all (or more) of the gains.
What is important, however, is using efficient database communication (ORMs generally fail at this) to limit the number of query plans and improve plan reuse. Also important is an application layer capable of building SQL in a way that is natural to someone who knows SQL. It would be terrible to have a programmer place unnamed '?' marks inside his SQL code and fill in the actual values somewhere else, as that would be guaranteed to go wrong many times. A sketch of what well-parameterized SQL looks like by the time it reaches the server follows below.
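For illustration, a hedged sketch of such a parameterized statement expressed in pure T-SQL via sp_executesql (the query and names are invented): named parameters keep the statement readable, and the plan is reused for every distinct value.
DECLARE @sql NVARCHAR(MAX) = N'
    SELECT ProcessId, ActionKindId
    FROM dbo.Actions
    WHERE ProcessId = @ProcessId
    ORDER BY ActionKindId';

-- One cached, reusable plan; the value travels separately from the SQL text.
EXEC sys.sp_executesql
    @sql,
    N'@ProcessId INT',
    @ProcessId = 42;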
Without good support in the application layer, I would not have made the move and would have kept all code in the database. Nowadays I use stored procedures only in specific cases; an example is when a third party needs access to one of our databases. Keeping frequently changing code in the database can also be a versioning hazard. These days there must be a clear and strong reason to put code in the database, in my view. For us there is seldom one, but in other runtime/development environments that can be quite different.
As usual...it depends 🙂
April 27, 2012 at 12:56 pm
Great article, Chuck. Can you update it, since the other poster (kevriley) found this too?
I did want to add additional info on something, though. When you say "No they do not cause a syntax error because somehow CASE forgives this", it's actually not CASE that forgives it. What happens is that the CASE evaluates to NULL (as it always does when no cases are matched and there is no ELSE), and the expression ends up being ORDER BY NULL ASC.
The ORDER BY is what is not disrupted by that kind of expression. Anyone is free to test it themselves (note that the parser rejects a bare constant in the ORDER BY list, so wrap the NULL in a subquery):
SELECT TOP 10 * FROM sysobjects ORDER BY (SELECT NULL) ASC;
In order to have people understand the process a little bit better, could you update the article with this information?
April 27, 2012 at 4:11 pm
ryan.betker (4/27/2012)
Great article, Chuck. ...
@ryan-2: You're wrong, and you have clearly not read any of the other comments already posted. Better alternatives have been presented, and your additions have been suggested already too.
Even though I do want to credit the author for the effort he clearly put into it, the article should be removed from this board a.s.a.p. The technique it presents should simply never be used: it is error-prone, a performance time bomb, and much better alternatives exist.
April 28, 2012 at 12:05 am
However, if your organization's programmers insist on embedded SQL and the database team doesn't enforce such boundaries, then at least make sure all such SQL is sent as parameterized commands/queries (using the Command object in ADO.NET, for example). Some ORM code generators enforce that already, setting up SQL as parameterized commands. Not using parameterized commands or queries churns the plan cache of already-compiled execution plans: say one application sends thousands of copies of the same simple query every day, each with different values in the WHERE conditions; most RDBMSs see these as new queries and generate a new execution plan for each one. So a seemingly simple, fast, not-resource-intensive query can cause performance problems elsewhere in the system, sometimes severely slowing things down overall.
Interestingly, Sybase ASE 15 automatically reformats all SQL queries and commands sent to it as parameterized SQL, avoiding that problem. But using stored procs for everything eliminates any such dangers as I described above. I won't even get into the ease of making changes that stored procs provide compared to embedded SQL (although some newer ORM tools use XML SQL configuration files that can be modified without compiling and deploying a new version of the application; Ibatis, Ibatis.Net, Hibernate, NHibernate, Entity Framework, etc. all, I believe, allow config files to be deployed along with the application rather than compiled into the binary executable). I'm no expert in C# and Java, so maybe some of you can verify or correct me on this. I'm not sure what Linq does.
April 28, 2012 at 12:12 am
The issue of passing a stored procedure a parameter that contains a list of values for an IN clause used to confound me, but now one can pass in a table object or XML text as a parameter (and Oracle allows passing "array" parameters, which is the same thing as passing a table variable in SQL Server). If passing XML (which can be done as VARCHAR or NVARCHAR), the stored proc can convert it to a local table variable, and the IN clause can reference that table variable. I've been away from that for 4 months (out of work for 4 months), so maybe someone else can put in an example to illustrate.
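A quick sketch of the XML variant described above (the element names and target table are invented for illustration):
DECLARE @ids XML = N'<ids><id>1</id><id>2</id><id>5</id></ids>';
DECLARE @IdTable TABLE (Id INT NOT NULL);

-- Shred the XML into a table variable...
INSERT INTO @IdTable (Id)
SELECT x.n.value('.', 'INT')
FROM @ids.nodes('/ids/id') AS x(n);

-- ...then reference it where the literal IN list would have gone.
SELECT a.ProcessId, a.ActionKindId
FROM dbo.Actions AS a
WHERE a.ProcessId IN (SELECT Id FROM @IdTable);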
April 28, 2012 at 6:48 am
Good article.
Some misstatements, but the responses have brought the misinterpreted issues to light. The corrected code should have no trouble running in future MSSQL releases.
Regarding the issue with date and character data: the dates could have been converted to character strings, allowing the combined CASE to be used, as sketched below.
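For example (a sketch with invented table/column names; conversion style 112 produces yyyymmdd, which sorts correctly as text):
DECLARE @SortColumn VARCHAR(20) = 'StartDate';

SELECT LastName, StartDate
FROM dbo.Employees
ORDER BY
    CASE @SortColumn
        WHEN 'LastName'  THEN LastName
        WHEN 'StartDate' THEN CONVERT(CHAR(8), StartDate, 112) -- yyyymmdd
    END;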
Brhan
April 28, 2012 at 9:31 am
mtassin (4/27/2012)
Jeff Moden (4/26/2012)
It certainly takes a little longer to write, but if you want really fast database code, then write database code in the database. If you want to quickly build an app and you don't need to worry about future scalability or performance, and you don't have to look up more than one row at a time, then do it all in the front end or with ORMs.
Since our group has learned that stuff that starts out as simple row-by-row processing, coverable by an ORM, tends to grow, I've pretty much mandated stored procs for all new work.
At first there was a little disagreement and grumbling. Now even the grumblers admit that the policy makes sense.
We're dealing with a grown system that runs the business for 1000 users, and there are a lot of practices that obviously date from when the business ran on Dbase3 applications instead of SQL. So a lot of our bugs come from misunderstandings of the database by all of us. Having the data and data access easily updatable is a blessing when what we thought was the rule for how the data behaved 100% of the time turns out to cover only 92% of it.
I was being generous. I actually prefer 100% stored procedures, even if they might take a little longer to develop, especially after being burned by the NVARCHAR bug nicely provided by things like Linq to SQL, and by the totally non-optimized table design from tools that are supposedly smarter. Like I've said so many times: "If you want something real bad, that's the way you'll normally get it."
--Jeff Moden
Change is inevitable... Change for the better is not.
April 30, 2012 at 2:40 pm
Please kick lingaga off the forum. That is pure spam.
April 30, 2012 at 3:00 pm
Jeff Moden (4/28/2012)
I was being generous. I actually prefer 100% stored procedures, even if they might take a little longer to develop, especially after being burned by the NVARCHAR bug nicely provided by things like Linq to SQL, and by the totally non-optimized table design from tools that are supposedly smarter. Like I've said so many times: "If you want something real bad, that's the way you'll normally get it."
Here's another benefit. I just ran the SQL 2012 upgrade analyzer against my DB. It flagged stored procs using features that don't exist in 2012 (xp_sendmail, in some dead procs we need to clean out).
You can do this with a trace, but if all of your SQL calls are in stored procs, you don't have to worry about whether your trace has captured all of the SQL.