May 5, 2008 at 8:57 am
Jeff Moden (5/2/2008)
Matt Miller (5/2/2008)
Grant Fritchey (5/2/2008)
How did you know what I had planned for the week-end... 😀 I have to ask though, what's the drowning breath exercise? While we do incorporate a lot of traditional stuff, including sweet, sweet katana, we're basically a street-oriented kenpo school.
Drowning breath (or Qigong) is a deep-breathing exercise. Simple: inhale through the nose, exhale through the mouth. The catch is that you're looking to slow your breathing down as much as you can stand (and, technically, pushing it a little), so you're looking to trade the typical 15 breaths per minute for something more along the lines of one solid inhalation (stretched over 30-45 seconds) and one really long exhale (also 45+ seconds). Really slow in BOTH directions. Takes a bit of focus, and is actually a very intense exercise.
Let's just say this causes a very interesting sensation - rather mind-focusing. Thus the name.
Gosh... 15 breaths per minute is typical? I'd pass out from hyperventilation if I took that many breaths in a minute.
There does seem to be a lot of variation on the numbers. Anywhere from 10 to 20. Apparently it depends on who you're testing.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
May 5, 2008 at 9:05 am
Matt Miller (5/5/2008)
Jeff Moden (5/2/2008)
Matt Miller (5/2/2008)
Grant Fritchey (5/2/2008)
How did you know what I had planned for the week-end... 😀 I have to ask though, what's the drowning breath exercise? While we do incorporate a lot of traditional stuff, including sweet, sweet katana, we're basically a street-oriented kenpo school.
Drowning breath (or Qigong) is a deep-breathing exercise. Simple: inhale through the nose, exhale through the mouth. The catch is that you're looking to slow your breathing down as much as you can stand (and, technically, pushing it a little), so you're looking to trade the typical 15 breaths per minute for something more along the lines of one solid inhalation (stretched over 30-45 seconds) and one really long exhale (also 45+ seconds). Really slow in BOTH directions. Takes a bit of focus, and is actually a very intense exercise.
Let's just say this causes a very interesting sensation - rather mind-focusing. Thus the name.
Gosh... 15 breaths per minute is typical? I'd pass out from hyperventilation if I took that many breaths in a minute.
There does seem to be a lot of variation on the numbers. Anywhere from 10 to 20. Apparently it depends on who you're testing.
And how hard you're testing them... 😀
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 6, 2008 at 8:07 am
{I was encouraged to discuss my position a little better}
I'll attempt to give an example to illustrate (btw, this also illustrates a candidate for an appropriate use of cursors):
Think about a stored procedure to transpose a table, and how you'd accomplish that.
May 6, 2008 at 8:18 am
dfalso (5/6/2008)
{I was encouraged to discuss my position a little better} I'll attempt to give an example to illustrate (btw, this also illustrates a candidate for an appropriate use of cursors):
Think about a stored procedure to transpose a table, and how you'd accomplish that.
"What" also illustrates a candidate for an appropriate use of cursors? The BizzBuzz test??? :blink:
--Jeff Moden
Change is inevitable... Change for the better is not.
May 6, 2008 at 8:26 am
Jeff Moden (5/6/2008)
dfalso (5/6/2008)
{I was encouraged to discuss my position a little better} I'll attempt to give an example to illustrate (btw, this also illustrates a candidate for an appropriate use of cursors):
Think about a stored procedure to transpose a table, and how you'd accomplish that.
"What" also illustrates a candidate for an appropriate use of cursors? The BizzBuzz test??? :blink:
Transposing a table (sorry if I didn't make that clear)
May 6, 2008 at 9:03 am
Ummm... you certainly don't need a cursor to transpose a table. Cross-tabs, dynamic cross-tabs, and the occasional use of the Pivot function do the job in a nice set-based manner.
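For anyone following along, here's a minimal cross-tab sketch using nothing but CASE and GROUP BY. The #Sales table and its columns are made up purely for illustration:

-- Hypothetical data, just for the example
CREATE TABLE #Sales (SalesPerson VARCHAR(20), SalesYear INT, Amount MONEY);
INSERT INTO #Sales VALUES ('Ann', 2007, 100);
INSERT INTO #Sales VALUES ('Ann', 2008, 150);
INSERT INTO #Sales VALUES ('Bob', 2007, 200);

-- Classic cross-tab: one CASE expression per output column, aggregated per group
SELECT   SalesPerson,
         SUM(CASE WHEN SalesYear = 2007 THEN Amount ELSE 0 END) AS [2007],
         SUM(CASE WHEN SalesYear = 2008 THEN Amount ELSE 0 END) AS [2008]
FROM     #Sales
GROUP BY SalesPerson;

DROP TABLE #Sales;

When the spreading values aren't known in advance, you generate the same pattern as Dynamic SQL - that's the "dynamic cross-tab."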
--Jeff Moden
Change is inevitable... Change for the better is not.
May 6, 2008 at 9:32 am
Jeff Moden (5/6/2008)
Ummm... you certainly don't need a cursor to transpose a table. Cross-tabs, dynamic cross-tabs, and the occasional use of the Pivot function do the job in a nice set-based manner.
That's true, they do. For me, anyway, the problem came about when trying to write the procedure that creates the dynamic SQL for the PIVOT. I hit some performance issues.
(Do you mean using Reporting Services or Excel crosstabs? I meant for this to be done as a stored procedure specifically.)
May 6, 2008 at 12:03 pm
Jeff Moden (5/6/2008)
Ummm... you certainly don't need a cursor to transpose a table. Cross-tabs, dynamic cross-tabs, and the occasional use of the Pivot function do the job in a nice set-based manner.
Sorry, I got pulled away by some things.
But I'd assume that most people would look at this problem, and start asking questions. Why am I doing this operation (what's it for)? Would this be better done in another application/environment? Etc.
And the more you asked, the more you'd be able to target your answer. Designing code to generate a pivot isn't too bad for a smaller table, but if you look at a 1,000-column table, creating the dynamic SQL itself can be a problem. But most people probably don't go there, because they think, "This code will generally run on a 100-column table at most, because a) it's just a report and b) we don't have tables with more than 100 columns anyway."
So I'm trying to draw an analogy to my original point about context.
I just threw in the point about cursors not always being bad because in this case they can make sense. Think about crafting the SQL statement to create the result set.
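To make that concrete, here's a minimal sketch of the kind of cursor-based construction I'm picturing. Everything in it is hypothetical - dbo.SourceTable, its RowID key, and the single output row are stand-ins to show the shape, not production code:

-- Sketch only: walk the column list with a cursor and stitch the transpose statement together.
-- Trimmed to a single output row (RowID = 1) to keep it short.
DECLARE @ColName SYSNAME,
        @sql     VARCHAR(MAX);
SET @sql = '';

DECLARE col_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT COLUMN_NAME
    FROM   INFORMATION_SCHEMA.COLUMNS
    WHERE  TABLE_SCHEMA = 'dbo'
      AND  TABLE_NAME   = 'SourceTable'
      AND  COLUMN_NAME  <> 'RowID';

OPEN col_cur;
FETCH NEXT FROM col_cur INTO @ColName;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- one SELECT per source column, joined with UNION ALL
    IF @sql > '' SET @sql = @sql + ' UNION ALL ';

    SET @sql = @sql
             + 'SELECT ''' + @ColName + ''' AS ColumnName, '
             + 'MAX(CASE WHEN RowID = 1 THEN CONVERT(VARCHAR(100), ' + QUOTENAME(@ColName) + ') END) AS Row1 '
             + 'FROM dbo.SourceTable';

    FETCH NEXT FROM col_cur INTO @ColName;
END;

CLOSE col_cur;
DEALLOCATE col_cur;

EXEC (@sql);

A real transpose would generate one MAX(CASE...) per row as well, which is exactly where the generated statement starts to balloon on wide or long tables.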
May 6, 2008 at 8:27 pm
dfalso (5/6/2008)
Jeff Moden (5/6/2008)
Ummm... you certainly don't need a cursor to transpose a table. Cross-tabs, dynamic cross-tabs, and the occasional use of the Pivot function do the job in a nice set-based manner. That's true, they do. For me, anyway, the problem came about when trying to write the procedure that creates the dynamic SQL for the PIVOT. I hit some performance issues.
(Do you mean using Reporting Services or Excel crosstabs? I meant for this to be done as a stored procedure specifically.)
No... no Reporting Services or Excel required. Take a peek in Books Online for "cross-tab reports" in SQL Server 2000 or follow this URL...
http://msdn.microsoft.com/en-us/library/aa172756(sql.80).aspx
They're very effective, can be made to return more than PIVOT can in 2k5, and, in the presence of the correct indexes, just absolutely fly. They're also pretty easy to make in the form of Dynamic SQL and, with the advent of VARCHAR(MAX) in 2k5, can easily be made to return more columns than you'd ever need.
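As a rough sketch of the dynamic version - the table and columns are invented for the example, and the variable-concatenation trick is the whole point:

-- Build one CASE expression per distinct SalesYear straight from the data: no cursor, no loop.
-- (Column order in the output isn't guaranteed by this trick; fine for a sketch.)
DECLARE @SQL VARCHAR(MAX);

SELECT @SQL = ISNULL(@SQL + ',', '')
            + CHAR(10) + '         SUM(CASE WHEN SalesYear = ' + d.Yr
            + ' THEN Amount ELSE 0 END) AS ' + QUOTENAME(d.Yr)
FROM  (SELECT DISTINCT CAST(SalesYear AS VARCHAR(4)) AS Yr FROM dbo.Sales) d;

SET @SQL = 'SELECT   SalesPerson,' + @SQL + CHAR(10)
         + 'FROM     dbo.Sales' + CHAR(10)
         + 'GROUP BY SalesPerson;';

PRINT @SQL;   -- sanity-check the generated statement before running it
EXEC (@SQL);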
--Jeff Moden
Change is inevitable... Change for the better is not.
May 6, 2008 at 9:11 pm
dfalso (5/6/2008)
Jeff Moden (5/6/2008)
Ummm... you certainly don't need a cursor to transpose a table. Cross-tabs, dynamic cross-tabs, and the occasional use of the Pivot function do the job in a nice set-based manner. Sorry, I got pulled away by some things.
But I'd assume that most people would look at this problem, and start asking questions. Why am I doing this operation (what's it for)? Would this be better done in another application/environment? Etc.
And the more you asked, the more you'd be able to target your answer. Designing code to generate a pivot isn't too bad for a smaller table, but if you look at a 1,000-column table, creating the dynamic SQL itself can be a problem. But most people probably don't go there, because they think, "This code will generally run on a 100-column table at most, because a) it's just a report and b) we don't have tables with more than 100 columns anyway."
So I'm trying to draw an analogy to my original point about context.
I just threw in the point about cursors not always being bad because in this case they can make sense. Think about crafting the SQL statement to create the result set.
Heh... yes... let's think about crafting the SQL statement to create the result set... if you use a cursor to, say, cross databases, then you not only need the cursor, you also need a temp table to store each pass of the cursor so you can actually return a single result set. With VARCHAR(MAX) and some prudent Dynamic SQL, you need not the cursor, nor the loop, nor the temp table, nor any of the RBAR that goes along with the process. Once you've learned how to do it, it's actually faster to think about and write than most cursors, because the code is a lot shorter and you don't need to declare the cursor, define the temp table, or write the loop.
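Purely as a sketch of the shape I mean - the query inside is deliberately trivial, and the only objects assumed are the system views themselves:

-- One pass builds a UNION ALL across every online database into a VARCHAR(MAX);
-- a single EXEC then returns one result set: no cursor, no loop, no temp table.
DECLARE @SQL VARCHAR(MAX);

SELECT @SQL = ISNULL(@SQL + ' UNION ALL ', '')
            + 'SELECT ''' + name + ''' AS DBName, COUNT(*) AS ObjectCount FROM '
            + QUOTENAME(name) + '.sys.objects'
FROM sys.databases
WHERE state_desc = 'ONLINE';

EXEC (@SQL);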
And, who knows? It might even be accidentally performant and scalable simply because it didn't use RBAR. 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
May 7, 2008 at 5:52 am
I've been staying out of this one, but I have to agree with Jeff. The more you practice writing set-based queries, the easier they get. Add to that the fact that they can frequently be wrapped up in a single statement, and suddenly all that work to create cursors looks sloppy and confusing.
Finally, just because something may require a junior-level DBA or a new guy to stretch their abilities a bit doesn't mean it should be thrown out. I'd feel better lying under that bus - the one that runs down developers all the time - knowing that I've written code that will scale reasonably well, so the new guy might not have to muck with it for a few weeks, months, or years.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
May 9, 2008 at 9:01 am
Jeff Moden (5/6/2008)
No... no Reporting Services or Excel required. Take a peek in Books Online for "cross-tab reports" in SQL Server 2000 or follow this URL...
http://msdn.microsoft.com/en-us/library/aa172756(sql.80).aspx
They're very effective, can be made to return more than PIVOT can in 2k5, and, in the presence of the correct indexes, just absolutely fly. They're also pretty easy to make in the form of Dynamic SQL and, with the advent of VARCHAR(MAX) in 2k5, can easily be made to return more columns than you'd ever need.
{I apologize, I was away from the office}
Ah, I know of that technique (and use it); I just didn't have a formal name for it (I just think of it as a technique to pivot things). Thanks for the tip.
Yes, I do appreciate that one can build a PIVOT statement effectively (in fact, that's how I currently solve this problem). I just feel that there should be a more efficient CURSOR solution. My reasoning comes in two parts:
1) If you look at the process to build and run your SQL, you make one pass over the table to get all the row id values (which go into a variable). Then you make another pass over the INFORMATION_SCHEMA or sys.columns view to create the SQL for each column, and finally you actually run the SQL, which issues a separate PIVOT statement for each column in the table (UNIONing them all together - see the sketch after point 2 for the shape). So you've evaluated all the rows, then all the columns, to create a statement that evaluates each row of each column. It seems like there should be a more efficient way to go there.
2) The problem is not a set-level problem, it's a cell-level problem. That is, there's no common operation applied to each set member. There's a common heuristic, sure, but each column is evaluated separately, and each row for that column is evaluated separately still, so in effect each cell has its own custom operation (regardless of the syntax we use). Clearly, an array-based solution is the best, so I *think* a cursor is the closest to that. (How well does CLR stored procedure code do? Can it access SQL Server's memory directly for I/O, or is it another layer? Because if it's efficient, that's probably the best choice... I haven't done work with CLR stored procs yet, I confess.)
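For reference, here's roughly what the generated statement looks like for a tiny, hypothetical dbo.T (RowID, ColA, ColB) with rows 1 and 2 - one PIVOT per source column, all UNIONed together:

-- Shape of the generated transpose (all names invented for the example)
SELECT 'ColA' AS ColumnName, [1] AS Row1, [2] AS Row2
FROM   (SELECT RowID, CONVERT(VARCHAR(100), ColA) AS Val FROM dbo.T) s
PIVOT  (MAX(Val) FOR RowID IN ([1], [2])) AS p
UNION ALL
SELECT 'ColB', [1], [2]
FROM   (SELECT RowID, CONVERT(VARCHAR(100), ColB) AS Val FROM dbo.T) s
PIVOT  (MAX(Val) FOR RowID IN ([1], [2])) AS p;

Every row value and every column name ends up hard-coded into that statement, which is why the dynamic SQL itself gets unwieldy as the table grows in either direction.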
May 9, 2008 at 9:17 am
Grant Fritchey (5/7/2008)
I've been staying out of this one, but I have to agree with Jeff. The more you practice writing set-based queries, the easier they get. Add to that the fact that they can frequently be wrapped up in a single statement, and suddenly all that work to create cursors looks sloppy and confusing. Finally, just because something may require a junior-level DBA or a new guy to stretch their abilities a bit doesn't mean it should be thrown out. I'd feel better lying under that bus - the one that runs down developers all the time - knowing that I've written code that will scale reasonably well, so the new guy might not have to muck with it for a few weeks, months, or years.
I don't think you're quite understanding my point (and I apologize if I'm not making myself clear enough). I'm not saying that I don't know how to use set-based queries; hopefully you can see in my previous posts that I do (but I'll be happy to take any tests 🙂). I'm also not saying that all code should be written for the lowest common denominator. Rather, I'm arguing that the context of the situation should be considered. For instance, if I'm at a place where there are many coders, but none of them have the level of math background I have, then I'm not going to employ a complex mathematical solution to a problem, because it's just not supportable. My original point was that, given the correct context, I'd consider the original WHILE loop or CURSOR solution to the problem. And that context involves my interpretation of its scaled usage, the user who would likely maintain it, and performance. And my point was that if this is non-production code, and expected to stay a small loop, I'd prefer something that's more readable by my intended audience, since performance would be about equal. (BTW, that's especially true if you expect the problem to scale in a different way - like if the sequence count I iterate over is a variable, say.)
I agree with the viewpoint that one should look for a set-based solution first for many reasons, and understand a senior programmer who puts a policy of no cursors into place to force their developers and new coders to practice finding such a solution. I disagree with those who say they should never be used; it just depends on need and context. And I was also attempting to argue that there are other considerations to keep in mind than just performance when coding.
May 9, 2008 at 9:27 am
dfalso (5/9/2008)
I was also attempting to argue that there are other considerations to keep in mind than just performance when coding.
I agree... just not if the solution requires a cursor, especially in 2k5. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
May 9, 2008 at 9:40 am
dfalso (5/9/2008)
I don't think you're quite understanding my point (and I apologize if I'm not making myself clear enough). I'm not saying that I don't know how to use set-based queries; hopefully you can see in my previous posts that I do (but I'll be happy to take any tests 🙂). I'm also not saying that all code should be written for the lowest common denominator. Rather, I'm arguing that the context of the situation should be considered. For instance, if I'm at a place where there are many coders, but none of them have the level of math background I have, then I'm not going to employ a complex mathematical solution to a problem, because it's just not supportable. My original point was that, given the correct context, I'd consider the original WHILE loop or CURSOR solution to the problem. And that context involves my interpretation of its scaled usage, the user who would likely maintain it, and performance. And my point was that if this is non-production code, and expected to stay a small loop, I'd prefer something that's more readable by my intended audience, since performance would be about equal. (BTW, that's especially true if you expect the problem to scale in a different way - like if the sequence count I iterate over is a variable, say.) I agree with the viewpoint that one should look for a set-based solution first for many reasons, and understand a senior programmer who puts a policy of no cursors into place to force their developers and new coders to practice finding such a solution. I disagree with those who say they should never be used; it just depends on need and context. And I was also attempting to argue that there are other considerations to keep in mind than just performance when coding.
I do understand where you're coming from. I agree, very much, with avoiding absolute statements like "never use a cursor." The problem I have is that I'd put it as "never use a cursor unless you've got a terribly good reason." That assumes, incorrectly in far too many cases, that the person making the decision has enough smarts to know they've got a good reason, as opposed to simply not understanding the problem or T-SQL well enough to write the query correctly in the first place. So I find myself saying "never use a cursor," even though I know that there are situations where it's OK.
There are always considerations to keep in mind other than performance.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning