January 3, 2008 at 11:29 pm
Jeff Moden (1/3/2008)
No... my turn to disagree... execution plan showed 10,000 rows... there were only 25 in the result set...
There are only 25 in the final result set, as shown by the left-most arrow on the exec plan (the one that runs to the SELECT operator). I'm talking about the result sets passed from one operator in the query to another.
10,000 rows were retrieved from the index scan. That internal result set was then passed to the compute scalar, which operated on each of those 10,000 rows (reading another 400 or so for each of them, but not showing that info anywhere) and output 10,000 rows.
The result set from the compute scalar (still 10,000 rows) then went to the distinct operator, which operated on those 10,000 rows and output a result set of 25.
That result set was then passed to the SELECT operator and hence returned to the client.
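A minimal way to see these per-operator row counts for yourself (the query shape and names here are assumptions patterned on the article's test table, not taken verbatim from it):

```sql
-- STATISTICS PROFILE appends a resultset with one row per plan operator,
-- including the actual Rows count each operator produced.
SET STATISTICS PROFILE ON;

SELECT DISTINCT SomeID, dbo.fn_Concat(SomeID) AS CodeList
  FROM dbo.TestData;

SET STATISTICS PROFILE OFF;
-- The Rows column should show 10,000 out of the index scan and the
-- Compute Scalar, then 25 out of the distinct, matching the counts above.
```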
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 3, 2008 at 11:40 pm
Not sure why you're whackin' at me on this one, Gail... real fact of the matter is that 4 million internal rows were generated and they don't show on the execution plan... that's what I meant by the execution plan lied... that's all I was trying to make people aware of... 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
January 3, 2008 at 11:52 pm
Just trying to clarify some details of why the exec plan doesn't show the 4 million rows. If you'd prefer, I'll remove the posts.
I agree that the exec plan doesn't show everything. The root problem is that no details of what the compute scalar did are ever shown, even if that compute had an execution plan of its own. I don't believe the cost % of the compute scalar can ever be trusted either. The row counts, as they are shown, are correct.
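To illustrate the point, here's a sketch (names assumed, patterned on the article's test data) of the kind of scalar UDF whose work hides behind a Compute Scalar:

```sql
-- A scalar UDF that concatenates codes for one ID. Each call scans
-- ~400 rows, but none of that I/O appears in the calling query's plan.
CREATE FUNCTION dbo.fn_Concat (@SomeID INT)
RETURNS VARCHAR(8000)
AS
BEGIN
    DECLARE @s VARCHAR(8000);
    SELECT @s = ISNULL(@s + ',', '') + SomeCode
      FROM dbo.TestData
     WHERE SomeID = @SomeID;
    RETURN @s;
END;
GO
-- The outer plan shows only a Compute Scalar with a tiny cost %,
-- even though the function is invoked once per input row.
SELECT DISTINCT SomeID, dbo.fn_Concat(SomeID) FROM dbo.TestData;
```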
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
January 4, 2008 at 6:21 am
Hi Jeff,
Just wanted to say that I too think this is a great article - well written with plenty of food for thought.
I've got several complex queries I need to look into improving, and I'll now be thinking about the difference between DISTINCT and GROUP BY, the possible need to use subqueries or CTEs, and the UDF call overhead (which I was already very aware of).
The final query in the article has also prompted me to mentally raise the priority on my task to look into what can be done with XML. It looks like there could be some useful tricks there.
Keep up the *very* good work.
Thanks
Derek.
Derek
January 4, 2008 at 6:57 am
Awesome... thanks for the feedback, Derek!
--Jeff Moden
Change is inevitable... Change for the better is not.
January 4, 2008 at 7:42 am
Hi Jeff,
I've seen a lot of discussion here, but no one mentioned indexes and the choice of the clustered one. What you find in practice is that DB designers usually create a primary key and make it clustered without thinking about how the table will be used.
OK, if you change the indexes in your example and make IX the clustered one instead of the PK, you will see the difference (even if the estimated plan says it is more expensive).
On my 2005 box (like yours but with 2GB RAM) it runs in 2 seconds, compared to 10 seconds with the original.
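A sketch of the suggested change (constraint and index names are assumptions; the article's actual names may differ): drop the clustered PK, recreate it nonclustered, and cluster on the columns the concatenation function searches instead.

```sql
-- Move the clustering from the PK to the concatenation's search columns.
ALTER TABLE dbo.TestData DROP CONSTRAINT PK_TestData_RowNum;

ALTER TABLE dbo.TestData
  ADD CONSTRAINT PK_TestData_RowNum PRIMARY KEY NONCLUSTERED (RowNum);

CREATE CLUSTERED INDEX IXC_TestData_SomeID_SomeCode
    ON dbo.TestData (SomeID, SomeCode);
```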
January 4, 2008 at 7:55 am
Yes, thank you for the feedback and I agree that a clustered index will, in fact, make the concatenation run much faster... except that most folks aren't going to dedicate the power and functionality of the clustered index to a concatenation function.
--Jeff Moden
Change is inevitable... Change for the better is not.
January 8, 2008 at 1:08 pm
Jeff Moden (1/3/2008)
Martin Bastable (1/3/2008)
Jeff: I loved your comments on embedded documentation. I do that all the time, and I find it helps me loads! (And other developers here when needs be.) Even though I get some earache from developer friends who strongly believe in not commenting, for various reasons 🙂
Martin
Thanks for the feedback, Martin... yeah... I just wish more folks felt like we do... life would be a lot easier in the code world. And, yeah, I've got "friends" like that, too. Someday they'll learn... 😀
Jeff,
Another great article. I like how you made it more personable and not quite so textbook. I enjoy reading your articles and your posts.
I completely agree that a lot of performance issues can be traced back to the code. I worked on some code last week that ran two SELECT statements (and SELECT * at that) for every employee just to get three values! In one case it was running against 140 employees. I was able to get all of it down to one SELECT statement.
Fortunately, there are some programmers (like myself) that try to comment as much as possible and hate lack of documentation. Reading code (often poorly written) with no comments or proper formatting can be frustrating (I have to deal with that now). I have seen too many SPs that would benefit from embedded documentation.
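The kind of rewrite Ian describes might look like this (table and column names are invented for illustration; the real schema isn't given): replace the per-employee round trips with one set-based query that returns all three values at once.

```sql
-- Before (conceptually): loop over employees, issuing two SELECT *
-- statements per employee to pull three values.
-- After: one set-based query that joins the sources once.
SELECT e.EmployeeID,
       a.Value1,
       a.Value2,
       b.Value3
  FROM dbo.Employee e
  JOIN dbo.SourceA  a ON a.EmployeeID = e.EmployeeID
  JOIN dbo.SourceB  b ON b.EmployeeID = e.EmployeeID;
```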
Ian.
"If you are going through hell, keep going."
-- Winston Churchill
January 8, 2008 at 5:03 pm
Thanks for the great feedback, Ian... especially about keeping it casual (i.e. fun) and the remarks about documentation... We've actually put some "documentation standards" into play at work. They're not so limiting as to suppress innovation, but they do state what must be documented within the code. Basically, it's the way I've always written code because I forget stuff (done... next problem please!). 😉
--Jeff Moden
Change is inevitable... Change for the better is not.
January 9, 2008 at 7:14 am
Jeff Moden (1/8/2008)
We've actually put some "documentation standards" into play at work. They're not so limiting as to suppress innovation, but they do state what must be documented within the code. Basically, it's the way I've always written code because I forget stuff (done... next problem please!). 😉
I know I have seen some of your commenting in your posts, but could you provide some of the "documentation standards" you use. After working with some of our legacy stored procedures, I would like to try to implement something like that at my office.
Also, in the article there was a code block that didn't show up properly. It was after
The next thing folks might try, is a full "covering" index...
Could you provide what was put there?
Thanks,
Ian.
"If you are going through hell, keep going."
-- Winston Churchill
January 9, 2008 at 3:56 pm
Awesome article. There are not nearly enough resources focusing on code efficiency and even fewer on making code legible in my opinion, and this one was well written and articulate.
---
Timothy A Wiseman
SQL Blog: http://timothyawiseman.wordpress.com/
January 9, 2008 at 4:49 pm
Also, in the article there was a code block that didn't show up properly. It was after
The next thing folks might try, is a full "covering" index...
Could you provide what was put there?
Funny how these things work out... I can see it just fine... but here's what it says...
CREATE INDEX IXC_TestData_Cover1 ON dbo.TestData(SomeID,SomeCode,RowNum)
I know I have seen some of your commenting in your posts, but could you provide some of the "documentation standards" you use. After working with some of our legacy stored procedures, I would like to try to implement something like that at my office.
Sure... I gotta get it from work...
--Jeff Moden
Change is inevitable... Change for the better is not.
January 9, 2008 at 4:56 pm
timothyawiseman (1/9/2008)
Awesome article. There are not nearly enough resources focusing on code efficiency and even fewer on making code legible in my opinion, and this one was well written and articulate.
Thanks, Timothy :blush:
On the same note of "making code legible", it's just amazing to me how many people don't take pride in their code. I believe I've said on these forums before... my four major rules of writing code are "Make it work, make it fast, make it pretty, and it ain't done 'till it's pretty". 😛
Heh... Lots of folks think I do the documentation and readability thing for them... it's not that at all... I'm old and can't remember much more without forgetting something else, so I have to document the code so I remember what I did. :hehe:
Again, thanks for the great compliment.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 12, 2008 at 1:28 am
Here's a more interesting take on how to format the concatenated string:
http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=81254#344547
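For reference, a common SQL 2005 technique for formatted concatenation looks like the sketch below (names are assumed from the article's test table; the linked thread may use a different approach):

```sql
-- FOR XML PATH('') flattens the correlated subquery into one string;
-- STUFF strips the leading comma.
SELECT t.SomeID,
       STUFF((SELECT ',' + t2.SomeCode
                FROM dbo.TestData t2
               WHERE t2.SomeID = t.SomeID
               ORDER BY t2.SomeCode
                 FOR XML PATH('')), 1, 1, '') AS CodeList
  FROM dbo.TestData t
 GROUP BY t.SomeID;
```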
N 56°04'39.16"
E 12°55'05.25"
February 12, 2008 at 6:33 am
Just an update from my earlier email and our friends with the count(*)'s 😀
The developers "moved some tables" and "made some tweaks", gave us an update 🙂
15 minute process now takes 1 😛
I wonder if their other customers noticed!
Martin 🙂