November 1, 2016 at 7:40 am
BWFC (11/1/2016)
Grant Fritchey (11/1/2016)
Hugo Kornelis (11/1/2016)
BWFC (11/1/2016)
I've got a barrel of cider maturing in the cellar, can I come?
Just send the barrel. We'll review your application later. :Whistling:
SECOND!
All in favor?
One barrel coming up. I'll keep the contents though 😛
Dang! I knew I was overlooking some important detail.
November 1, 2016 at 9:51 am
Great seeing so many of you last week. It was hectic and crazy, but lots of fun in Seattle.
November 1, 2016 at 10:05 pm
Yep, last week was a lot of fun. And tiring.
The freecon was really good, and I hope Jason and Wayne do another one.
I thought overall the sessions (well the ones I went to) were very good.
It was good how the community stepped in with events for the Thursday evening.
Steve, if you do it again, I think I will have to attend game night. Couldn't quite fit it in. The dinner idea worked a treat.
I still have yet to go through my photos.
Had a fun 1/2 day with Gail on Saturday.
It was funny how both my King Crimson T-shirt and beanie got the most attention!
Had a fun chat with Erland about them. Girl at the hotel liked the picture. Then a shop owner in Pike Street chatted to me about them. Made a nice change from the usual comments I get - hehe.
I've had a couple of nice days in NYC. Went up the Empire State Building, found B&H and discovered just how big a camera shop can be. Oh, and what a 600 mm zoom lens looks like on the end of a camera. Sadly, even at the second-hand price, way out of my league. Went to Macy's and bought a few things (well, you have to). Next week when I come back to NYC I will do the rest of the tourist attractions I have on my list.
Anderson, Rabin & Wakeman (all ex-Yes) were very good at the Beacon tonight. Chatted to a couple of people about Marillion. Off to Boston tomorrow for the next part of my trip.
Cheers,
Rodders...
November 2, 2016 at 5:50 am
I'd like to request a beating from the Threadizens.
I have a blog post that I'm informed doesn't adequately test the question I'm attempting to answer. You guys will likely recognize the paraphrased question from that weird collection of performance tips that was shared here earlier. I took about five of them and wrote blog posts that, at least in my mind, dispelled the silly or stupid statement in the tip. The complaint I'm getting is that the tests don't spell out all the hardware in use, don't lay out the statistics in question, and don't provide a detailed explanation of exactly why there is a 5-10 microsecond disparity (on average) between some of the queries.
Please, beat me. Do I need to explain a 5-microsecond difference, or can we put that down to simple variation or even margin of error (I really haven't found documentation on the exact margin of error for Extended Events; if someone has that, I'd appreciate it)? For such simple queries, does it really matter how fast my CPU is in this situation (especially since all the tests were run using the same CPU)? Do I need to list every possible caveat about performance differences?
I'm serious in these questions. I want to know whether or not I'm doing people a disservice. If I'm as off base as has been suggested, I sure need to know that. I trust you all to be vicious where necessary.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 2, 2016 at 6:11 am
Grant
Whoever is making these wacky claims, it's his or her case to make, not yours to rebut. If you're going to do a service to the community by rebutting them anyway (and thanks for doing so, by the way), I think you should test against a clean copy of a generally available database (AdventureWorks or something like that). That way, the CPU moaners can repeat the tests themselves and see whether they get the same results.
John
November 2, 2016 at 6:28 am
Hi Grant,
Of course views are faster than stored procedures. And cars are faster than laptops.
When a statement is as off-base as that statement about views and stored procedures, disproving it can be very hard. For a really fair comparison you should look at two things: (1) "SELECT * FROM View;" (without a WHERE clause); and (2) "EXEC Proc;" (without parameters, and with the query in the stored procedure equal to the view definition).
After that comparison, you can then explain why the comparison is really nonsense, because of the different use cases. You can then show what happens when you select fewer columns from the view (ideally such that a few joins are optimized out) or add a filter (preferably on a selective and indexed column) to show how a view performs better and provides more flexibility than a stored procedure for those scenarios. And then show a few use cases of stored procedures, such as creating a multi-step stored procedure, and explain how the same functionality is (almost) impossible with views.
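To make that concrete, the only apples-to-apples test looks roughly like this (the table, view, and procedure names are made up; adapt them to whatever database you test against):

-- A view and a stored procedure wrapping the exact same query.
-- dbo.Sales and dbo.Customers are hypothetical tables.
CREATE VIEW dbo.SalesView
AS
SELECT s.OrderID, s.OrderDate, c.CustomerName
FROM dbo.Sales AS s
INNER JOIN dbo.Customers AS c ON c.CustomerID = s.CustomerID;
GO
CREATE PROCEDURE dbo.GetSales
AS
SELECT s.OrderID, s.OrderDate, c.CustomerName
FROM dbo.Sales AS s
INNER JOIN dbo.Customers AS c ON c.CustomerID = s.CustomerID;
GO
-- These two statements are the only fair head-to-head comparison:
SELECT * FROM dbo.SalesView;  -- no WHERE clause
EXEC dbo.GetSales;            -- no parameters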
On to your specific questions:
* Specify hardware - no, not needed as long as you are comparing different code on the same hardware. (Specifying hardware might even be a violation of the DeWitt Clause!) Do make sure that your tests are all done under circumstances as equal as possible (e.g. do not start Word during one of the tests if SQL Server is running on the local machine).
* A 5 microsecond difference on a single execution means nothing; it can be caused by e.g. background processes using resources. However, if you use large numbers of executions then those influences should average out. If the difference is consistent, then I would like to know (out of academic interest) the reason. There is no practical implication for such small differences, unless you run an extremely high-end OLTP system (in which case you should be moving to Hekaton instead of reading Grant's blog).
* One thing that is important: make sure to cut out as much overhead as possible. You'll also see this in the comments on one of the chapters in the new exec plans book. Either do not send results to the client (SELECT ... INTO #Oblivion is what I usually use; a rough sketch follows below), or use the SSMS option to not display results after execution. Do not request the execution plan for the executions you measure. Do not include the first execution unless you *want* to test the effect of a cold cache. Do not use SET STATISTICS (you won't see its results anyway if you use that SSMS option). Do use Include Client Statistics, on a batch with a GO (suitably high number). (Or use Extended Events - though I think that this still has a tiny bit more overhead than the Include Client Statistics, which is measured by SSMS - especially if the server and SSMS are on different machines.) Do not run tests like this on VMs, and especially not in the cloud. Etc.
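For illustration, the harness I have in mind looks roughly like this (#Oblivion is just my habitual name, and dbo.SalesView is a placeholder for whatever you are measuring):

-- Discard the result set server-side instead of sending it to the client.
SELECT *
INTO #Oblivion
FROM dbo.SalesView;

DROP TABLE #Oblivion;  -- clean up so the next iteration can recreate it
GO 500  -- GO <count> is an SSMS/sqlcmd feature that repeats the batch 500 times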
I hope this helps
November 2, 2016 at 6:35 am
John Mitchell-245523 (11/2/2016)
Grant
Whoever is making these wacky claims, it's his or her case to make, not yours to rebut. If you're going to do a service to the community by rebutting them anyway (and thanks for doing so, by the way), I think you should test against a clean copy of a generally available database (AdventureWorks or something like that). That way, the CPU moaners can repeat the tests themselves and see whether they get the same results.
John
Of course it's not my responsibility to rebut them, but, come on. Those "tips" need a swift kick, followed by several more after they're on the ground. I do use a common database: AdventureWorks in the old days, now WideWorldImporters.
Thanks for the feedback. It's appreciated.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 2, 2016 at 6:42 am
Hugo, you are awesome!
It doesn't matter what everyone else says about you.
Kidding. Totally kidding. Thanks. A couple of comments inline.
Hugo Kornelis (11/2/2016)
Hi Grant,
Of course views are faster than stored procedures. And cars are faster than laptops.
When a statement is as off-base as that statement about views and stored procedures, disproving it can be very hard. For a really fair comparison you should look at two things: (1) "SELECT * FROM View;" (without a WHERE clause); and (2) "EXEC Proc;" (without parameters, and with the query in the stored procedure equal to the view definition).
After that comparison, you can then explain why the comparison is really nonsense, because of the different use cases. You can then show what happens when you select fewer columns from the view (ideally such that a few joins are optimized out) or add a filter (preferably on a selective and indexed column) to show how a view performs better and provides more flexibility than a stored procedure for those scenarios. And then show a few use cases of stored procedures, such as creating a multi-step stored procedure, and explain how the same functionality is (almost) impossible with views.
On to your specific questions:
* Specify hardware - no, not needed as long as you are comparing different code on the same hardware. (Specifying hardware might even be a violation of the DeWitt Clause!) Do make sure that your tests are all done under circumstances as equal as possible (e.g. do not start Word during one of the tests if SQL Server is running on the local machine).
Good. That's how I work it.
* A 5 microsecond difference on a single execution means nothing; it can be caused by e.g. background processes using resources. However, if you use large numbers of executions then those influences should average out. If the difference is consistent, then I would like to know (out of academic interest) the reason. There is no practical implication for such small differences, unless you run an extremely high-end OLTP system (in which case you should be moving to Hekaton instead of reading Grant's blog).
What I published is the average across about 500 executions for each variation. I'm executing them using PowerShell and the ExecuteNonQuery mechanism, which basically calls the query but doesn't retrieve the results, so they get dumped. The averages did show a 5 microsecond difference, but views were both slower and faster: slower for the view itself, faster for the view in the procedure. I'm pretty sure that these are just random variations or possibly the margin of error on the extended event (I'm trying to find out what that margin of error is).
* One thing that is important: make sure to cut out as much overhead as possible. You'll also see this in the comments on one of the chapters in the new exec plans book. Either do not send results to the client (SELECT ... INTO #Oblivion is what I usually use), or use the SSMS option to not display results after execution. Do not request the execution plan for the executions you measure. Do not include the first execution unless you *want* to test the effect of a cold cache. Do not use SET STATISTICS (you won't see its results anyway if you use that SSMS option). Do use Include Client Statistics, on a batch with a GO (suitably high number). (Or use Extended Events - though I think that this still has a tiny bit more overhead than the Include Client Statistics, which is measured by SSMS - especially if the server and SSMS are on different machines.) Do not run tests like this on VMs, and especially not in the cloud. Etc.
I hope this helps
All useful. I'm pretty much following all your advice (maybe not all the time though).
Thank you!
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
November 2, 2016 at 7:10 am
Now I *really* want to know what everyone else says about me. :crazy:
5 microseconds is an insanely short period of time; especially when I see no logical explanation, I'd be tempted to call it statistical variance.
On the other hand, if the 5 microsec difference is *consistent*, then it does have significance. If you think it's important enough, you could repeat the "test over 500 executions" a few times. There will always be *some* difference. If you see the **SAME** difference each time, it might be relevant. If you see different winners every time, and the margins stay in the single-digit microsecs, then just ignore it.
I'm pretty much following all your advice (maybe not all the time though).
Definitely not in chapter 9: 😛
November 2, 2016 at 7:46 am
Grant Fritchey (11/2/2016)
I'd like to request a beating from the Threadizens.
I have a blog post that I'm informed doesn't adequately test the question I'm attempting to answer. You guys will likely recognize the paraphrased question from that weird collection of performance tips that was shared here earlier. I took about five of them and wrote blog posts that, at least in my mind, dispelled the silly or stupid statement in the tip. The complaint I'm getting is that the tests don't spell out all the hardware in use, don't lay out the statistics in question, and don't provide a detailed explanation of exactly why there is a 5-10 microsecond disparity (on average) between some of the queries.
Please, beat me. Do I need to explain a 5-microsecond difference, or can we put that down to simple variation or even margin of error (I really haven't found documentation on the exact margin of error for Extended Events; if someone has that, I'd appreciate it)? For such simple queries, does it really matter how fast my CPU is in this situation (especially since all the tests were run using the same CPU)? Do I need to list every possible caveat about performance differences?
I'm serious in these questions. I want to know whether or not I'm doing people a disservice. If I'm as off base as has been suggested, I sure need to know that. I trust you all to be vicious where necessary.
I read your post and I don't think there is anything in it that requires a beating. To address the specific complaints:
1. Specify the hardware - as long as you are running all the queries on the same hardware, I don't think it is necessary to share it, but since it probably isn't that much effort, I probably would, at least in a comment.
2. Lay out the statistics in question - I'm not sure what the person making that comment means. Do they mean the run times that you have provided, or the statistics SQL Server is using to pick the execution plan? The only thing I'd suggest is specifying that you are getting the timings from an Extended Events session, and maybe providing a link to the code that creates the XEvents session (a rough sketch follows after this list).
3. Detailed explanation of the 5-10 microsecond difference - I don't know how you could provide a detailed explanation without spinning up the debugger. It could be anything, such as processes competing for resources differently on each run. You might have been better served by adding min and max run times in addition to the average, to show the variation within each access method. I agree with Hugo about how small a time 5-10 microseconds is; would that even be noticeable to a human being?
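For what it's worth, the kind of session and summary query I mean look roughly like this (the event choice, predicate, and file names are my guesses, not necessarily what Grant used):

-- Capture batch timings for the test database.
CREATE EVENT SESSION QueryTiming ON SERVER
ADD EVENT sqlserver.sql_batch_completed
    (WHERE (sqlserver.database_name = N'WideWorldImporters'))
ADD TARGET package0.event_file (SET filename = N'QueryTiming');
GO
ALTER EVENT SESSION QueryTiming ON SERVER STATE = START;
GO
-- After the test runs: average, min, and max duration (microseconds) per batch.
-- Adjust the path to wherever the .xel files land (the instance LOG directory by default).
WITH raw AS
(
    SELECT CAST(event_data AS xml) AS ed
    FROM sys.fn_xe_file_target_read_file(N'QueryTiming*.xel', NULL, NULL, NULL)
), parsed AS
(
    SELECT ed.value('(event/data[@name="batch_text"]/value)[1]', 'nvarchar(200)') AS batch_text,
           ed.value('(event/data[@name="duration"]/value)[1]', 'bigint') AS duration_us
    FROM raw
)
SELECT batch_text, COUNT(*) AS runs,
       AVG(duration_us) AS avg_us, MIN(duration_us) AS min_us, MAX(duration_us) AS max_us
FROM parsed
GROUP BY batch_text;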
I do think you could have stated more clearly that a view and a stored procedure used in the same way (same parameters, same columns returned) are likely to have close to identical performance and, in the case you showed, the exact same execution plan (same query hash and plan hash), and that one of the queries re-used the plan from the other one (RetrievedFromCache is true in one case). At least I'm making the assumption on plan re-use.
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
November 2, 2016 at 8:23 am
Jack Corbett (11/2/2016)
and that one of the queries re-used the plan from the other one (RetrievedFromCache is true in one case). At least I'm making the assumption on plan re-use.
Sorry, but I don't think that property means what you think it means.
When I first saw this property, I was enthusiastic because I thought it could be used to check whether a new plan was created or an old plan reused from cache. Boy, was I wrong.
I at one time had a firm understanding of what this property actually does mean, but I have forgotten it now. The only thing I still firmly remember is: do not be tempted to believe that this property tells you whether the plan was retrieved from cache.
November 2, 2016 at 8:38 am
Hugo Kornelis (11/2/2016)
Jack Corbett (11/2/2016)
and that one of the queries re-used the plan from the other one (RetrievedFromCache is true in one case). At least I'm making the assumption on plan re-use.
Sorry, but I don't think that property means what you think it means.
When I first saw this property, I was enthusiastic because I thought it could be used to check whether a new plan was created or an old plan reused from cache. Boy, was I wrong.
I at one time had a firm understanding of what this property actually does mean, but I have forgotten it now. The only thing I still firmly remember is: do not be tempted to believe that this property tells you whether the plan was retrieved from cache.
Well, that sucks. I was excited to see it for the same reason. Joe Sack has a blog post about it from a few years ago where I think he says it means that a plan was cached and could be re-used (my interpretation of what Joe's post says). I get this from the fact that it is false when you use sp_recompile or the OPTION (RECOMPILE) hint.
Also found this PDF of a presentation by Kevan Riley that explains it like this:
RetrievedFromCache attribute
- Not "Was the plan retrieved from cache?" but rather "Was the plan cached?"
- Plan was in cache => True
- Generated plan now in cache => True
- Generated plan not in cache => False
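If anyone wants to poke at this themselves, something like the following should reproduce the behavior Joe and Kevan describe (dbo.Sales is a made-up table, and given Hugo's warning, don't bet anything on what the attribute tells you):

-- Test instances only: this clears the entire plan cache.
DBCC FREEPROCCACHE;
GO
SELECT COUNT(*) FROM dbo.Sales;  -- fresh compile, plan goes into cache
GO
SELECT COUNT(*) FROM dbo.Sales;  -- plan presumably reused
GO
SELECT COUNT(*) FROM dbo.Sales OPTION (RECOMPILE);  -- per Joe's post, shows False
GO
-- Inspect the RetrievedFromCache attribute on the StmtSimple element of the
-- plan XML, e.g. with "Include Actual Execution Plan" enabled in SSMS.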
Jack Corbett
Consultant - Straight Path Solutions
Check out these links on how to get faster and more accurate answers:
Forum Etiquette: How to post data/code on a forum to get the best help
Need an Answer? Actually, No ... You Need a Question
November 2, 2016 at 11:29 am
I'm not looking for a job, but I take a look at offers from LinkedIn every now and then just to see what's being offered/requested.
That said, can anyone explain to me how this is a Jr position? https://www.linkedin.com/comm/jobs/view/218716553
November 2, 2016 at 11:49 am
Luis Cazares (11/2/2016)
I'm not looking for a job, but I take a look at offers from LinkedIn every now and then just to see what's being offered/requested.
That said, can anyone explain to me how this is a Jr position? https://www.linkedin.com/comm/jobs/view/218716553
I'm not even sure some of the heavy-hitters around here (Grant, Gail, Jeff Moden, Steve) would meet the qualifications...
Maybe they're trying to keep the salary low by calling it a "Jr" position?
Of course, the responses they're likely to get are either going to be actual Jr DBAs with an overblown belief in their skillset, or the sort of "Jr/Sr" DBA who meets the requirements but isn't really all that good at it...
Either way, I think they'll be in for a rude awakening down the line...
November 2, 2016 at 11:54 am
jasona.work (11/2/2016)
Luis Cazares (11/2/2016)
I'm not looking for a job, but I take a look at offers from LinkedIn every now and then just to see what's being offered/requested.
That said, can anyone explain to me how this is a Jr position? https://www.linkedin.com/comm/jobs/view/218716553
I'm not even sure some of the heavy-hitters around here (Grant, Gail, Jeff Moden, Steve) would meet the qualifications...
Maybe they're trying to keep the salary low by calling it a "Jr" position?
Of course, the responses they're likely to get are either going to be actual Jr DBAs with an overblown belief in their skillset, or the sort of "Jr/Sr" DBA who meets the requirements but isn't really all that good at it...
Either way, I think they'll be in for a rude awakening down the line...
Hey! I'm qualified, doggone it.
OK, I absolutely don't meet the undergraduate degree requirement, but I'm pretty sure I meet all the others. However, this:
Disaster recovery planning experience.
That is not a junior DBA requirement. A bunch of the rest would also disqualify this as a junior DBA position, but that's the big one for me.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning