April 14, 2016 at 9:36 am
Grant Fritchey (4/14/2016)
I've seen this argument before, and I actually lean towards what Jeff & Brent are advocating... if your queries are primarily seek. If you've tuned the system so you're largely avoiding scans, then defragmenting is lots of work for very little return. On the other hand, scans are going to get worse and worse and worse the more those indexes fragment. So, funny enough, if you have a well managed system, you can stop managing it, but if you have a poorly managed system, you need to manage more.
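One way to sanity-check which camp a given database falls into is the index usage DMV; a minimal sketch (run it in the database of interest, and note the counters reset on instance restart):

-- Indexes dominated by user_seeks lose little to fragmentation;
-- the scan-heavy ones are the candidates that get "worse and worse."
SELECT OBJECT_NAME(ius.object_id) AS table_name,
       i.name AS index_name,
       ius.user_seeks,
       ius.user_scans,
       ius.user_lookups
FROM sys.dm_db_index_usage_stats AS ius
JOIN sys.indexes AS i
  ON i.object_id = ius.object_id
 AND i.index_id = ius.index_id
WHERE ius.database_id = DB_ID()
ORDER BY ius.user_scans DESC;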
That's what I thought, as well, but even on the predominantly "batch-needs-scans" databases, performance hasn't suffered. Heh... of course, that could be because that code sucks to begin with, as I've verified many performance issues with that code (written by different people than the GUI code, even if it is mostly stored procedures). But then there's Kendra Little's video on the subject (still trying to resurrect the URL for that) and her revelation that she's usually been able to get the same effect by rebuilding stats on batch-related stuff instead of doing index rebuilds, which inherently rebuild the stats.
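For anyone who wants to try the stats-first approach Kendra describes, a minimal sketch (the table and index names here are made up purely for illustration):

-- Option 1: refresh the statistics only, leaving the pages alone.
UPDATE STATISTICS dbo.BatchTable WITH FULLSCAN;

-- Option 2: rebuild the index, which rebuilds its statistics as a
-- side effect (the point Jeff mentions) but does far more work.
ALTER INDEX IX_BatchTable_BatchDate ON dbo.BatchTable REBUILD;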
And, no... I don't blame you, Kevin, or anyone else for thinking that I'm also drinking bong water like I thought Brent was but, by golly, it's been working for me and no one on this planet is more surprised than I am.
--Jeff Moden
Change is inevitable... Change for the better is not.
April 14, 2016 at 2:34 pm
Jeff Moden (4/14/2016)
That's what I thought, as well, but even on the predominantly "batch-needs-scans" databases, performance hasn't suffered. Heh... of course, that could be because that code sucks to begin with, as I've verified many performance issues with that code (written by different people than the GUI code, even if it is mostly stored procedures). But then there's Kendra Little's video on the subject (still trying to resurrect the URL for that) and her revelation that she's usually been able to get the same effect by rebuilding stats on batch-related stuff instead of doing index rebuilds, which inherently rebuild the stats. And, no... I don't blame you, Kevin, or anyone else for thinking that I'm also drinking bong water like I thought Brent was but, by golly, it's been working for me and no one on this planet is more surprised than I am.
I know you're not drinking the bong water. Not at all. However, I have seen scans hurt performance when the indexes were fragmented. Rebuilding helped, and not because of the stats update. We tried that first. Kevin has a point on the memory thing, but I have yet to see that be the cause of performance pain.
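For what it's worth, the scan-hurting fragmentation Grant describes shows up in sys.dm_db_index_physical_stats; a minimal sketch (the 1,000-page floor is just a common rule of thumb for skipping trivially small indexes, not gospel):

-- Logical fragmentation (out-of-order pages) mostly penalizes large scans.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.page_count > 1000
ORDER BY ips.avg_fragmentation_in_percent DESC;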
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
April 14, 2016 at 4:35 pm
Grant Fritchey (4/14/2016)
Jeff Moden (4/14/2016)
That's what I thought, as well, but even on the predominantly "batch-needs-scans" databases, performance hasn't suffered. Heh... of course, that could be because that code sucks to begin with, as I've verified many performance issues with that code (written by different people than the GUI code, even if it is mostly stored procedures). But then there's Kendra Little's video on the subject (still trying to resurrect the URL for that) and her revelation that she's usually been able to get the same effect by rebuilding stats on batch-related stuff instead of doing index rebuilds, which inherently rebuild the stats. And, no... I don't blame you, Kevin, or anyone else for thinking that I'm also drinking bong water like I thought Brent was but, by golly, it's been working for me and no one on this planet is more surprised than I am.
I know you're not drinking the bong water. Not at all. However, I have seen scans hurt performance when the indexes were fragmented. Rebuilding helped, and not because of the stats update. We tried that first. Kevin has a point on the memory thing, but I have yet to see that be the cause of performance pain.
The memory issue I mention is (mostly) an indirect pain: a greater physical IO requirement than would otherwise be needed to process the given workload. And THAT is a BAD THING for virtually every client I come across. I'll note that the vast majority of clients I come across are not on the Enterprise Edition of SQL Server either, meaning RAM is an even more precious resource for them than many high-end/experienced people who do get to play with EE typically realize.
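Kevin's indirect-pain argument comes down to page density: a fragmented index full of half-empty pages needs more pages, hence more physical IO and more buffer pool, for the same rows. One way to put a number on it, sketched here (SAMPLED mode is required because LIMITED does not populate the density column):

-- Low avg_page_space_used_in_percent means more pages, and more RAM,
-- consumed per row of actual data.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_page_space_used_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
ORDER BY ips.avg_page_space_used_in_percent ASC;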
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on Google's mail service
April 21, 2016 at 6:45 pm
TheSQLGuru (4/14/2016)
Grant Fritchey (4/14/2016)
Jeff Moden (4/14/2016)
That's what I thought, as well, but even on the predominantly "batch-needs-scans" databases, performance hasn't suffered. Heh... of course, that could be because that code sucks to begin with, as I've verified many performance issues with that code (written by different people than the GUI code, even if it is mostly stored procedures). But then there's Kendra Little's video on the subject (still trying to resurrect the URL for that) and her revelation that she's usually been able to get the same effect by rebuilding stats on batch-related stuff instead of doing index rebuilds, which inherently rebuild the stats. And, no... I don't blame you, Kevin, or anyone else for thinking that I'm also drinking bong water like I thought Brent was but, by golly, it's been working for me and no one on this planet is more surprised than I am.
I know you're not drinking the bong water. Not at all. However, I have seen scans hurt performance when the indexes were fragmented. Rebuilding helped, and not because of the stats update. We tried that first. Kevin has a point on the memory thing, but I have yet to see that be the cause of performance pain.
The memory part I mention is (mostly) an indirect pain of a greater physical IO requirement than would otherwise be required to process the given workload. And THAT is a BAD THING for virtually every client I come across. I note that the vast majority of clients I come across are not on the Enterprise Edition of SQL Server either, meaning RAM is even more precious resource than many high-end/experienced people who do get to play with EE will typically come across.
Yep. I agree... I thought memory usage would start to go haywire for the very reasons you mention, as well. So far, it hasn't. I will admit that I've got 128GB of RAM on the box, and that, as logical-read intensive as the batch runs are (really bad code sometimes racking up more than a billion reads against a relatively small database of only 50 million pages, not quite 400GB), neither performance nor memory has suffered; actually, both continue to improve. Perhaps it's just perfect "weather" rather than a perfect storm, but I'm liking this a whole lot.
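For anyone chasing that kind of logical-read volume, the plan cache keeps running totals; a minimal sketch (the numbers reset when plans age out or the instance restarts):

-- Cached statements ranked by total logical reads.
SELECT TOP (10)
       qs.total_logical_reads,
       qs.execution_count,
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                        WHEN -1 THEN DATALENGTH(st.text)
                        ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;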
--Jeff Moden
Change is inevitable... Change for the better is not.
April 21, 2016 at 6:54 pm
Grant Fritchey (4/14/2016)
I know you're not drinking the bong water. Not at all. However, I have seen scans hurt performance when the indexes were fragmented. Rebuilding helped, and not because of the stats update. We tried that first. Kevin has a point on the memory thing, but I have yet to see that be the cause of performance pain.
Like I said, it may actually be because the code is so hammered to begin with, but the group responsible for the batch code and for monitoring it has stated that the performance of the batch jobs has stayed the same in many cases and has improved in many more. They keep some pretty good performance logs that I'll need to do a deep dive on to make sure they're not merely responding to suggestion.
--Jeff Moden
Change is inevitable... Change for the better is not.