February 23, 2009 at 4:57 pm
Adam Machanic (2/23/2009)
I'm definitely a fan of instrumentation for debugging purposes. Quite a while ago I wrote something called TSQLAssert (http://datamanipulation.net/tsqlmacro/) for that very purpose... It's based on the assertion frameworks found in C and C++, and like them is built on top of a macro preprocessor (called TSQLMacro). These tools proved to be totally unpopular with most SQL Server people I've introduced them to, but I'm still quite happy with them 😎
Just a very quick glance at your fine code seems to indicate a limit of 80,000 characters for the objects being altered. Although that seems more than enough for most folks, some of the legacy code I've seen on many systems has quite a bit more than 80K characters. Am I reading that limit correctly and, if so, does the code warn you if the limit is exceeded?
--Jeff Moden
Change is inevitable... Change for the better is not.
February 23, 2009 at 5:32 pm
It seems that you are a developer, not a DBA, and certainly not employed in infrastructure. Developers love to debug in production. Debug? Did I say debug? DEVELOP in production.
So, explain to me why we have duplicate environments for development and QC?
No development in production. That's my opinion and I stand by it.
Thank-you,
David Russell
Any Cloud, Any Database, Oracle since 1982
February 23, 2009 at 6:21 pm
Jeff Moden (2/23/2009)
Just a very quick glance at your fine code seems to indicate a limit of 80,000 characters for the objects being altered. Although that seems more than enough for most folks, some of the legacy code I've seen on many systems has quite a bit more than 80K characters. Am I reading that limit correctly and, if so, does the code warn you if the limit is exceeded?
Hi Jeff,
No, no warnings--not something I ever thought of, to be honest. 80,000 characters seemed big enough, and while I'm sure there is SQL out there that big or bigger I can't say I've seen it recently (if ever)... The biggest single unit of code I've worked on in recent memory is Who is Active? (http://sqlblog.com/blogs/adam_machanic/archive/2009/02/18/who-is-active-v7-30.aspx), and the most recent version is only 46,795 characters--over 1400 lines of T-SQL, plus a lot of comments. Which raises the question: what are these 80K+ character legacy procs actually doing??
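If anyone is worried about bumping into that ceiling, a quick pre-flight check in a deployment script would catch it before anything gets silently truncated. Just a minimal sketch - the 80,000 figure is the assumed limit discussed above, and the procedure name is made up:

-- Hypothetical pre-flight check (SQL Server 2005+): warn if an object's
-- definition won't fit in an 80,000-character buffer.
DECLARE @definition NVARCHAR(MAX);
SET @definition = OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_BigLegacyProc'));

IF LEN(@definition) > 80000
    RAISERROR('Definition exceeds 80,000 characters and would be truncated.', 16, 1);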
--
Adam Machanic
whoisactive
February 23, 2009 at 6:44 pm
Heh... I agree... the code is way too long, way too slow, and way too poorly written. The code does accounts receivable for 1.5 million customers on a daily basis and is about 12,000 lines long. Needless to say, I'm trying to get them to boil that down with some more effective code.
Anyway, thanks for the feedback, Adam. Just wanted to make sure I was interpreting your code correctly.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 23, 2009 at 10:20 pm
Worked on something similar, doing much the same thing.
Lesson learned (while refactoring) was the same as with VB et al: break one big procedure into a parent that calls lots of smaller sections.
When debugging (developing / production-hacking) you can run individual units - something akin to SSIS checkpoints.
And, of course, you have to log exactly how far the build got.
And only cut across to the result right at the end.
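Something along these lines, as a rough sketch - the table and step names are invented:

-- One checkpoint row per completed section, so a failed or stopped run
-- shows exactly how far the build got.
CREATE TABLE dbo.BuildCheckpoint
(
    RunId       INT      NOT NULL,
    StepName    SYSNAME  NOT NULL,
    CompletedAt DATETIME NOT NULL DEFAULT GETDATE()
);
GO
DECLARE @RunId INT;
SET @RunId = 1;  -- in practice, one id per run of the parent procedure

-- Each smaller section does its work, then records its checkpoint.
INSERT dbo.BuildCheckpoint (RunId, StepName) VALUES (@RunId, N'LoadStaging');

-- ...more sections, logged the same way...
INSERT dbo.BuildCheckpoint (RunId, StepName) VALUES (@RunId, N'ApplyPayments');

-- On a restart, query the table for the last completed step and resume there.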
Good luck!
Regards
Andy Davies
February 23, 2009 at 10:23 pm
It's Schrödinger's darn cat again - the moment you start testing (debugging) you interfere with the state and alter it 🙁
I've so often watched people get it all working in DEV, then change a whole bunch of stuff and release (now untested) to PROD 😮
The problem is the issues that appear in PROD but not in DEV... so you have to do something, and a built-in @Debug switch is the lowest-risk option.
But a risk nonetheless!
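For anyone who hasn't seen the pattern: the switch is just an optional parameter that defaults to off, so existing production callers are unaffected - which is why the risk is low. A minimal sketch, with an invented procedure name:

CREATE PROCEDURE dbo.usp_Example  -- hypothetical name
    @Debug BIT = 0  -- off by default, so existing callers see no change
AS
BEGIN
    DECLARE @Started DATETIME;
    SET @Started = GETDATE();

    -- ...the real work...

    IF @Debug = 1
        PRINT 'usp_Example elapsed ms: '
            + CONVERT(VARCHAR(20), DATEDIFF(ms, @Started, GETDATE()));
END;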
Regards
Andy Davies
February 24, 2009 at 10:14 am
Andrew (2/23/2009)
It's Schrödinger's darn cat again - the moment you start testing (debugging) you interfere with the state and alter it 🙁
Yes - but I'd rather know that the cat was dead than try to infer its state from a lack of noise and movement!
It's easy to say that everyone's Test environment should mirror the Prod environment, but difficult to enforce due to costs and resources. So, the only choice is to try to ascertain the execution states and times without too much hullabaloo. On the SQL Server side, Profiler is a great tool for that purpose. It can help direct your efforts in a focused manner. But, you still have to tinker with the code to fix it.
Instrumentation (prior to deployment) is a fair approach, but unnecessary with today's tools. Moreover, it requires that the developer have a fair idea of what is going to cause heartburn prior to deployment. It's fine to say that a stored proc executed in 45 seconds (ugh!), but (1) it tells us nothing about what is causing the slow execution and (2) it can be determined directly by using Profiler without any instrumentation. Taking instrumentation to the statement level is just plain busy-work. Profiler can provide a statement-by-statement picture of execution times.
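For a single session you don't even need Profiler - SET STATISTICS TIME reports per-statement CPU and elapsed times with no code changes at all (the procedure name below is just a placeholder):

SET STATISTICS TIME ON;
EXEC dbo.usp_SuspectProc;  -- placeholder for the slow procedure
SET STATISTICS TIME OFF;
-- Each statement's CPU and elapsed time shows up in the session's
-- messages output, without touching the procedure itself.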
February 24, 2009 at 4:55 pm
In all the talk, the key reason to instrument is being overlooked, which is the after-the-fact benefit. Tools may help anticipate or identify current problems, but they are of little benefit once a problem has already occurred. A report may be running slow because of a missing index, or because three other customers were running massive reports at the same time, a condition that cannot be identified by any tools. On a well instrumented system, you can do post-mortems going back indefinitely, and see how small problems that were overlooked months ago foreshadow bigger ones, and gather a history of how the system performs as well as how customers use it and how the interactions affect performance. You simply can't gather this kind of information if it isn't continuously collected.
I'm reminded of two situations in my past where appropriate logging was critical in solving problems. In one case, I avoided having to fly to Norway in the dead of winter (which I was not looking forward to, even though they were having a heat wave with one day as high as four degrees!). I had them send me the installation log, which showed the CE hadn't properly connected one of the cables. In another, I identified a system at a bank that was shutting down once a month because they were running end-of-the-month copy jobs next door, which drove the temperature up so high that the system shut down.
It's these sorts of unexpected interactions that proper instrumentation can resolve.
February 24, 2009 at 6:21 pm
Andrew (2/23/2009)
Worked on something similar, doing much the same thing.
Lesson learned (while refactoring) was the same as with VB et al: break one big procedure into a parent that calls lots of smaller sections.
When debugging (developing / production-hacking) you can run individual units - something akin to SSIS checkpoints.
And, of course, you have to log exactly how far the build got.
And only cut across to the result right at the end.
Good luck!
Just breaking code up has no real benefit if there's no chance for code reuse anywhere... the problem is that no one knows which sections of the code are poor performers, and they have to add some instrumentation to figure out where the low-hanging fruit actually is. Intra-procedure checkpoints work just fine. Breaking it up just for the sake of breaking it up frequently adds unnecessary complications.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 24, 2009 at 9:56 pm
Late to the party, but I've used an @debug variable in code that we turn on when there are issues. Typically it's available as a switch the application can turn on and pass through as a parameter. The procedure then writes data to log tables with timestamps for tracking things down.
Profiler gives me a good idea, but the detailed instrumentation we built in really helped find issues.
One other thing is that I wouldn't instrument everything. That's too much work and there are lots of procedures and processes that just don't matter. We tended to learn the system, look for places where performance could be an issue (or is critical) and then instrument those places.
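In rough outline, the pattern looks like this - the table and procedure names are invented for illustration:

CREATE TABLE dbo.ProcDebugLog
(
    LogId    INT IDENTITY(1,1) PRIMARY KEY,
    ProcName SYSNAME       NOT NULL,
    Message  NVARCHAR(500) NOT NULL,
    LoggedAt DATETIME      NOT NULL DEFAULT GETDATE()
);
GO
CREATE PROCEDURE dbo.usp_PostPayments  -- invented name
    @debug BIT = 0  -- the application flips this on when investigating
AS
BEGIN
    -- ...one unit of work...
    IF @debug = 1
        INSERT dbo.ProcDebugLog (ProcName, Message)
        VALUES (N'usp_PostPayments', N'Finished staging load');
    -- ...the next unit, logged the same way...
END;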