October 6, 2014 at 7:53 am
If you're tasked with debugging/troubleshooting?
Specifically:
Assigning workable values to parameters/finding out where they're passed from or stored
Heavy use of temp tables/table variables/real tables that get truncated
Verifying results
Bonus points if it's for a company with fairly high developer turnover or use of consultants so you don't have anyone to ask
Double bonus points if those developers and consultants clearly never read SSC before hitting F5
:Whistling:
Thanks
October 6, 2014 at 8:20 am
sqldriver (10/6/2014)
If you're tasked with debugging/troubleshooting? Specifically:
Assigning workable values to parameters/finding out where they're passed from or stored
Heavy use of temp tables/table variables/real tables that get truncated
Verifying results
Bonus points if it's for a company with fairly high developer turnover or use of consultants so you don't have anyone to ask
Double bonus points if those developers and consultants clearly never read SSC before hitting F5
:Whistling:
Thanks
1) First thing I do is go KAAACCCHHHHIIINNNNGGGG!!! I am a consultant companies bring in to clean up the messes others create. Think Winston Wolfe. 😎
2) Replace all table variables with temp tables unless you can observe exceptionally high call rates that lead to PFS/SGAM latch contention in tempdb. And if you DO see that, address it first by other means associated with tempdb and trace flag(s). Oh, the one other use case for table vars is if they roll back transactions and need to keep data for audit/other purposes. My guess is that you don't see a single BEGIN TRAN anywhere anyway though. 🙂 (See the sketch after this list.)
3) Profiler is your friend. Capture workload(s) and use aggregate trace analysis to look for problems and to find the BAD parameter values that cause them.
4) Along the same lines, check for significant data value skew, which creates opportunities to PREVENT parameter sniffing from killing you. Several mechanisms are available here (one is shown in the sketch after this list).
5) Simply ask the users what their pain points are and go fix them specifically.
6) Verifying results and preventing regressions can be done with testing you set up using the awesomeness at tsqlt.org (a bare-bones example is at the end of this post).
7) Hire me to come in and help everyone (including you) out. Mostly kidding there - I am crushed at the moment!! 😛 But I DO LOVE this type of client because there are always SOOO many pieces of low-hanging fruit that it makes me look like I walk on water right out of the box!! 😎
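To illustrate points 2 and 4, here is a minimal sketch; the table, column, and parameter names are made up for illustration:

-- Point 2: swap a table variable for a temp table so the optimizer gets real statistics
-- Before: DECLARE @Projects TABLE (ProjectID int PRIMARY KEY, Status varchar(20));
CREATE TABLE #Projects (ProjectID int PRIMARY KEY, Status varchar(20));

INSERT INTO #Projects (ProjectID, Status)
SELECT ProjectID, Status
FROM dbo.Project              -- hypothetical source table
WHERE Status = 'Active';

-- Point 4: when a parameter hits heavily skewed data, one option is a per-execution plan
DECLARE @Status varchar(20) = 'Active';   -- in real code this would be a procedure parameter
SELECT p.ProjectID, p.Status
FROM dbo.Project AS p
WHERE p.Status = @Status
OPTION (RECOMPILE);   -- or OPTION (OPTIMIZE FOR (@Status = 'Active')), among other options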
Best of luck! I am envious that you get to play in such a target-rich environment!!
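And for point 6, a bare-bones tSQLt test might look something like this. The table and function under test are hypothetical; tSQLt.FakeTable swaps the real table for an empty shell so the test controls its contents:

EXEC tSQLt.NewTestClass 'ArchiveTests';
GO
CREATE PROCEDURE ArchiveTests.[test active projects are excluded from the archive list]
AS
BEGIN
    EXEC tSQLt.FakeTable 'dbo.Project';                  -- hypothetical table
    INSERT INTO dbo.Project (ProjectID, Status)
    VALUES (1, 'Active'), (2, 'Closed');

    SELECT ProjectID INTO #Expected FROM (VALUES (2)) AS x (ProjectID);

    SELECT ProjectID INTO #Actual
    FROM dbo.fn_GetArchivableProjects();                 -- hypothetical function under test

    EXEC tSQLt.AssertEqualsTable '#Expected', '#Actual';
END;
GO
EXEC tSQLt.Run 'ArchiveTests';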
Best,
Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru on googles mail service
October 6, 2014 at 8:44 am
TheSQLGuru (10/6/2014)
sqldriver (10/6/2014)
If you're tasked with debugging/troubleshooting? Specifically:
Assigning workable values to parameters/finding out where they're passed from or stored
Heavy use of temp tables/table variables/real tables that get truncated
Verifying results
Bonus points if it's for a company with fairly high developer turnover or use of consultants so you don't have anyone to ask
Double bonus points if those developers and consultants clearly never read SSC before hitting F5
:Whistling:
Thanks
1) First thing I do is go KAAACCCHHHHIIINNNNGGGG!!! I am a consultant companies bring in to clean up the messes others create. Think Winston Wolfe. 😎
2) Replace all table variables with temp tables unless you can observe exceptionally high call rates that are leading to PFS/SGAM latch issues in tempdb. And if you DO see that, address it first by other means associated with tempdb and trace flag(s). Oh, the one other use case for table vars is if they roll back transactions and need to keep data for audit/other purposes. My guess is that you don't see a single BEGIN TRAN anywhere anyway though. 🙂
3) Profiler is your friend. Capture workload(s) and use aggregate trace analysis to look for issues and also to find BAD parameters that cause issues.
4) Along the same lines, check for significant data value skew which leads to opportunities to PREVENT parameter sniffing from killing you. Several mechanisms available here.
5) Simply ask the users what their pain points are and go fix them specifically.
6) Verifying results and preventing regressions can be done with testing you set up using the awesomeness at tsqlt.org.
7) Hire me to come in and help everyone (including you) out. Mostly kidding there - I am crushed at the moment!! 😛 But I DO LOVE this type of client because there are always SOOO many pieces of low-hanging fruit that it makes me look like I walk on water right out of the box!! 😎
Best of luck! I am envious that you get to play in such a target-rich environment!!
I wish this were just a performance tuning thing, I'd at least have someplace to start. What I'm working on is a data archival process that's half in SSIS and follows all kinds of weird pile-on logic to figure out if a project is active or not. It stopped working suddenly when half of the move paths started turning NULL, and since the whole thing is built on temp tables, figuring out where it happened is like asking a drunk where they parked.
I hate SSIS.
Thanks
October 6, 2014 at 8:55 am
I'd approach it kind of like a piece of unclaimed luggage: Isolate it, then put a fireproof container around it and try to see if it detonates.
All right, so I'm being dramatic, but the idea is the same. If you can at all afford to, pull it off of production until you know what it does, what makes it tweak and what makes it puke all over itself. Talking from experience, it's better to scale back and build up than to try to build a road through a live minefield.
I do like Kevin's list a fair amount, although I'd start with 5) and 3) before jumping into ANY changes. You have to know why you're being asked to "troubleshoot/debug": what is the issue, what is supposed to happen that isn't, what are the performance concerns, etc. Just like pretty much all continuous improvement models will tell you, you need to know what the issues are and what the baseline behavior is before recommending any changes.
In addition to Profiler, it may be worth adding some useful logging which would shine a light on which parts of the code are being used, and whether the execution flow is really what you expect it to be. I walked into a scenario like you mentioned (all the way down to the double bonus round), with 3 levels of cursors (cursors within cursors within cursors), GOTO statements, and of course (nolock) on every join: simply knowing which branches were being executed, how many times they actually ran in a given execution, etc. was dramatically beneficial. You can to some degree decipher that from Profiler, but I found I covered a lot more ground by supplementing it with some added logging of my own.
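The logging doesn't need to be fancy; a rough sketch of the kind of thing I mean, with placeholder table and column names:

-- Hypothetical, minimal execution log
CREATE TABLE dbo.ExecutionLog (
    LogID       int IDENTITY(1,1) PRIMARY KEY,
    ProcName    sysname      NULL,
    StepLabel   varchar(200) NOT NULL,
    RowsTouched int          NULL,
    LoggedAt    datetime     NOT NULL DEFAULT (GETDATE())
);

-- Drop calls like this at each branch, GOTO target, or cursor iteration you care about:
INSERT INTO dbo.ExecutionLog (ProcName, StepLabel, RowsTouched)
VALUES (OBJECT_NAME(@@PROCID), 'entered cursor level 2', @@ROWCOUNT);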
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
October 6, 2014 at 9:35 am
My steps, in order:
1. WHAT is the problem?
2. WHY is it a problem?
3. Is it worth it?
Then, once I get those questions answered, I start in this manner. Remember, it's ALWAYS about performance:
1. Look at the code. Is it understandable?
If it is understandable:
1. Determine some baselines. See where the pain points may be.
2. Bite off the easy parts. Fix the non-sargable where clauses. Look at the index usage (a starter query follows these steps). Etc.
3. Repeat. And with each pass, document, document, document.
If it is NOT understandable, the approach is actually the same as above, with some extra steps first:
1. Rename the variables, aliases, temp tables, etc. that do not make sense to names that do. Bite off the easy parts, work your way up.
2. Step through the code, if possible.
3. Go to step #1, above.
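The starter query I mentioned is nothing fancy; a sketch against the index usage DMV (the counters reset at instance restart, so read them with that in mind), plus an example of the non-sargable pattern I look for:

-- Which indexes in the current database are actually being used, and how
SELECT  OBJECT_NAME(s.object_id) AS table_name,
        i.name                   AS index_name,
        s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON i.object_id = s.object_id
       AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID()
ORDER BY s.user_seeks + s.user_scans + s.user_lookups DESC;

-- Typical non-sargable predicate and a sargable rewrite (column name is illustrative):
-- WHERE CONVERT(varchar(8), OrderDate, 112) = '20141006'        -- cannot use an index seek
-- WHERE OrderDate >= '20141006' AND OrderDate < '20141007'      -- can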
Almost all of the code I have to re-work was written by people who no longer work here, with no documentation, and it performs poorly.
Throughout this process, I also have learned to question any "logic" in the code. I have had far too many cases where the logic was incorrect, and had been incorrect for some time. Some of these were disastrous. One of the worst was an underbilling situation that had been occurring for almost 3 years. That added up to 1.2 million, which could not be recovered. The company's total revenues were 19 million, so that was a big hit.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
October 6, 2014 at 9:47 am
sqldriver (10/6/2014)
If you're tasked with debugging/troubleshooting? Specifically:
Assigning workable values to parameters/finding out where they're passed from or stored
Heavy use of temp tables/table variables/real tables that get truncated
Verifying results
Bonus points if it's for a company with fairly high developer turnover or use of consultants so you don't have anyone to ask
Double bonus points if those developers and consultants clearly never read SSC before hitting F5
:Whistling:
Thanks
Inheriting a 5,000 line stored procedure with no functional documentation and little code commenting is not just an occasional gripe but situation normal for me. The first thing I do is block off an entire day just to read the code from top to bottom, adding my own comments for each block, and sometimes typing up a wireframe with pseudo-code into a document as I go along.
Also, if the procedure involves a lot of conditional code blocks, branching logic, and dynamic sql driven by parameters and meta-data configuration, then I have a stored procedure I use to perform runtime step logging by inserting into a [RunLog] table. Of course I comment out the logging before deploying to production, but I can always go back and uncomment it when I need to debug something. Depending on the size of the stored procedure, I may end up inserting a hundred or more such log records. It's an upfront investment of time, but in the end I have a very good idea of how the logic flow works and what specific steps need performance tuning.
So, in the example below it will insert one row indicating what table is being inserted, another row containing the dynamic sql statement being executed, and the 3rd row indicating the number of rows inserted.
exec dbo.usp_RunLog_Insert @Run_ID, 1620, 'INSERT #PersonName';
exec dbo.usp_RunLog_Insert @Run_ID, 1630, '@personname_sql', @personname_sql;
insert into #PersonName . . .
exec ( @personname_sql );
exec dbo.usp_RunLog_Insert @Run_ID, 1640, '@@rowcount', @@rowcount;
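The supporting table and procedure don't need to be elaborate; a minimal sketch of how they might look (the exact columns and types can vary):

CREATE TABLE dbo.RunLog (
    RunLog_ID  int IDENTITY(1,1) PRIMARY KEY,
    Run_ID     int          NOT NULL,   -- one value per execution of the procedure being debugged
    Step_ID    int          NOT NULL,   -- e.g. 1620, 1630, 1640
    Log_Label  varchar(200) NOT NULL,
    Log_Value  varchar(max) NULL,       -- dynamic SQL text, row counts, etc.
    Logged_At  datetime     NOT NULL DEFAULT (GETDATE())
);
GO
CREATE PROCEDURE dbo.usp_RunLog_Insert
    @Run_ID    int,
    @Step_ID   int,
    @Log_Label varchar(200),
    @Log_Value varchar(max) = NULL
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.RunLog (Run_ID, Step_ID, Log_Label, Log_Value)
    VALUES (@Run_ID, @Step_ID, @Log_Label, @Log_Value);
END;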
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
October 6, 2014 at 10:29 am
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo.
There aren't enough green bottles from Islay to get me through this.
Thanks
October 6, 2014 at 10:54 am
sqldriver (10/6/2014)
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo. There aren't enough green bottles from Islay to get me through this.
Thanks
You need functional documentation and consensus among the end users and management about how it's supposed to work. Without that, it's a lost cause.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
October 6, 2014 at 11:00 am
Agree with Eric. This sounds like something that evolved over time. Little patch here, little patch there, little patch everywhere.
Document, and if possible, clean up the parts you can make sense of, and get some consensus on how the rest of it needs to work.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
October 6, 2014 at 11:05 am
Eric M Russell (10/6/2014)
sqldriver (10/6/2014)
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo. There aren't enough green bottles from Islay to get me through this.
Thanks
You need functional documentation and consensus among the end users and management about how it's supposed to work. Without that, it's a lost cause.
Try to lay out the "cost of doing nothing" vs. the "cost to fix" vs. the "cost to replace". This would START with the functional documentation mentioned above, but would likely entail talking through the issues, and what those issues cost, with mgmt and the user community.
----------------------------------------------------------------------------------
Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user..All right - what was my emergency again?
October 6, 2014 at 11:07 am
Eric M Russell (10/6/2014)
sqldriver (10/6/2014)
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo. There aren't enough green bottles from Islay to get me through this.
Thanks
You need functional documentation and consensus among the end users and management about how it's supposed to work. Without that, it's a lost cause.
There are no end users. It's just a process that was kicked off ad-hoc. There are only people who have a vague understanding of what the outcome was: databases from two servers were detached and moved to the job folders they're associated with, and a file of ndmp paths to feed to our tape backup software was produced.
I found one comment:
/*
By Alexei
*/
There's also a logging table with one entry from 2010 marked with TEST all the way across.
I can't help but think I could do this better, but you're right about management consensus.
October 6, 2014 at 11:15 am
Michael L John (10/6/2014)
Agree with Eric. This sounds like something that evolved over time. Little patch here, little patch there, little patch everywhere.Document, and if possible, clean up the parts you can make sense of, and get some consensus on how the rest of it needs to work.
Yeah, it has been updated every time a new piece of in-house software has been introduced to the ecosystem to account for a different data set created elsewhere, or a different naming convention.
October 6, 2014 at 11:22 am
sqldriver (10/6/2014)
Eric M Russell (10/6/2014)
sqldriver (10/6/2014)
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo. There aren't enough green bottles from Islay to get me through this.
Thanks
You need functional documentation and consensus among the end users and management about how it's supposed to work. Without that, it's a lost cause.
There are no end users. It's just a process that was kicked off ad-hoc. There are only people who have a vague understanding of what the outcome was: databases from two servers were detached and moved to the job folders they're associated with, and a file of ndmp paths to feed to our tape backup software was produced.
I found one comment:
/*
By Alexei
*/
There's also a logging table with one entry from 2010 marked with TEST all the way across.
I can't help but think I could do this better, but you're right about management consensus.
From what you describe, it sounds like a back office DBA operation: just taking databases offline and backing them up to network folders, which could be a 100 line PowerShell script. I don't understand how "38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions... SSIS that does a bunch of VB voodoo" would come into play. However, perhaps tracing the process running with SQL Profiler will provide insight into what all it's doing. Also, since it's an ad-hoc process, asking under what circumstances it's typically run, who gets notified, and what happens next would provide clues into how it fits into the bigger picture. Hopefully, your research will reveal that it's an obsolete legacy process that can just be discarded.
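For illustration only, the per-database T-SQL at the heart of that kind of script is roughly this (the database name and scenario are placeholders; whatever wraps it would loop over the databases and handle the file moves):

USE master;
-- Detach the database so its .mdf/.ldf files can be moved to the job folder;
-- the resulting path then goes into the ndmp list handed to the tape backup software.
EXEC sp_detach_db @dbname = N'Project12345', @skipchecks = 'false';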
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
October 6, 2014 at 12:00 pm
Eric M Russell (10/6/2014)
sqldriver (10/6/2014)
Eric M Russell (10/6/2014)
sqldriver (10/6/2014)
At what point do you decide to throw it in and start over from scratch? This thing uses 38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions, and that's before I even get to the for each loop container in SSIS that does a bunch of VB voodoo. There aren't enough green bottles from Islay to get me through this.
Thanks
You need functional documentation and consensus among the end users and management about how it's supposed to work. Without that, it's a lost cause.
There are no end users. It's just a process that was kicked off ad-hoc. There are only people who have a vague understanding of what the outcome was: databases from two servers were detached and moved to the job folders they're associated with, and a file of ndmp paths to feed to our tape backup software was produced.
I found one comment:
/*
By Alexei
*/
There's also a logging table with one entry from 2010 marked with TEST all the way across.
I can't help but think I could do this better, but you're right about management consensus.
From what you describe, it sounds like a back office DBA operation; just taking databases offline and backing them up to network folders; which could be a 100 line PowerShell script. I don't understand how "38 tables, countless temp tables, 47 stored procs of varying length, and 8 scalar functions... SSIS that does a bunch of VB voodoo" would come into play. However, perhaps tracing the process running with SQL Profiler will provide insight into what all it's doing. Also, since it's an ad-hoc process, asking under what circumstances it's typically run, who gets notified, and what happens next would provide clues into how it fits into the bigger picture. Hopefully, your research will reveal that it's an obsolete legacy process that can just be discarded.
Spot on about the back office DBA operation.
It's very poorly done, and there's a different staging table for each kind of process and each process step, along with validation and manipulation (replacing '\' in system paths with '/' in ndmp paths, for instance). Don't forget the while loop file path parser.
So there are 10 real steps for each of three processes, and then a few additional staging and finishing tables where everything gets brought back together. The temp tables in between are used, oddly, to assign uniqueifiers to things (casting various numeric row data as a varchar string and adding something like CHECKSUM('f') to the end).
The SSIS portion moves the files and writes the .csv file, and whatnot.
From what I've been told, it was kicked off roughly once a week by the guy who wrote it, and he'd take the file and slowly archive the data off live disks to tape. No one else had a hand in it.
Unfortunately it's necessary; otherwise no one would clean up the databases from these servers. When I first started auditing data back in March, there were ~3500 databases on one server and ~6000 on the other. I've got it down to ~1000 and ~2500. The data can't stay online, but it can't be dropped either, because clients may need it restored for an investigation or a different search/manipulation. Seven years on the retention.
You should see the process that goes in to tracking this data down to restore from tape :hehe:
October 7, 2014 at 11:55 am
I've inherited tragedies similar to what you describe.
My first step is to open Visio. My second step is cancel my meetings for a week.
I create a hybrid DFD/processing diagram. I'm not sure if anyone but me would go to that level of detail, but it's the only way I can keep something like that straight by the time I'm done. Go through the process, inch by inch, detailing everything.
These are the kinds of diagrams you wallpaper with, so you know. They're overkill, and designed to be so. You can't see the patterns unless they slap you in the face, repeatedly.
From there, I collapse the diagram. I figure out where repetitions are occurring (transforming the data from location A in 8 different places without the original version(s) being needed between the steps) and determine the overall shape of the process. From that, I write up the functional rules and then the tech spec/definitions after the initial collapse of the obvious 'missed optimizations'.
From those functional rules and specs, I put what I found into a nice coffin, bury it with honors, and start over.
Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.
For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables
Twitter: @AnyWayDBA