March 11, 2019 at 9:15 pm
Comments posted to this topic are about the item Dealing with Technical Debt
March 12, 2019 at 3:56 am
Very interesting topic. Like most back-end concerns, technical debt is quite abstract and hard to measure.
I do like the idea of using a measurement system for all technical debt. Unfortunately, we don't have one.
In our firm, it is usually reviewed (or paid off) for a particular module or application only when an enhancement is required or an upgrade is planned.
March 12, 2019 at 4:26 am
I (being the only one here doing BI work) have a very basic measurement for technical debt:
Once an existing application exceeds its target runtime, it's time to revisit the SSIS package and make it better. These packages have usually existed for years, and when I revisit them I start asking the usual questions, like "Do we really need to store a date as an int?" Once I get a "no" to something like that, my work on the package begins.
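For anyone wondering what that kind of fix can look like, here's a minimal sketch; the table and column names are made up for illustration, not taken from my actual packages:

-- Hypothetical fact table that stores dates as INT (e.g. 20190312),
-- migrated in place to a real DATE column.
ALTER TABLE dbo.FactSales ADD SaleDate date NULL;
GO

-- Backfill the new column by converting the yyyymmdd integer (style 112).
UPDATE dbo.FactSales
SET    SaleDate = TRY_CONVERT(date, CONVERT(char(8), SaleDateInt), 112)
WHERE  SaleDate IS NULL;
GO

-- Once every package and query reads SaleDate, the old INT column can go:
-- ALTER TABLE dbo.FactSales DROP COLUMN SaleDateInt;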
In the last 8 months I've reduced daily run times by a total of 9 hours, while building almost nothing new: an import of 2 flat files and a simple fact table.
Next up for me will most likely be the upgrade from SSIS 2012 to 2017, with everything that goes along with that: upgrading the package model, checking whether performance still holds up, and so on.
I feel like all I'm doing is having the customer pay for their technical debt.
March 12, 2019 at 9:26 am
I appreciate this article Steve and the one you linked to. Actually assigning some measure to technical debt is a great idea.
I've considered the technical debt I've added to the software I've written here. I'd say it's between 10% and 15%. That's not so much because I'm a brilliant developer; it's more because at least half of what I do is maintain something that someone else wrote 10 or more years ago. And there are very serious consequences for modifying code without first getting permission, which is hard to get.
I think you touched upon something related when you spoke about how an organization could measure their technical debt. I'm referring to doing code reviews. Not directly addressing technical debt, but certainly related to it. We don't do any code reviews here. In fairness to where I work now, I'll say that of all the places I've worked, none of them have been open to doing code reviews.
Kindest Regards, Rod. Connect with me on LinkedIn.
March 12, 2019 at 9:43 am
I was surprised at the amount of debt they calculated. It translated to about $4,600, which is roughly 70 hours of work at $65/hour. I think it was underestimated, but calculating value in this way makes it very easy to understand the impact.
He was also primarily focused on old debt. Unless we calculate the future value of the debt we are presently incurring, we will have a very incomplete picture. That said, if we could value new debt incurred in current projects, we would have a more realistic way of explaining the true cost of an application.
------------
Buy the ticket, take the ride. -- Hunter S. Thompson
March 12, 2019 at 10:05 am
What is the difference between this metric and unit testing? In other words, if you're just refactoring because you can, and the basic functionality you need from it still works, then is that time well spent? If it's truly broken then shouldn't your testing reveal that?
-------------------------------------------------------------------------------------------------------------------------------------
Please follow Best Practices For Posting On Forums to receive quicker and higher quality responses
March 12, 2019 at 10:41 am
jonathan.crawford - Tuesday, March 12, 2019 10:05 AM: What is the difference between this metric and unit testing? In other words, if you're just refactoring because you can, and the basic functionality you need from it still works, then is that time well spent? If it's truly broken then shouldn't your testing reveal that?
I am not familiar with metric testing, so I can't answer your first question. My bad.
Concerning refactoring, I see value in doing it, especially if it results in the code doing exactly what it did before. If someone just refactors to pass the time, then I'd agree that's a waste of time and effort. But one of the goals of refactoring code is to make it adhere more closely to the SOLID principles. For example, I've seen lots of code in which a module is meant to do one thing, but when you look through the code you see that it's doing half a dozen or more things in the course of accomplishing that one thing. That violates the "S" in SOLID (Single Responsibility Principle). Refactoring code like that leaves you with routines which do the one thing they're supposed to do, and that one thing is done by only one routine, rather than by several routines scattered throughout the code.
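To put that in database terms, here's a rough sketch of the idea; the procedure names are invented purely for illustration, not taken from any real system:

-- Before: one procedure stages a file, validates it, and loads the fact table
-- in a single body, so no step can be changed or tested on its own.
-- After: each step gets its own single-responsibility procedure, and a thin
-- orchestrator simply calls them in order.
CREATE OR ALTER PROCEDURE dbo.StageDailyExtract AS
BEGIN
    SET NOCOUNT ON;
    -- staging / BULK INSERT logic only
END;
GO

CREATE OR ALTER PROCEDURE dbo.ValidateDailyExtract AS
BEGIN
    SET NOCOUNT ON;
    -- row counts, required columns, referential checks only
END;
GO

CREATE OR ALTER PROCEDURE dbo.LoadSalesFact AS
BEGIN
    SET NOCOUNT ON;
    -- INSERT/MERGE into the fact table only
END;
GO

CREATE OR ALTER PROCEDURE dbo.RunDailySalesLoad AS
BEGIN
    SET NOCOUNT ON;
    EXEC dbo.StageDailyExtract;
    EXEC dbo.ValidateDailyExtract;
    EXEC dbo.LoadSalesFact;
END;
GO

Each piece can now be reviewed, tuned, or reused without dragging the others along with it.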
Kindest Regards, Rod. Connect with me on LinkedIn.
March 12, 2019 at 11:23 am
All this is very thought-provoking. We've all experienced it, but now there's a name for it.
March 12, 2019 at 11:24 am
Rod at work - Tuesday, March 12, 2019 10:41 AM: I am not familiar with metric testing, so I can't answer your first question. My bad. [...]
"this metric" = technical debt. I was just wondering if there was really any value in trying to decide somewhat subjectively how "bad" existing code is, when you could use the mechanism of unit tests to determine if something was doing its job. Over time, as all the other unit tests are passing, you could add in things for quality checks like SOLID
-------------------------------------------------------------------------------------------------------------------------------------
Please follow Best Practices For Posting On Forums to receive quicker and higher quality responses
March 12, 2019 at 11:54 am
From the article:
It takes so little to actually do things the right way. Don't fall prey to the ridiculous "it's ok this time" mentality or the "we have a schedule to meet" mentality. If you don't do it right the first time then, just like compound interest, you'll pay many times more trying to fix the problem (and no one fixes technical debt until it becomes a problem), including, perhaps, the loss of reputation in the eyes of the customer and future customers, because bad news travels faster than the elevator someone is riding in or the golf ball they just powered toward the hole.
To put it more bluntly, it's never ok to feed the baby a turd so stop trying to justify it.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 12, 2019 at 3:03 pm
jonathan.crawford - Tuesday, March 12, 2019 10:05 AM: What is the difference between this metric and unit testing? In other words, if you're just refactoring because you can, and the basic functionality you need from it still works, then is that time well spent? If it's truly broken then shouldn't your testing reveal that?
Only if you cover the broken part well with tests. A single test often looks at only one piece of functionality, in a limited way. You usually need more tests to cover the feature well, and you might have a testing deficiency.
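For database code specifically, that coverage gap is easy to see with a unit-testing framework such as tSQLt. A minimal sketch, assuming tSQLt is installed; dbo.IntToDate is a hypothetical function standing in for whatever you're actually testing:

-- Hypothetical function under test: converts a yyyymmdd integer to DATE.
CREATE OR ALTER FUNCTION dbo.IntToDate (@d int)
RETURNS date
AS
BEGIN
    RETURN TRY_CONVERT(date, CONVERT(char(8), @d), 112);
END;
GO

EXEC tSQLt.NewTestClass 'ConversionTests';
GO

CREATE OR ALTER PROCEDURE ConversionTests.[test IntToDate converts a yyyymmdd integer]
AS
BEGIN
    DECLARE @expected date = '20190312';
    DECLARE @actual   date = dbo.IntToDate(20190312);

    EXEC tSQLt.AssertEquals @Expected = @expected, @Actual = @actual;
END;
GO

EXEC tSQLt.Run 'ConversionTests';

A single test like this pins down exactly one conversion path; it says nothing about NULLs, bad values such as 20190230, or the packages that feed the column, which is the coverage problem above.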
March 12, 2019 at 4:16 pm
Steve Jones - SSC Editor - Tuesday, March 12, 2019 3:03 PM: Only if you cover the broken part well with tests. A single test often looks at only one piece of functionality, in a limited way. You usually need more tests to cover the feature well, and you might have a testing deficiency.
Right, I get that part. But if you can identify it as "technical debt", and assign some value to it, can't you just as easily design a test around it? I'm trying to figure out if I should try to wrap my head around a separate ongoing effort for my team that looks at the quality scores and tries to make sure everything reaches a high enough score IN ADDITION TO the "pass all your unit tests" bar that we've already set. Or, really, are they the same thing?
Either it's 'debt' that doesn't actually mean it's broken, in which case why bother? Or it's Jeff's lovely depiction of a turd, in which case it shouldn't have passed testing and, when caught, should be included.
-------------------------------------------------------------------------------------------------------------------------------------
Please follow Best Practices For Posting On Forums to receive quicker and higher quality responses
March 13, 2019 at 2:00 am
I'm with Jeff on this one. As an intellectual construct tech debt sounds plausible. So does the good debt/bad debt concept. I struggle to think of an example where I have seen tech debt paid off. No matter how "good" good debt may be if it is not paid it will become bad debt.
Whenever I hear the promise that tech debt will be paid I think that someone is being played for a mug.
If you know that tech debt will never be paid, that influences your design decisions. I've seen systems that are massively overcomplicated, generating more log entries than business data, because the people designing and building them were trying to defend against the consequences of the tech debt they were forced to accept.
March 13, 2019 at 2:22 am
I definitely am with Jeff, too. My customer is willing to accept (in my opinion) quite a lot of debt as long as things run smoothly. We've already been through an "unusable" scenario, which we wouldn't have had to endure if the customer had listened to me.
It's not that my customer is against any change I propose. However, sorting out debt like dates stored as integer values might not bring instant relief or even visible change, but as a professional I know it's necessary once you're looking at processing 100,000,000 rows every night. Answers like "oh well, order more RAM" are great if you can afford more resources, but eventually you'll run into processing limits again.
The hardest part is making the business see the value right away when you've spent (or are about to spend) 5 days changing things for the better. The easiest way around this for me was to change something while tasked with something else, which produced an instant benefit (30 minutes less processing time). But there is definitely a lot of potential for resisting such changes completely, because everything is still running somehow.
I also think going the extra mile during testing, by processing much more data than you have to anticipate in production, is an excellent way to make sure whatever you built will meet its targets for a long time, rather than having to revisit your own code, which will most likely feel like "legacy code" by then.
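One cheap way to do that kind of oversized test in SQL Server is to generate synthetic volume before sign-off. A rough sketch; the staging table, columns, and row count are purely illustrative:

-- Illustrative staging table for the load under test.
CREATE TABLE dbo.StageSales (OrderID bigint, SaleDate date, Amount decimal(10,2));
GO

-- Generate 10 million synthetic rows (well beyond a typical daily load)
-- to see whether the package or procedure still meets its runtime target.
;WITH n AS
(
    SELECT TOP (10000000)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM   sys.all_objects a
    CROSS JOIN sys.all_objects b
    CROSS JOIN sys.all_objects c
)
INSERT INTO dbo.StageSales (OrderID, SaleDate, Amount)
SELECT rn,
       DATEADD(DAY, CAST(rn % 3650 AS int), '20100101'),  -- spread dates over ~10 years
       (rn % 100000) / 100.0                              -- arbitrary amounts
FROM   n;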
March 13, 2019 at 2:45 am
I think that what a lot of people don't realise is that technical debt, like any other debt, accrues interest.
Start with a small issue that you're willing to live with at the time, just to get the product out. Several years later, the database is fifty times larger, with more transactions going through than they would have ever hoped for and that little debt is now a monster that is threatening the performance of the system and the reputation of the company.
Of course, the people that agreed to the debt and created it have moved on and it is now the job of the others to understand it and remove it - with the increased overhead within a busy system that implies.
Not that I'm bitter, you understand.
In my experience technical debt is noticed as the impact upon the system's performance. Generally the cost of correcting that debt doesn't really surface. What tends to get in the way is the ever-constant pressure to continue development.
What really stops technical debt being reduced is the generation of more.