Being Responsible for Code

  • Peter Maloof (4/30/2012)


    Matt Miller (#4) (4/30/2012)


    A given metric might sound good from the top of the ivory tower, but be completely unrealistic at trench level.

    And sometimes metrics become the end-all; for example, the duration of the call being more important than whether or not the customer's problem is fixed.

    When only one is used, or one is favored above all others, that's exactly what tends to happen: the metric eclipses the higher purpose it was supposed to support.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?

  • This line of thinking leads to only one conclusion - all defective software is the fault of the QA process. Boy, it's good to be a developer if you have someone to blame for all your shortcomings. This attitude is nothing new; it goes back to the earliest days of software development. I've been doing software development for most of my life, and I remember having a discussion with my boss over 30 years ago when he said he'd never work in QA because they get blamed for all the problems while the developers get praised for everything that goes right.

    The reality is that QA is just one piece of the development/delivery process, and the focus of all the participants is to deliver software that is as free of defects as possible while working within budget and time (delivery) constraints. Exactly how you go about doing this is beyond the scope of this discussion. But it's a subject that should be seriously researched and discussed.

    By the way, notice that I didn't use the words "defect-free" because I don't think that's possible. I wish I knew how to prove this in some rigorous mathematical way, but my experience and the experiences of many other serious developers seem to verify it.
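    One established result that comes close is Rice's theorem - not a proof that every program has defects, but a proof that no general procedure can verify the absence of them. Stated in LaTeX, with \(\mathcal{C}\) the class of partial computable functions and \(\varphi_e\) the function computed by program \(e\):

    \[
    \emptyset \neq P \subsetneq \mathcal{C} \;\Longrightarrow\; \{\, e \mid \varphi_e \in P \,\} \text{ is undecidable.}
    \]

    So "this program is defect-free" - a non-trivial semantic property for any behavioural definition of "defect" - cannot be decided by any algorithm that works on arbitrary programs.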

  • I worked at a place like this and it worked very well. They had the highest standards of any company I've worked for and they had a good work environment. When developers brought their applications down, this company added additional embarrassment by having a siren go off and lights flash so that everyone in the data center would know something was wrong (in the days before cell phone notifications).

    Many of the responses to this post make an assumption that this would be the only means to assess employees. The reason it worked at this company is because it was one of the many factors taken into consideration during performance reviews, so the developer who added 5000 lines of code that year and made the system go down twice was rated better than the developer who never caused problems but never promoted anything to production. Also, there wasn't a QA department and developers did their own testing.

    LinkedIn - http://www.linkedin.com/in/carlosbossy
    Blog - http://www.carlosbossy.com
    Follow me - @carlosbossy

  • I'll start by saying that I think that in general it would be a good idea if everyone accepted responsibility for what they do; not just people who write code, but everyone in the whole process from the CEO down to the office cleaner.

    But I think it's fairly rare for that to happen: managers distort the system by playing CYA and/or favorites, while developers distort it by blaming QA or Architects or Designers or the Requirements Team. Everyone blames someone else, and the final allocation of blame (leading to joblessness or a low pay rise) will be pretty arbitrary, because it will be made either by someone too distant from the event to know what's going on or by someone so close to it that their primary objective is to avoid any hint of blame.

    The cynical bit being over, now comes the long rant.

    Total avoidance of error is generally impossible. Formal methods help, but they will not get you all the way there, and in any case the teams qualified to use any of the useful formal development verification systems out there can be counted on the fingers of two hands (not including the thumbs). A typical piece of software is sufficiently complex that exhaustive testing is not economically feasible (there are lots of small bits and pieces where it is, of course, but for anything big it's out of the question).

    It's also true that the attitude that all errors that get out are the responsibility of management or of QA rather than of the developer is unmitigated nonsense. It's the developer's job to do enough unit testing of his code that it works to a reasonable standard; QA is not there to remove bugs the developer should have spotted himself, it's there to spot bugs that the developer couldn't reasonably be expected to find. Some of these will arise from misunderstanding of the interface between two components that come from different developers (or from different development teams), and those are perhaps the fault of whoever wrote the interface specification. Others will also be errors in interaction between differently sourced components, but through misdesign rather than misunderstanding - which is either the responsibility of the designer or of whoever wrote the requirements specification.

    Of course some errors will be the responsibility of QA, because they've failed to detect something they should have detected, while some will be the responsibility of the developer because there's no way they should have got past any decent unit test (usually because the developer "knew" that bit was OK so it didn't need testing). And often errors will be the result of management decisions to truncate timescales, so that requirements refinement, architecture/design verification, development and unit test, or integration, system test, and QA is cut short before it can complete its share of the tasks. Those decisions are either management errors or an essential reaction to commercial reality: release it now and survive in the marketplace (albeit with pain), or don't and go bust.
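    To make the unit-testing point concrete, here is a minimal sketch in Python (parse_price is a hypothetical function invented purely for illustration): the developer writes tests for the cases he can reasonably foresee, and leaves QA to hunt for the interactions nobody anticipated.

        import unittest

        def parse_price(text):
            """Parse a price string such as '$1,234.56' into integer cents.
            Hypothetical example, not from any real codebase."""
            cleaned = text.strip().lstrip("$").replace(",", "")
            dollars, _, cents = cleaned.partition(".")
            # Pad a lone cents digit: '5' means 50 cents, not 5.
            return int(dollars or 0) * 100 + int(cents.ljust(2, "0"))

        class ParsePriceTests(unittest.TestCase):
            # The foreseeable cases belong in the developer's own tests;
            # the unforeseeable interactions are what QA exists to find.
            def test_plain_amount(self):
                self.assertEqual(parse_price("$12.34"), 1234)

            def test_thousands_separator(self):
                self.assertEqual(parse_price("$1,234.56"), 123456)

            def test_whole_dollars(self):
                self.assertEqual(parse_price("$5"), 500)

            def test_single_cent_digit(self):
                self.assertEqual(parse_price("$5.5"), 550)

        if __name__ == "__main__":
            unittest.main()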

    Tom

  • L' Eomot Inversé (5/2/2012)

    ...the attitude that all errors that get out are the responsibility of management or of QA... is unmitigated nonsense;

    it's the developer's job to do enough unit testing of his code that it works to a reasonable standard; QA is ... there to spot bugs that the developer couldn't reasonably be expected to find...

    Well said!

    I was trying to figure out how to say the part I quoted above, but I just wasn't happy with what I wrote.
