A Buggy Release

  • Comments posted to this topic are about the item A Buggy Release

  • Yes. So many times I have an argument (it rarely remains a discussion) that old code MUST be removed. I keep hearing "but we may need it" and "it is useful documentation", to which I usually explain that, in my rarely humble opinion, they are highlighting the value of source code control systems, and that historical code just confuses maintenance and is dangerous because the assumptions that were in force when the code was live may no longer apply.

    All that old code shows is that it existed once upon a time. It doesn't show that it was ever valid, or that it ever made it to production. Source code control repositories can answer those questions, especially if the unit tests of the time show that the code was tested too.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Everyone makes mistakes, but I contend that there will be fewer mistakes if you have a more flexible and knowledgeable workforce, whether that's sys admins and developers doing more operations or operators doing more sys admin and development. I'm all for it.

    cloudydatablog.net

  • Gary Varga (7/26/2016)


    Yes. So many times I have an argument (it rarely remains a discussion) that old code MUST be removed.

    Amen brother!

    I had the experience of trying to deprecate a system that had been superseded by three subsequent generations. In the DB we could see the occasional call to the ancient system, and that was enough to block switching it off and deprecating it. From painful experience we knew that just disabling ancient systems could result in an almighty blowup.

    Our problem was that no one could work out where the calls were coming from. There was nothing in the code repositories and no obvious server source (one way to hunt for such callers is sketched at the end of this post).

    The people who concocted the embuggerance had long since left the company.

    The implication was that we had to maintain, back up, and rehearse DR for a system that was of no earthly use to man nor beast, simply because its absence would probably break something critical.
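
    One way to hunt for such callers, assuming the ancient system is (or sits in front of) a SQL Server database, is to watch the session DMVs while the mystery calls are happening. This is only a sketch; the database name is illustrative:

        -- Who is connected to the old database, and from where?
        SELECT  s.session_id,
                s.login_name,
                s.host_name,             -- client machine name
                s.program_name,          -- application name the client reports
                c.client_net_address,    -- client IP address
                s.last_request_end_time
        FROM    sys.dm_exec_sessions     AS s
        JOIN    sys.dm_exec_connections  AS c
                ON c.session_id = s.session_id
        WHERE   s.database_id = DB_ID(N'AncientSystemDB');  -- hypothetical name

    For calls too intermittent to catch live, an Extended Events session filtered on the same database can log the host and application names over a longer window.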

  • I wouldn't blame a botched deployment at a national financial services corporation on one single person. For enterprise-scope deployments, you definitely want more than one person involved in planning and building the deployment. It also helps to have a set of post-deployment SQL and PowerShell scripts to perform sanity checks (a rough sketch follows), and to have people review dashboards to confirm that none of the key performance metrics, both operational and financial, have changed in an unexpected way. Maybe that means a slight delay after each deployment and total yearly uptime drops from 99.99% to 99.9%, but that's better than having the deployment go live with inaccurate data.
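
    As a rough sketch of such a post-deployment sanity check, with a purely hypothetical table and thresholds (nothing here is from a real deployment), a script like this could compare a key metric against its recent baseline and fail loudly if it has drifted:

        -- Illustrative check: has hourly payment volume halved or doubled
        -- since the same hour last week? Table and limits are assumptions.
        DECLARE @lastHour int, @sameHourLastWeek int;

        SELECT @lastHour = COUNT(*)
        FROM   dbo.Payments                              -- hypothetical table
        WHERE  CreatedAt >= DATEADD(HOUR, -1, SYSDATETIME());

        SELECT @sameHourLastWeek = COUNT(*)
        FROM   dbo.Payments
        WHERE  CreatedAt >= DATEADD(HOUR, -169, SYSDATETIME())
          AND  CreatedAt <  DATEADD(HOUR, -168, SYSDATETIME());

        IF @sameHourLastWeek > 0
           AND (@lastHour < @sameHourLastWeek / 2 OR @lastHour > @sameHourLastWeek * 2)
            RAISERROR(N'Post-deployment sanity check failed: hourly payment volume moved from %d to %d.',
                      16, 1, @sameHourLastWeek, @lastHour);

    Run from a deployment pipeline or an Agent job, a failure here flags the release for review before the inaccurate data spreads.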

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I think this also highlights the need for built-in 'sanity checks' in all code, especially the critical stuff, that take emergency action and send notifications. The sanity check should have nothing directly to do with the code itself, but should respond to any bizarre result.

    I remember reading, quite a few years ago, about traffic lights, where showing inconsistent red/green displays is a serious no-no. There is an override mechanism, completely separate from the normal light switching, which responds to any inconsistent state by going into the appropriate blinking mode (a minimal sketch of that kind of independent check follows).
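
    In database terms, that kind of override can be as simple as an invariant check scheduled completely separately from the application code, for example in an Agent job. A minimal sketch, with hypothetical tables and a hypothetical invariant chosen only for illustration:

        -- Invariant: an order must never be marked shipped without a shipment row.
        IF EXISTS (SELECT 1
                   FROM   dbo.Orders AS o
                   WHERE  o.Status = 'Shipped'
                     AND  NOT EXISTS (SELECT 1
                                      FROM  dbo.Shipments AS s
                                      WHERE s.OrderId = o.OrderId))
        BEGIN
            -- "Emergency action": raise a severity-16 error so the monitoring
            -- job running this check can alert and react, independently of
            -- whatever code caused the inconsistency.
            RAISERROR(N'Inconsistent state: shipped orders exist with no shipment record.', 16, 1);
        END;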

    ...

    -- FORTRAN manual for Xerox Computers --

  • Dalkeith (7/26/2016)


    Everyone makes mistakes, but I contend that there will be fewer mistakes if you have a more flexible and knowledgeable workforce, whether that's sys admins and developers doing more operations or operators doing more sys admin and development. I'm all for it.

    I agree
