March 23, 2017 at 9:44 pm
Comments posted to this topic are about the item Change Approvals
March 23, 2017 at 11:58 pm
We have a standard... no one promotes their own code... period.
We also have another standard that identifies the development, testing, and production deployment processes including peer review, QA and UAT. For emergencies, we do the exact same thing but faster. Each person is on the spot to take action as soon as the code responsibility is passed to them.
Finally, code doesn't move until the stakeholder(s) and the Dev Manager sign off on the ticket. For emergencies, all of that has taken as little as 10 minutes depending on the needed change.
The good part is that, because of the process we stick to, we might have only 1 or 2 emergencies a year... usually, it's none.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 24, 2017 at 2:52 am
I 100% agree with Jeff...nobody should deploy their own code.
Sadly, the reality I've encountered is that code release is an afterthought. DevOps is a buzzword, but many organisations still deploy manually (and badly). Even when an automated process is in place, it is not actively maintained, leading to inevitable failures as the code base evolves but the automated deployment does not. As you might imagine, I see organisations encounter many more emergencies than Jeff does!
Happily, I've seen an improvement over the past two years and many organisations are now actively pursuing a DevOps strategy...I can't help but feel we're still some years away from this being the norm though.
March 24, 2017 at 6:27 am
I came up with a DB deployment strategy years ago when we were a 3-person team, and we still use it now that the team has grown to about 15 or so.
We have a QA environment that we make like live (copy down from live) at the start of each sprint test cycle. It is tested before deployment (old code, old DB). We generate deployment scripts from source control against that environment and run them. Testing is now old code, new DB (as that will be the situation we have on live come final deployment). Code is then deployed (new code, new DB testing). The scripts are saved - and we may end up with multiple scripts per database as bug fixes or late work items get added to QA.
Once the testing is complete, we make the UAT environment like live, and again test old code/old DB then deploy all the accumulated scripts to that and test old code/new DB, then deploy code and test new/new.
Following that we deploy all those scripts to Stage, which has not been made 'like live' and instead exists in the same update-cycle as live to give one last check that the scripts will not cause errors on final deployment to live, which is the final step.
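(An aside on the "scripts are saved" bookkeeping above: one way to make the accumulated-scripts idea explicit is a small deployment-log table that each saved script checks before running. A minimal sketch in T-SQL; the table and script names are hypothetical, not part of the poster's actual process.)

-- Hypothetical deployment-log pattern: each saved script records itself,
-- so you can see which of the accumulated scripts have already been run
-- in a given environment (QA, UAT, Stage).
IF OBJECT_ID('dbo.DeploymentLog', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.DeploymentLog
    (
        ScriptName varchar(200) NOT NULL PRIMARY KEY,
        AppliedAt  datetime     NOT NULL DEFAULT (GETDATE())
    );
END;

-- At the top of each generated script: skip if it has already been applied.
IF EXISTS (SELECT 1 FROM dbo.DeploymentLog WHERE ScriptName = 'Sprint42_Script03')
BEGIN
    RAISERROR('Sprint42_Script03 has already been applied; skipping.', 10, 1);
    RETURN; -- exits the current batch only
END;

-- ... deployment DDL goes here ...

INSERT dbo.DeploymentLog (ScriptName) VALUES ('Sprint42_Script03');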
We shortcut the process for emergency patches - as QA will already be in the 'next sprint' state, we test patches on UAT and follow the process from there.
This has worked well for a few years now. I know TFS, Red Gate tools, etc. offer a 'deploy from source code to any environment' option, but I/we prefer the knowledge that we have thoroughly tested the exact deployment scripts for accuracy and functionality, and that the deployment process itself has been tested multiple times too. Using these tools could simplify deployments, but at the cost of some of that security.
March 24, 2017 at 6:36 am
If DevOps is cooperation between Developers and Operations admins, then would a one-person IT department by definition be DevOps? 🙂
Does a one-person IT department, in a company that gives no !#^$s about QA (because *nobody* has time to do more than complain about bugs), actually need a deployment plan? 🙂
In a one-person IT department, where automated code tests are generally the only tests (other than live fire from users in production), does continuous integration make any sense?
I can't be the only lone wolf in the readership. I read about all the lovely support given by big IT departments to all the latest buzz-word techniques, and I wonder what it must be like to have people dedicated to doing nothing but testing code or automating scripts for deploying 5-changes-per-day code, and I sigh, then look back at my looming deadlines...
Even small IT departments (perhaps < 5 employees) probably don't have the kind of resources needed for adequate QA *testing*, much less the rest of it.
I'd be interested in other lone wolves' opinions on this topic. Speak up! :hehe:
March 24, 2017 at 7:12 am
The developer (typically an app developer, but sometimes a DBA if it's something like a stored procedure or SSIS package) will create a Change Order with script(s) attached and a completed form containing details like who, what, when, where, why, and how. There is a deployment date/time (it can't be the same business day), and it must then get sign-off from management. The deployment scripts get executed through Octopus, DbUp, or the DBA. If the deployment date is the same business day, then it's an Emergency Change Order, and permission must be acquired from management prior to submission, which also involves more scrutiny and possibly a meeting for discussion.
Speaking in general, not in regard to my current employment, when a developer hypothetically pushes their own code outside the normal change order process, it's called a BlackOps deployment. I can neither confirm nor deny the existence of any such deployments.
"Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
March 24, 2017 at 7:14 am
As with any business process, implementing changes can be under-managed or over-managed. Several years before I left a DBA position, I discovered that a particular set of about a dozen self-requested online reports on volume performance by various entities nationwide contained a SQL coding error which made them consistently produce erroneous statistical information. The fix was fairly simple because of the commonality across the set.
The company practice was to release code formally to a QA group for testing before release. This process was followed and the code was released to QA. But it never seemed to be convenient to include these fixes in code releases. As a result, the fixes that I checked on several years later had never been implemented for the life of the application.
This was simply a case of IT middle management deciding to avoid implementation 'risk' at the expense of valid information. It seemed that if it ran without failing, it was better to leave it that way than worry about accuracy.
Rick
Disaster Recovery = Backup ( Backup ( Your Backup ) )
March 24, 2017 at 7:31 am
mike.mcquillan - Friday, March 24, 2017 2:52 AM
I 100% agree with Jeff...nobody should deploy their own code.
Sadly, the reality I've encountered is that code release is an afterthought. DevOps is a buzzword, but many organisations still deploy manually (and badly). Even when an automated process is in place, it is not actively maintained, leading to inevitable failures as the code base evolves but the automated deployment does not. As you might imagine, I see organisations encounter many more emergencies than Jeff does!
Happily, I've seen an improvement over the past two years and many organisations are now actively pursuing a DevOps strategy...I can't help but feel we're still some years away from this being the norm though.
Heh... don't mistake what I said for support of what people call "automated deployments" for database code. It may work just fine and support the notion of multiple daily builds for front-end code, but I've never actually seen it work "as advertised" for database code except in the simplest of cases, especially if you have an environment where the databases in Dev, QA, UAT, Staging, and Prod are all named differently to keep people with high privs from making the "Oops, didn't mean to promote that to prod" mistake. Yeah... I'm working on privs, as well. 😉
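(A minimal sketch of how that differently-named-database scheme can be enforced at the top of a deployment script; the database name here is hypothetical. SET NOEXEC ON makes SQL Server compile but not execute everything that follows, so the rest of the script becomes a no-op on the wrong target.)

-- Hypothetical guard: fail fast if the script is run against the wrong database.
DECLARE @db sysname = DB_NAME();
IF @db <> N'MyApp_QA'
BEGIN
    RAISERROR('This script targets MyApp_QA but is running in %s - aborting.', 16, 1, @db);
    SET NOEXEC ON; -- compile but skip everything that follows
END;

-- ... deployment DDL goes here ...

SET NOEXEC OFF; -- re-enable execution at the very end of the script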
I also don't believe that DevOps is a methodology. To me, it's a culture that supports interaction and communication between seemingly disparate groups that each touch upon a project in one way or another. It's nothing new to me, either. I've been fortunate that most companies I've worked for (including the U.S. Navy) easily embrace interaction and communication, because it enables people and gets the job done so much faster than an "ivory tower" or "silo" culture does.
--Jeff Moden
Change is inevitable... Change for the better is not.
March 24, 2017 at 7:44 am
We have a fairly standard process: separate environments for development, quality assurance, user acceptance, and production, with appropriate approvals between each level and a change management meeting occurring before UA changes are promoted to PROD. The problem I have right now is the way the promotion of changes between levels happens where I work.
The developers and QA people use source control merges to promote code between the DEV, QA, UA, and PROD branches in TFS, then deploy the result of the merged code to the next level. What has happened a number of times is that the merge changes the code in unintended ways, so occasionally we end up deploying code, even to PROD, that has bugs or is just plain wrong. In such cases, on the database side, I've typically been able to get people to promote the T-SQL code that is in the UA environment, since that is what the users tested and accepted. I wish it were easier, but how things are going to be deployed always seems to be an afterthought, and I scramble around the day before trying to ensure things will go as smoothly as possible in my manual process.
March 24, 2017 at 7:59 am
I've worked for the same company for over 10 years, and though I've never had DBA in my job title, I do all of the database work for our in-house applications - from setting up databases and permissions to writing stored procedures and setting up SQL jobs - plus all of the SSIS work. I also do application development. In some respects, I know more about SQL Server than the guy who has DBA in his title.
Until about 3 years ago, we didn't have a formal process for releasing changes, and I routinely promoted changes into production myself. Now we have a whole change management process, and I'm no longer allowed to promote changes myself. As part of it, I have to write up deployment instructions for the DBA. It takes me longer to write those instructions than it would to deploy the change myself, and I've lost track of how many times the DBA has put incorrect values into SSIS configurations. It's frustrating, but I'm adapting to the new way of living. :-)
March 24, 2017 at 8:17 am
mike.mcquillan - Friday, March 24, 2017 2:52 AM
I 100% agree with Jeff...nobody should deploy their own code.
Sadly, the reality I've encountered is that code release is an afterthought. DevOps is a buzzword, but many organisations still deploy manually (and badly). Even when an automated process is in place, it is not actively maintained, leading to inevitable failures as the code base evolves but the automated deployment does not. As you might imagine, I see organisations encounter many more emergencies than Jeff does!
Happily, I've seen an improvement over the past two years and many organisations are now actively pursuing a DevOps strategy...I can't help but feel we're still some years away from this being the norm though.
Many years.
March 24, 2017 at 8:25 am
roger.plowman - Friday, March 24, 2017 6:36 AM
If DevOps is cooperation between Developers and Operations admins, then would a one-person IT department by definition be DevOps? 🙂
Does a one-person IT department, in a company that gives no !#^$s about QA (because *nobody* has time to do more than complain about bugs), actually need a deployment plan? 🙂
In a one-person IT department, where automated code tests are generally the only tests (other than live fire from users in production), does continuous integration make any sense?
I can't be the only lone wolf in the readership. I read about all the lovely support given by big IT departments to all the latest buzz-word techniques, and I wonder what it must be like to have people dedicated to doing nothing but testing code or automating scripts for deploying 5-changes-per-day code, and I sigh, then look back at my looming deadlines...
Even small IT departments (perhaps < 5 employees) probably don't have the kind of resources needed for adequate QA *testing*, much less the rest of it.
I'd be interested in other lone wolves' opinions on this topic. Speak up! :hehe:
A few comments, since I have worked like this.
DevOps isn't just cooperation between Dev and Ops. It's more and goes to the idea of getting better over time. I think many one-person shops either get better, or they get worse. If they're getting better and looking to improve how they build the software, then they are following the principles. That's the quick answer.
Does CI make sense? Sure, because you may forget to run automated tests. Having a system that ensures all tests run when you commit is a good idea; it's far, far too easy for most humans to forget test execution when they get busy. Most of these busy shops don't need dedicated QA people to test things. Lots of the Visual Studio/TFS code written isn't tested by humans. Some is, because they want other eyes to look for visual issues, but in some of these companies, like MS, Google, etc., they really do end up with lone developers building, testing, and releasing code. They may get code reviews from others, but with automated testing, they work alone on some features.
The difference is that there are hundreds or thousands of them, so lots of features get released. Having 50 releases a day usually means you have more than 50 developers; not every developer releases every day. A developer might release four times in a day: they change something, then find a bug or get feedback that the feature needs to be slightly altered, and release again.
No one has the resources for adequate QA testing, really. Our software is too complex. What a good DevOps shop does, whether one person or 1,000, is continue to add tests to cover new cases, throw away old tests that don't make sense, and regularly improve both the way they work and the code they write.
March 24, 2017 at 8:29 am
Where we work, nobody releases their own code changes to live. With source control, we have a "don't release your own changes to master" policy, but it does happen by accident sometimes.
But for things going live, if it is a SQL change, then a DBA must review it, the supervisor must approve the change going live, and then a DBA must release it. When I say DBA, 99% of the time that is me, as I have a rough understanding of most of our systems, so I know how to release most of the stuff with minimal downtime and minimal risk.
With non-SQL changes, it is a similar process, except that any developer (other than the one who made the change) can review and release it. It still needs the supervisor's approval, though. Those often fall onto me as well, as I know how to release most non-SQL things too.
Our review process is:
If the script is NEW, we review the entire thing and look for things that could be optimized without compromising the code (like: can the cursors be removed?). If the script is a change to an existing one, we compare the launch and rollback scripts to make sure the changes described in the boilerplate match what the code changes do. The last 2 things I check are:
1 - Is the code re-runnable? That is, does it drop the objects if they exist before creating them?
2 - Are permissions granted on the new objects? This is something that has caused releases to fail before.
Our supervisor prefers the DROP and CREATE method for altering objects wherever reasonable (e.g., adding a column to a table would result in data loss if we did a drop and create, so there it's OK to just use an ALTER, but stored procedure changes are all drop and create). A minimal sketch of that pattern follows below.
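(The sketch covers the re-runnable drop-and-create pattern and the permissions re-grant from check number 2; the object and role names are hypothetical.)

-- Re-runnable: drop the procedure if it already exists, then recreate it.
IF OBJECT_ID('dbo.usp_GetOrders', 'P') IS NOT NULL
    DROP PROCEDURE dbo.usp_GetOrders;
GO

CREATE PROCEDURE dbo.usp_GetOrders
    @CustomerId int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;
END;
GO

-- Dropping an object drops its grants with it, which is exactly the
-- kind of thing that fails a release - so re-grant every time.
GRANT EXECUTE ON dbo.usp_GetOrders TO AppRole;
GO

(On SQL Server 2016 SP1 and later, CREATE OR ALTER avoids the drop entirely and preserves existing grants, which sidesteps check number 2 for altered objects.)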
Recently, we (i.e., me) decided our release documentation was out of date, so I updated it. My next documentation to update is the SQL coding standard. Our document is decent, but it is lengthy without benefit, and there are a few redundancies I'd like to clean up.
Plus, after doing so many releases, I've learned a bunch of stuff that slows down go-live - things like having 100 different SQL scripts to run (and save the results of) on a single database vs 1 script with 100 new objects in it. The first one is horribly slow, as you need to open each script, run it, and save the results. AND in the event that 1 of the 100 scripts fails, you could have 99 scripts to roll back. Plus, if you need to verify the scripts (launch and rollback), loading up 2 giant scripts vs 200 tiny ones is a lot easier in tools like WinMerge - as long as the objects are defined in the same order in both the launch and rollback. If not, that can get painful.
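(One way to reduce the "99 scripts to roll back" risk is to wrap the consolidated script in a single transaction, so a failure part-way through rolls everything back automatically. A minimal sketch; the object names are hypothetical, and it relies on the fact that most DDL in SQL Server is transactional.)

SET XACT_ABORT ON; -- any run-time error aborts and rolls back the transaction
BEGIN TRANSACTION;

-- Tables, indexes, and grants can all share the transaction.
CREATE TABLE dbo.NewLookup
(
    LookupId   int          NOT NULL PRIMARY KEY,
    LookupName varchar(100) NOT NULL
);

-- CREATE PROCEDURE must be alone in its batch, so wrap it in EXEC
-- if it has to share the transaction with other statements.
EXEC ('CREATE PROCEDURE dbo.usp_GetLookups AS
       SELECT LookupId, LookupName FROM dbo.NewLookup;');

GRANT EXECUTE ON dbo.usp_GetLookups TO AppRole;

COMMIT TRANSACTION;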
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there is negative side effects before making changes to live!
I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.
March 24, 2017 at 8:30 am
Jeff Moden - Friday, March 24, 2017 7:31 AM
I also don't believe that DevOps is a methodology. To me, it's a culture that supports interaction and communication between seemingly disparate groups that each touch upon a project in one way or another. It's nothing new to me, either. I've been fortunate that most companies I've worked for (including the U.S. Navy) easily embrace interaction and communication, because it enables people and gets the job done so much faster than an "ivory tower" or "silo" culture does.
It's not a methodology. DevOps isn't prescriptive.
It's what you said: a culture following principles. Core, simple principles of getting better over time. Some of it is common sense, some is the desire to be better, some comes from Lean, some from the Toyota Production System, and some from other areas.