Sunday, 29 July 2018

Implementing a Continuous Delivery Pipeline - The Basics


Part 1 of my posts around Continuous Delivery.

This post will cover the basics of Continuous Delivery (CD) and also the tools/technologies I have used to build a CD pipeline.

Let's start with a nice definition:

What is Continuous Delivery?

As detailed on Wikipedia here, Continuous Delivery is an "approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software with greater speed and frequency".

Sounds alright, doesn't it? But you may be all:

Why should I care?

If you want frustrating, big-bang, on-a-knife-edge release nights to be a thing of the past, you should strive to implement CD practices. The good news is that even if you don't get all the way there, the changes and good practices you implement along the way will see your team benefiting from more automation, quicker releases to your environments and confidence that what you are preparing to release is actually fit for purpose.
In my experience, no other approach has produced as much benefit for a software delivery team as implementing good CD practices.

If your interest is piqued, you may be thinking:

What are the basic tools/methods I will need?

In order to get closer to a CD approach, it is essential to have:
  • Source control - but you know that already, right? Preferably we want to use Git, the reasons for this we will cover later.
  • A "good" branching strategy - I am an advocate of feature branching, though this is not a one-size-fits-all solution. This post by Dave Farley is an excellent read on this and more - including how doing anything other than always checking into master negates true CI (I haven't worked in a team that has achieved this yet).
  • Continuous integration (CI) - when developers check in, this is our first "quality gate" in the entire release pipeline. We want to run unit tests to ensure that the developer's commit will not break the build of trunk/master.
  • Testing coverage - comprising unit tests, automated functional and component tests, plus manual exploratory testing.
  • The ability to compose release definitions, using something like TFS.
  • DevOps - you're going to need help with such joys as server access, AD, databases and other infrastructure related delights. You need a good relationship with your Ops representative, and DBA.
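To make the feature branching essential above concrete, here is a minimal sketch of the workflow in Git. The repository, file and branch names are illustrative only; the point is that branching and merging are single, cheap commands.

```shell
# Sketch of a feature-branch workflow (repo, file and branch names are illustrative)
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git checkout -q -b master
git config user.email "dev@example.com"
git config user.name "Dev"
echo "v1" > app.txt
git add app.txt
git commit -q -m "Initial commit on master"

# Creating a feature branch is a single command
git checkout -q -b feature/add-logging
echo "logging" >> app.txt
git commit -q -am "Add logging"

# Merge back to master once the work has passed its quality gates
git checkout -q master
git merge -q --no-ff -m "Merge feature/add-logging" feature/add-logging
git log --oneline
```

The `--no-ff` merge keeps an explicit merge commit, so the history still shows which commits arrived together as one feature.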
Let's go into more detail about quality gates.

Quality Gates

As I mentioned in the context of CI above, CD must have quality gates to work. Having these quality gates at the right points reduces wasted effort and builds confidence in your release artefact. I would set quality gates at these points:

  • Developer checks in code - unit tests ensure this new code won't break anything
  • Merge to trunk/master - run unit tests against the merged version of the master branch to ensure the new code doesn't break it. If it does, fixing it becomes a priority for the developer, if not the team. See such approaches as this.
  • Automated tests - be it functional or component, a failed automated test must mean the artefact under test is not fit for release and does not go onto the next stage.
  • Manual testing - if your testers find a bug, the artefact is not fit for release and goes no further.
  • Post release testing - if your post release tests find an issue, you may either view this as worth investigating with the opportunity to fix forward, or you may be so mature that you automatically roll back through automation.
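The fail-fast behaviour each of these gates shares can be sketched as a tiny script. `run_unit_tests` here is a hypothetical stand-in for whatever your real test runner is (e.g. `dotnet test` or an NUnit console run); the shape is the same for any gate: a non-zero exit stops the pipeline.

```shell
# Minimal sketch of a fail-fast quality gate.
# run_unit_tests is a stand-in for your real test runner,
# which should exit non-zero on any failing test.
set -e

run_unit_tests() {
  echo "All unit tests passed"
  return 0
}

if run_unit_tests; then
  echo "GATE PASSED: artefact may proceed to the next stage"
else
  echo "GATE FAILED: artefact is not fit for release" >&2
  exit 1
fi
```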
With these points in mind, let's look at how these gates fit into the release pipeline:

Continuous Delivery quality gates

See how this saves on wasted effort? Most obviously for the beloved manual testers - who won't have to waste time testing an artefact that is not fit for release as previous quality gates would have already flagged it as unfit.

Why Git?

Git helps to ensure that the above quality gate methods are conducted against the same artefact every step of the way. Git also makes a feature branching strategy faster and easier, as creating a new feature branch is a single command.
Additionally, we can set up various automation to run only in the context of master - for example, deploying code automatically to the development environment once code is merged to master and the build succeeds.
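That master-only automation can be sketched as a branch check in the build script. `BRANCH` stands in for whatever variable your CI server exposes for the branch being built, and `deploy_to_dev` is a hypothetical placeholder for your real deployment step.

```shell
# Sketch: run the automated deploy only in the context of master.
# BRANCH would be supplied by your CI server; deploy_to_dev is a
# hypothetical stand-in for the real deployment step.
BRANCH="${BRANCH:-master}"

deploy_to_dev() {
  echo "Deploying build to the development environment..."
}

if [ "$BRANCH" = "master" ]; then
  deploy_to_dev
else
  echo "Branch $BRANCH: build only, no automatic deploy"
fi
```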
Git encourages more frequent commits and merges. Why is that important?
 

Developer check in blast radius

To minimise the blast radius! 

We want developers to check in as frequently as possible. Use CI! It is meant to be continuous! And here we think of it as continuous feedback. Use the CI build definition to get fast feedback via unit tests (if you're doing TDD, you'll likely have a nice large suite of tests to rely on). Also, the more frequently developers commit, the less change there is to merge to master, and the less chance of conflicts. This is doubly important when working in large development teams that can see sets of developers working on the same component at once.

Tangent alert - branching strategy experiences

I must mention a particular branching strategy I have worked with before. We were using Team Foundation Server version control, and each component solution had a Dev and a Main branch. The idea was that devs would do their work against the Dev branch and release to the development environments from there. Only when they had tested their changes in that environment would they merge to Main. The release candidate would then be built from that Main branch.
This is an antipattern - ensure you promote at a binary level, rather than at a source code level. Why? By promoting at a binary level, you are increasing the testing exposure of that release candidate right from the get-go, and interrogating its fitness for release through the entire deployment pipeline. I see this as one of the pillar philosophies of CD. Promoting at source level, as we did with that Dev and Main model, means you are creating an entirely new release candidate after the dev has merged their code to Main - one which has not gone through the initial dev CI goodness, or been tested on the development environment. SAD!
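Binary-level promotion can be sketched in a few lines: build exactly once, stamp the artefact, then move that same file through each environment. Paths, names and the version number are all illustrative.

```shell
# Sketch of binary-level promotion: build ONCE, then move the exact same
# artefact through each environment (paths and names are illustrative).
set -e
work=$(mktemp -d)
mkdir -p "$work/artefacts" "$work/dev" "$work/uat" "$work/prod"

# Build exactly once; stamp the artefact with a version
version="1.0.42"
echo "binary built once for release candidate $version" > "$work/artefacts/app-$version.zip"

# Promotion is a copy of the same binary, never a rebuild
for env in dev uat prod; do
  cp "$work/artefacts/app-$version.zip" "$work/$env/"
done

echo "Promoted app-$version.zip unchanged through dev, uat and prod"
```

Every environment then holds a byte-identical release candidate, so every test along the pipeline interrogates the same binary.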

That's probably enough for a "The Basics" post around source control, branching strategies and quality gates. So let's move on.

Finally! So, you mentioned DevOps...

Yes. DevOps is essential to implementing CD. What is it? You can read about it here.
As you will be deploying to environments faster using automation, and will no doubt be looking to implement IaC, you are going to need a friendly Ops companion on your journey through permissions, access and related wonders. You simply won't be able to get very far without having an Ops resource to assist with this stuff.
The ideal working model here is to have your Ops resource sitting in the team (doesn't have to be permanently, but certainly encourage as much access as possible) pairing with devs as issues arise. If boundaries need to be maintained, the Ops resource should be the one to approve pull requests for changes to environment configuration (e.g. changes to firewall or AD).
It is important to achieve buy-in from your Ops resource, lest you feel like you are just irritating them whenever you need something done. They should be in the trenches with you every step of the way, striving for the same goals. Avoid a pull model; strive for collaboration and equality to such an extent that your Ops resource is coming up with solutions before they have become an issue for you - this is hugely beneficial, and a pleasure to work with.

...And the DBAs?

Everything I have mentioned for the Ops resource should also apply to a DBA resource. In my experience, DBA teams or individuals are often a bottleneck and under strain, so it is that much harder to drive a DevOps culture here. But again, the benefits are massive. Imagine having a DBA attending your refinement and planning sessions, sitting with and advising the team as database changes are coded, and being a part of your release. Contrast that with the typical model - fights for the permissions to release database changes, actually getting hold of someone to discuss it, terse emails around the principle of least privilege, etc. It doesn't have to be this way!

Ok, that's enough for an introduction. Hopefully you can see the benefits of CD, and have a rough idea of what is involved to get started. Check back soon for more detailed posts on PowerShell DSC, Pester tests, TFS release management, DevOps, database deployments and more.
