Continuous delivery is how your code base gets sent to different environments, like QA or Production. Once continuous integration is in place, setting up continuous delivery is the expected next step in most of the industry. It’s rare to see the two separated, though some places do keep manual deployments for their own reasons. With CI/CD together, you don’t have to worry about manual deployments because the pipeline automates everything.
When your team is finished with code changes, all of their branches will be ready to merge into develop or whatever other branch you use. As soon as all of the pull requests are approved, you can immediately start your CI/CD pipeline and you won’t have to touch another thing unless there is an issue. The continuous integration part of the pipeline creates a build artifact and hands it to the continuous delivery part, which then deploys the artifact to the selected environment.
This is the general way you will see CI/CD pipelines implemented. A lot of details go into creating these pipelines, which means not every pipeline is equal. You can do the same job numerous ways, and some will naturally be better than others. There are certain attributes you want your CI/CD pipeline to have because they improve the quality of your releases and give you, as well as the business side, the confidence to release changes to users more often. Here are a few attributes of a good CI/CD pipeline.
It’s fast

A CI/CD pipeline can have a lot of moving parts. There’s unit testing, integration testing, creating the build artifact, and more. Even so, it should not take all day for your code to get through the pipeline. Having a fast pipeline means you can fit more deployments into your day and you can find and fix problems with your code more efficiently. It shouldn’t take more than a few minutes for your integration to finish or to send you feedback on your code.
In order to get that fast feedback, we typically put any unit tests or code linters as close to the beginning of the pipeline as possible. That way if there is a problem with your code, you don’t have to wait for much of the pipeline to execute. You get an automated message that will tell you the build failed or something similar and you can go check the continuous integration logs. Finding out about issues early is one of the best ways to improve the speed of your pipeline.
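The fail-fast ordering described above can be sketched in a few lines of Python. The stage names, their relative costs, and the pass/fail results here are invented for illustration; a real pipeline would run these as jobs in your CI system.

```python
# Hypothetical sketch of fail-fast stage ordering: the cheapest checks
# (lint, unit tests) run first so a bad commit fails in seconds instead
# of waiting for the slow stages at the end.

def run_pipeline(stages):
    """Run stages in order; stop at the first failure and report where."""
    for name, step in stages:
        if not step():
            return f"FAILED at {name}"
    return "SUCCESS"

# Order stages from cheapest to most expensive.
stages = [
    ("lint", lambda: True),                 # seconds
    ("unit tests", lambda: True),           # seconds to minutes
    ("build artifact", lambda: True),       # minutes
    ("integration tests", lambda: False),   # the slowest stage runs last
]

result = run_pipeline(stages)  # stops at the failing stage, skips the rest
```

If the lint stage had failed instead, the run would stop there and the developer would get feedback without waiting for the build or integration tests at all.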
It uses the same processes and artifacts
After the continuous integration part of your pipeline generates that new build artifact, it’s time to let the continuous delivery part take over. The artifact from the build can be packaged source code or even a Docker image. Regardless of its format, you should use the same artifact across all of your environments. That way, you can test your code in QA or Stage and deploy it to Production with more confidence that it will work, because it’s the exact same artifact you tested with.
My favorite part of continuous delivery is that it keeps the deployment process consistent. There’s no single person you depend on for deploys because there are no secret or manual steps. You can write a script that moves the artifact to the right location in the cloud or on your server, and it will execute the same way every time. That saves you the pain of remembering every step of a deploy and removes a good bit of potential human error.
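A script like the one described might look something like this sketch. The root directory, environment names, and version number are all made up; the point is that one immutable artifact is promoted through every environment, with only the destination changing.

```python
import shutil
import tempfile
from pathlib import Path

# Hypothetical sketch of promoting ONE build artifact through every
# environment. Only the destination changes, never the artifact itself.

ROOT = Path(tempfile.mkdtemp())  # stand-in for your server or cloud storage

def deploy(artifact: Path, environment: str) -> Path:
    """Copy the immutable artifact into the target environment's directory."""
    dest_dir = ROOT / environment
    dest_dir.mkdir(parents=True, exist_ok=True)
    dest = dest_dir / artifact.name
    shutil.copy2(artifact, dest)  # same bytes in QA, Stage, and Production
    return dest

# Build once...
artifact = ROOT / "app-1.4.2.tar.gz"
artifact.write_bytes(b"build output")

# ...then promote the identical file through each environment.
qa_copy = deploy(artifact, "qa")
prod_copy = deploy(artifact, "production")
```

Because every environment receives the same script-driven copy of the same file, a deploy on Tuesday works exactly like a deploy on Friday.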
You can deliver any version of the code at any time
If you’ve ever had to deploy major changes on a Friday afternoon, you can appreciate this. As long as you have an artifact, you can deploy it. Take caution if there have been any data changes between artifact versions, though: major issues pop up when your code references an older or newer version of the database schema. Most of the time you’ll only keep two or three artifacts on hand, but you can always create a new build from a previous version of the code.
So if you end up having some weird deploy issues on a Friday night and the business side calls you panicking, you can confidently tell them you will roll back the changes and look into the problem on Monday. Having this capability gives everyone the ability to deploy changes without many worries, because if there’s a problem, they can redeploy the last working version.
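The Friday-night rollback scenario can be sketched as below. The version numbers and artifact names are invented; the idea is simply that keeping the last few artifacts on hand makes redeploying the previous version a one-step operation.

```python
# Hypothetical rollback sketch: keep recent artifacts on hand so any
# previous version can be redeployed immediately.

artifacts = {}   # version -> artifact (a path, image tag, tarball, ...)
history = []     # deploy order, newest last

def deploy(version, artifact):
    """Record the artifact and mark this version as the current deploy."""
    artifacts[version] = artifact
    history.append(version)
    return version

def rollback():
    """Redeploy the last working version instead of debugging at midnight."""
    history.pop()                  # drop the bad release from the top
    last_good = history[-1]        # the previous deploy is still on hand
    return deploy(last_good, artifacts[last_good])

deploy("1.4.1", "artifact-1.4.1")
deploy("1.4.2", "artifact-1.4.2")  # the Friday-night deploy goes wrong
current = rollback()               # back on 1.4.1 until Monday
```

Nothing here requires the person rolling back to know what broke; they only need the previous artifact to still exist.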
There’s little manual interaction
Automation is what makes continuous integration and deployment worth the extra effort. It might take some time to build a good pipeline, but minimizing manual interaction is one of the most crucial things to get right. No one should have to do more than one or two manual steps, like clicking a button. Once a developer pushes a commit and their pull request gets approved, the changes should deploy on their own. There are no other steps: you commit your code, it gets approved, and the rest is done for you.
The goal of CI/CD is to give the business side control over when features get released without needing to go through the development team. Do you really want them to have to remember the deploy process? Can you even remember the deploy process and do it with absolutely no errors each time they ask you? Most of us have a story about the one time we messed up the deploy and it’s usually because we forgot something. A good CI/CD pipeline will make sure that you don’t have to worry about forgetting something because it has everything in place to run automatically.
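One way to picture "no manual steps" is a single entry point that the CI system calls after a merge, with nothing that prompts a human. The stage functions below are stand-ins for real build, test, and deploy steps, and the artifact name is invented.

```python
# Hypothetical zero-touch release: once a pull request is merged, the
# CI system calls on_merge() and every step runs without human input.

def build():
    return "artifact-abc123"  # stand-in for a real build step

def run_tests(artifact):
    return True               # stand-in for the test suite

def release(artifact):
    return f"deployed {artifact}"  # stand-in for the deploy step

def on_merge():
    """The only manual steps were the commit and the PR approval."""
    artifact = build()
    if not run_tests(artifact):
        raise RuntimeError("release stopped: tests failed")
    return release(artifact)

result = on_merge()  # runs end to end with nothing left to remember
```

Because the whole release lives in one automated path, there is no deploy checklist for anyone to misremember.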
Building a good CI/CD pipeline is worth the effort because of how much time and swearing it saves you. You won’t have to hunt down bugs in Production because you already found them in QA, so you just build a new artifact with the fixes. It can take a lot of pressure off of developers, but I wonder if it adds any. Have you ever felt like you are being pushed to get more done in a shorter amount of time because of CI/CD?
Hey! You should follow me on Twitter because reasons: https://twitter.com/FlippedCoding