Note: Published in DZone 2014 Continuous Delivery research report
Note: A more up-to-date version of this article is here: What Is Continuous Delivery
“Our highest priority is to satisfy the customer through early and continuous delivery of valuable software” – Agile Manifesto
Since the publication of the seminal book “Continuous Delivery” by Dave Farley and Jez Humble in 2010, Continuous Delivery has become a widely discussed topic within the IT industry and an essential competitive advantage for technology companies such as Etsy, Facebook, and Netflix. But where did Continuous Delivery come from, what does it offer, and how does it work?
Beyond Continuous Integration
When Kent Beck published the inaugural eXtreme Programming paper in 1998, his proposal that developers should “integrate and test several times a day” was revolutionary. Enshrined in XP as the Integrate Often rule, the frequent integration of mainline code allows developers to discover integration problems rapidly and reduce development costs, and it has proven so successful that it is now a mainstream development practice known as Continuous Integration. However, as Continuous Integration is focussed upon development, it can only benefit a fraction of the end-to-end release process, which in the majority of IT organisations remains a high-risk, labour-intensive affair akin to the following:
Such a release process will likely involve extensive documentation, overnight scheduling, unversioned configuration management, ad hoc server management, and/or large numbers of participants. In this situation software releases inevitably become high-cost, high-risk events susceptible to human error, and given prominent failures such as the Knight Capital $440 million glitch there can be an understandable reluctance to release software frequently. However, there is always an opportunity cost associated with not delivering software, as recently highlighted by reports of Microsoft’s decade of e-book/smartphone opportunity costs. This poses a seemingly intractable business problem for many organisations – how can the risk of delivering software be reduced while new features are simultaneously delivered to customers faster?
Inspired by the Agile Manifesto principle “our highest priority is to satisfy the customer through early and continuous delivery of valuable software”, Continuous Delivery is a delivery method that advocates the creation of an automated Deployment Pipeline to release software rapidly and reliably into production. The goal of Continuous Delivery is to adopt a holistic end-to-end delivery perspective and optimise cycle time – the average time between production releases – so that development costs become lower, the risk of release failure becomes minimal, and customer feedback loops become smaller. The result is an automated release workflow similar to the following:
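Cycle time, as defined above, is simple to measure from an organisation's release history. The sketch below illustrates the calculation; the helper name `cycle_time_days` is hypothetical, not from any real tool:

```python
from datetime import date

def cycle_time_days(release_dates):
    """Average time between consecutive production releases, in days."""
    ordered = sorted(release_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return sum(gaps) / len(gaps)

# Four fortnightly releases give a cycle time of 14 days
releases = [date(2014, 1, 1), date(2014, 1, 15),
            date(2014, 1, 29), date(2014, 2, 12)]
print(cycle_time_days(releases))  # 14.0
```

Tracking this number over time shows whether pipeline and organisational changes are actually shrinking the feedback loop.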
In order to guide Continuous Delivery adoption within an organisation, Dave Farley and Jez Humble defined the following principles:
- Repeatable, Reliable Process – use the same deterministic release mechanism in all environments
- Automate Almost Everything – automate acceptance testing, deployment tasks, configuration management, etc.
- Keep Everything In Version Control – store all code, configuration, schemas etc. in source control
- Bring Pain Forward – shrink feedback loops for time-consuming, error-prone operations
- Build Quality In – fix defects in development as soon as they occur
- Done Means Released – do not consider features complete until released to production
- Everybody Is Responsible – align teams and individuals with the release process
- Continuous Improvement – continuously improve the people and technology involved
These principles are promoted by the Deployment Pipeline pattern, which has been described by Dave Farley and Jez Humble as “Continuous Integration taken to its logical conclusion” and lies at the heart of Continuous Delivery. The Deployment Pipeline is an automated implementation of the build/deploy/test/release cycle that enables self-serviced releases of any application version into any environment. In “Continuous Delivery”, Dave Farley and Jez Humble visualise a Deployment Pipeline as follows:
In the Deployment Pipeline, the commit stage is triggered by a source code or configuration change and is responsible for compiling code, running unit tests, running static analysis checks, and assembling the application binaries for the binary repository. A successful run automatically triggers the acceptance stage, which runs the automated acceptance tests against that application version. If the acceptance tests pass then that application version can progress to manual exploratory testing, automated capacity testing, and production usage. This is in accordance with the Deployment Pipeline best practices:
- Build Your Binaries Only Once – create immutable binaries to eliminate recompilation errors
- Deploy The Same Way – use the same automated release mechanism in each stage
- Smoke Test Deployments – assert deployment success prior to usage
- Deploy Into Production Copy – create a production-like test environment
- Instant Propagation – immediately make application version available to next stage upon stage success
- Stop The Line – immediately make application version unavailable to pipeline upon stage failure
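The stage behaviour described above – propagate the same immutable binary on success, stop the line on failure – can be sketched in a few lines. This is an illustrative sketch only, not a real CI server; the stage names and functions are hypothetical:

```python
# Minimal Deployment Pipeline runner: each stage is a function taking an
# application version and returning True on success. A passing stage
# propagates the version onward; a failing stage stops the line.

def run_pipeline(version, stages):
    """Run stages in order; return True only if every stage passes."""
    for name, stage in stages:
        if not stage(version):
            print(f"{name} failed for {version}: stopping the line")
            return False
        print(f"{name} passed for {version}: propagating to next stage")
    return True

# Hypothetical stage implementations; in practice these would compile
# code, run unit/acceptance tests, and publish immutable binaries.
def commit_stage(version):
    return True   # compile, unit tests, static analysis, publish binary

def acceptance_stage(version):
    return True   # automated acceptance tests against that same binary

pipeline = [("commit stage", commit_stage),
            ("acceptance stage", acceptance_stage)]
run_pipeline("1.0.42", pipeline)
```

Because every stage consumes the binary built once by the commit stage, the version that reaches manual testing and production is bit-for-bit the version that passed every earlier check.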
The creation of a Deployment Pipeline establishes a pull-based release mechanism that reduces development costs, minimises the risk of release failure, and allows the production release process to be rehearsed thousands of times. It provides visibility into the production readiness of different application versions at any point in time, and drives continuous improvement of the release process by identifying bottlenecks in the system.
Organisational Change and DevOps
While a Deployment Pipeline reduces costs and risk, the reality is that Continuous Delivery is vulnerable to Conway’s Law and dependent upon organisational change to achieve a significant reduction in cycle time. In the majority of organisations a Deployment Pipeline will be used by non-aligned siloed teams, meaning that lead times will be substantially inflated by handover delays between teams regardless of pipeline execution time.
Dave Farley and Jez Humble have repeatedly warned “where the delivery process is divided between different groups… the cost of coordination between these silos can be enormous”, and this is reflected in the Everybody Is Responsible and Done Means Released principles. Testing and operational tasks must become intrinsic development activities rather than discrete work phases, and the people involved must be integrated into the product development team in what is often a slow and painstaking process of change.
It is for this reason that the parallel growth of the DevOps philosophy has been welcomed by the Continuous Delivery community. DevOps has been defined by Damon Edwards as “aligning development and operations roles and processes in the context of shared business objectives”, and aims to increase communication and collaboration between IT divisions such as Development and Operations. Continuous Delivery and DevOps have evolved independently, but are interdependent upon one another – a Deployment Pipeline can act as a focal point for DevOps collaboration, and the DevOps integration of Agile principles with Operations practices can eliminate the handover delays between Development and Operations teams.
While Continuous Integration has become a mainstream development practice, Continuous Delivery goes much further and is poised to become a mainstream IT practice. Creating an automated Deployment Pipeline establishes a repeatable and reliable delivery mechanism that enables an organisation to increase product revenues by releasing new features to customers more frequently, without fear of failure. However, Continuous Delivery is far more reliant upon organisational change than technological change if it is to be truly successful.