John Hill, Automation Engineer - Ansible
Out of the Continuous Delivery movement, defined by Jez Humble and David Farley in their book Continuous Delivery (2010)
A deployment pipeline is, in essence, an automated implementation of your application's build, deploy, test, and release process.
Pipelines likely came out of Operations because they were the last ones holding the baton.
GitLab Pipeline Graph
GitLab Pipeline Code (yaml)
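The YAML behind that graph isn't reproduced in these notes; a minimal sketch of a GitLab CI pipeline with build, test, and deploy stages (job names and commands are placeholders, not the ones from the talk) might look like:

```yaml
# .gitlab-ci.yml -- minimal sketch; job names and commands are placeholders
stages:
  - build
  - test
  - deploy

build-app:
  stage: build
  script:
    - make build            # placeholder build command

unit-tests:
  stage: test
  script:
    - make test             # placeholder test command

deploy-staging:
  stage: deploy
  script:
    - ./deploy.sh staging   # placeholder deploy script
  only:
    - master
```

Each stage runs only after the previous one succeeds, which is what produces the left-to-right pipeline graph.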
CircleCI (Workflows)
CircleCI (Workflows) (yaml)
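The CircleCI YAML from the slide isn't captured here either; a comparable sketch using CircleCI 2.0 workflows to sequence placeholder jobs might look like:

```yaml
# .circleci/config.yml -- sketch; jobs and commands are placeholders
version: 2
jobs:
  build:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - run: make build
  test:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - run: make test
  deploy:
    docker:
      - image: circleci/python:3.6
    steps:
      - checkout
      - run: ./deploy.sh
workflows:
  version: 2
  build-test-deploy:
    jobs:
      - build
      - test:
          requires:
            - build
      - deploy:
          requires:
            - test
          filters:
            branches:
              only: master
```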
AWS CodeDeploy, AppVeyor, CodeShip, Openshift.io, Spinnaker, Shippable, GoCD, CDMove, TravisCI (beta), Drone, Heroku, Concourse CI, Lambda CD, Bitbucket
Choose your audience
Test results are feedback
Not everyone needs the same level of feedback
RACI (Responsible, Accountable, Consulted, Informed)
Local developers aren't the only ones making changes:
Other feature branches or repositories
Open source / upstream contributors
Infrastructure changes (server, DB upgrades)
Libraries and dependencies (npm packages, Python versions)
Breakages can come from anywhere -- even ourselves
(Pipelines as Code)
Not every test can run on every change.
Developers don't want to wait 12 hours to merge a PR
Maybe we don't run ALL the tests immediately
Run the “most valuable” and fastest tests earlier in the pipeline.
"Value" is subjective
Let the most important tests run first and fail fast to get faster feedback
Run the more expensive tests as a scheduled pipeline (hourly, nightly, weekly); see the sketch below.
Keep a record of the commits that have landed between scheduled runs so results can be associated with the changes that caused them.
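As a sketch of the tiering idea, assuming GitLab CI (job names and commands are placeholders): fast tests run on every push and are excluded from scheduled pipelines, while the expensive suite runs only when a schedule triggers the pipeline.

```yaml
# Fragment of .gitlab-ci.yml -- placeholder jobs illustrating tiered tests
unit-tests:
  stage: test
  script:
    - make unit-test        # fast tests: run on every change for quick feedback
  except:
    - schedules

full-regression:
  stage: test
  script:
    - make regression-test  # expensive suite: run only on the hourly/nightly schedule
  only:
    - schedules
```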
Onramp and Test Onboarding
If the input and the output are linked, then the only thing that changes is the result of the test.
Need to visualize all the pipelines
View health, view status, ...
Essentially a pipeline becomes another stage
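One concrete way this shows up, assuming a GitLab version with multi-project trigger jobs (the downstream project path below is a placeholder): a trigger job makes an entire downstream pipeline behave like one more stage of the upstream pipeline, so its health and status roll up into the same view.

```yaml
# Sketch: a downstream project's pipeline run as a stage of this pipeline
stages:
  - build
  - downstream

build-app:
  stage: build
  script:
    - make build

integration-pipeline:
  stage: downstream
  trigger:
    project: my-group/integration-tests   # placeholder downstream project
    branch: master
```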
Vendor Solutions
Jenkins
*crickets*
We could leverage Data Engineering Tools
Look towards Kubernetes to figure this out. https://prow.k8s.io/
Need to move away from Jenkins and GitLab
Today: Auto-test Library Channel dependencies
https://github.com/fabric8-updatebot/updatebot
Logical Extension: Declare more than just upstream libraries
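A hypothetical sketch of that extension (this is not updatebot's actual schema, just an illustration): a repository declares every upstream input it cares about, so a change to any of them can fan out into this repository's test pipeline.

```yaml
# Hypothetical dependency declaration -- illustrative only, not a real tool's format
upstream:
  libraries:
    - name: requests            # placeholder library
      ecosystem: pypi
  repositories:
    - https://github.com/example-org/shared-configs   # placeholder repo
  infrastructure:
    - postgres: "9.6"           # DB version we run against
    - base-image: centos:7      # OS/base image we build on
```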