Carlos María Cornejo Crespo
Software craftsman. Big on DevOps and CI/CD.
Carlos Cornejo 2016
Problems and challenges
History and evolution towards containers
Reorganizing roles and responsibilities
Docker ecosystem
Docker first steps
Three types of virtualization technologies:
Emulation: hardware (CPU, RAM, disk, etc.) is emulated
Virtualization: virtualization on the same hardware, e.g. VMware, VirtualBox, Xen
Containers (application containers): an execution environment is virtualized, e.g. Solaris Zones, Linux LXC, Docker
VMs vs containers
Containers are isolated, but share the OS and, where appropriate, bins/libraries.
Containers are an encapsulation of an application with its dependencies.
Containers are fundamentally changing the way we develop, distribute, and run software.
Developers can build software locally, knowing that it will run identically regardless of host environment.
Containers share resources with the host OS, which makes them an order of magnitude more efficient.
Containers can be started and stopped in a fraction of a second.
Applications running in containers incur little to no overhead compared to applications running natively on the host OS.
The portability of containers has the potential to eliminate a whole class of bugs caused by subtle changes in the running environment
—it could even put an end to the age-old developer refrain of
“but it works on my machine!”
Containers are an old concept.
For decades, UNIX systems have had the chroot command
that provides a simple form of filesystem isolation. FreeBSD, Solaris, etc..
Virtuozzo's container technology came first in 2001, then Google contributed control groups (cgroups), and the Linux Containers (LXC) project started in 2008.
Finally, in 2013, Docker brought the final pieces to the
containerization puzzle, and the technology began to enter the mainstream.
Eliminate inconsistencies between development, test and production environments.
Support segregation of duties.
Significantly improve the speed and reliability of continuous integration and continuous deployment systems.
Because containers are so lightweight, they address the performance, cost, deployment, and portability issues normally associated with VMs.
Docker Engine: the core component of the Docker ecosystem; it manages the life cycle of a container.
Docker Machine: provides a CLI to quickly provision a Docker host.
Docker Machine installs and configures Docker hosts on local or remote resources.
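A minimal sketch of provisioning a local Docker host with Docker Machine (the machine name "default" and the VirtualBox driver are just example choices):
user@local# docker-machine create --driver virtualbox default
user@local# eval "$(docker-machine env default)"    # point the docker CLI at the new host
user@local# docker info                             # now talks to the Machine-provisioned host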
Describe your stack with one file.
Docker Compose is a tool for building and running applications composed of
multiple Docker containers.
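A hypothetical docker-compose.yml for a two-container stack (the service names and port are assumptions, not from the talk), plus the command that brings it up:
# docker-compose.yml (hypothetical example)
version: '2'
services:
  web:
    build: .
    ports:
      - "5000:5000"
  redis:
    image: redis
user@local# docker-compose up -d    # builds (if needed) and starts the whole stack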
Store and distribute your docker container images.
Docker Hub makes it easy: hub.docker.com
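Pushing and pulling images is plain docker CLI; the user and repository names below are hypothetical:
user@local# docker login
user@local# docker tag efc/fo-web myuser/fo-web:1.0    # tag under your Docker Hub namespace
user@local# docker push myuser/fo-web:1.0
user@local# docker pull myuser/fo-web:1.0              # anyone with access can now pull it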
Docker Swarm: Docker's orchestration, clustering and management solution.
It exposes several Docker Engines as a single virtual Engine.
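A sketch of building a classic Swarm cluster with Docker Machine (it assumes a discovery token already generated with docker run --rm swarm create; machine names are examples):
user@local# docker-machine create -d virtualbox --swarm --swarm-master \
                --swarm-discovery token://<token> swarm-master
user@local# docker-machine create -d virtualbox --swarm \
                --swarm-discovery token://<token> swarm-node-01
user@local# eval "$(docker-machine env --swarm swarm-master)"
user@local# docker info    # the cluster now appears as a single virtual Engine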
Anatomy of a Dockerfile
FROM java:8-jre
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
ENV TOMCAT_TGZ_URL https://www.apache.org/tomcat/tomcat-latest.tar.gz
RUN set -x \
&& curl -fSL "$TOMCAT_TGZ_URL" -o tomcat.tar.gz \
&& tar -xvf tomcat.tar.gz --strip-components=1 \
&& rm tomcat.tar.gz*
EXPOSE 8080
CMD ["catalina.sh", "run"]
Docker architecture
How images get built
Docker + Jenkins
We build new images from Dockerfiles via the docker build command:
user@local# docker build -t efc/fo-web .
#
# Super simple example of a Dockerfile
#
FROM ubuntu:latest
MAINTAINER Carlos M. Cornejo "ccornejo@blah.com"
RUN apt-get update
RUN apt-get install -y python python-pip wget
RUN pip install Flask
ADD hello.py /home/hello.py
WORKDIR /home
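The Dockerfile above defines no CMD, so the container has to be told what to run; a minimal way to start the app (assuming hello.py binds Flask to 0.0.0.0:5000) would be:
user@local# docker build -t efc/fo-web .
user@local# docker run -d -p 5000:5000 efc/fo-web python hello.py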
Image Layers
Each instruction in a Dockerfile results in a new image layer
The new layer is created by starting a container using the image of the previous layer, executing the Dockerfile instruction and saving a new image
They're like git commits or changesets for filesystems.
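You can inspect an image's layers with docker history, which shows one row per layer; for the image built earlier:
user@local# docker history efc/fo-web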
Docker caches each layer in order to speed up the building of images
The cache is used for an instruction if:
there is a layer in the cache that has exactly the same instruction and parent layer (even spurious spaces will invalidate the cache).
Also, in the case of COPY and ADD instructions, the cache will be invalidated if the checksum or metadata for any of the files has changed.
If you need to invalidate the cache, you can run docker build with the
--no-cache argument
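For example, forcing a full rebuild of the image built earlier:
user@local# docker build --no-cache -t efc/fo-web .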
It's important to decide which base image to start from; this choice needs to be thought through, since every layer is built on top of it.
We want to set up Jenkins so that:
Any code change triggers the pipeline
Builds the artifact/s
Runs unit tests and code quality checks
Builds and provisions new Docker images
Runs integration tests and, if everything passes, pushes the images to a Docker registry
Promotes those stable images to QA/Sandbox so the relevant people can test them
Docker Hub Notification: Triggers downstream jobs when a tagged container is pushed to Docker Hub
Docker Traceability: identifies which build pushed a particular container and displays it on the Jenkins build page
Docker Custom Build Environment: specifies customized build environments as Docker containers
Docker: Use a docker host to dynamically provision a slave, run a single build, then tear-down
Docker Build and Publish: builds projects that have a Dockerfile and pushes the resulting tagged image to Docker Hub
Docker Traceability
Pipeline plugin
Continuous Delivery as Code
Configuration in Source Repositories
Reusable Definitions
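Note: the gradle step used in the pipeline below is not a Pipeline built-in; presumably it is a small helper defined at the top of the script, along the lines of:
def gradle(command) { sh "./gradlew ${command}" }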
stage 'compileAndUnit'
node {
    git branch: 'master', url: 'https://github.com/lordofthejars/starwars.git'
    gradle 'clean test'
    stash excludes: 'build/', includes: '**', name: 'source'
    stash includes: 'build/jacoco/*.exec', name: 'unitCodeCoverage'
    step([$class: 'JUnitResultArchiver', testResults: '**/build/test-results/*.xml'])
}

stage 'codeQuality'
parallel 'pmd': {
    node {
        unstash 'source'
        gradle 'pmdMain'
        step([$class: 'PmdPublisher', pattern: 'build/reports/pmd/*.xml'])
    }
}, 'jacoco': {
    node {
        unstash 'source'
        unstash 'unitCodeCoverage'
        gradle 'jacocoTestReport'
    }
}
Less click-and-type, more code
Docker Pipeline integration
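// docker.withRegistry(url, credentialsId) wraps everything in the closure with a
// login to that registry; the second argument is the ID of credentials stored in Jenkins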
docker.withRegistry('https://lordofthejars-docker-continuous_delivery.bintray.io', 'd4fc3fa9-39f7-47ea-a57c-795642f90989') {
    git 'git@github.com:lordofthejars/busybox.git'
    def newApp = docker.build "lordofthejars-docker-continuous_delivery.bintray.io/lordofthejars/javatest:${env.BUILD_TAG}"
    newApp.push()
}
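// docker.image(...).withRun { ... } starts a container from the image, runs the
// closure body (here a test script), and stops and removes the container afterwards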
docker.image('lordofthejars/javatest').withRun { c ->
    sh './executeTests.sh'
}
That's all folks!! any questions???
By Carlos María Cornejo Crespo