Asmir Mustafic
(and a couple of other tools)
May 2017 - NomadPHP
Software Engineer and Consultant
Germany (Berlin), Italy (Venice), Bosnia
How to
"From my local application
to the world?"
FTP / SFTP / ssh / rsync / bash
and so on...
More machines, environments, tests, configurations....
Docker automates the deployment of applications inside software containers.
Docker containers can be deployed on any server that has docker support
Docker decouples application dependencies from
operating system dependencies
Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications, whether on laptops, data center VMs, or the cloud.
docker-compose is a tool for defining and running multi-container Docker applications
Docker Machine is a tool that lets you install Docker Engine on virtual hosts, and manage the hosts with docker-machine commands
machine-share is a tool that exports the configuration of a machine created/imported via docker-machine to a single tar.gz file
https://github.com/grinnery/machine-share
Create a local development environment
with Docker
# docker-compose.yaml
version: '2'
services:
  db:
    image: postgres:9.6
    ports: # used to debug
      - "5050:5432"
    volumes:
      - .docker_data/pg-data:/var/lib/postgresql/data/pgdata
  www:
    image: goetas/nginx:${TAG}
    build: ./docker/nginx
    ports: # used to debug and develop, http://localhost:8080/
      - "8080:80"
    depends_on:
      - php
    volumes_from:
      - data
  php:
    image: goetas/php:${TAG}
    build: ./docker/php-fpm
    volumes_from:
      - data
  data:
    image: goetas/data:${TAG}
    build: .
    volumes:
      - .:/var/www
# docker/php-fpm/Dockerfile
FROM php:7-fpm
RUN curl https://getcomposer.org/composer.phar > /usr/local/bin/composer \
&& chmod a+x /usr/local/bin/composer
# customized ini directives for my app
COPY ini/app.ini /usr/local/etc/php/conf.d/app.ini
WORKDIR /var/www
# docker/nginx/Dockerfile
FROM nginx:1
COPY conf /etc/nginx
# Dockerfile
FROM busybox
COPY . /var/www
VOLUME /var/www
ENTRYPOINT busybox tail -f /dev/null
Docker volumes are a good alternative...
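As a sketch of that alternative (service and volume names here are illustrative, not from the project), the `data` container above could be replaced by a named volume declared directly in the compose file:

```yaml
# hypothetical fragment: a named volume instead of a data container
version: '2'
services:
  php:
    image: goetas/php:${TAG}
    volumes:
      - app-code:/var/www
volumes:
  app-code:
```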
Staging, test and so on...
Create locally your new application
and...
Push your code to a VCS server
Configure your VCS system to trigger a build on some server
github/bitbucket/gitlab
already have integrations with almost every CI server, such as
TravisCI, CircleCI, Jenkins, Bamboo...
get dependencies
run tests
prepare images
(and push images to the registry [step 3])
run next steps (optional)
trigger deploy
(running build means just running some bash commands)
# infrastructure deps
docker-compose up -d --build
# application deps
docker exec php_container_name composer install -o
# docker exec node_container_name npm install
# docker exec other_container_name other command
Dependencies
# load some test data/fixtures into database
magic command here
# run tests
phpunit
Tests
You have tests, right?
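What the "magic command" is depends on your framework; for a Symfony-style app it might look like the following. The `dkexec` helper and the console commands are assumptions, not part of the project — `DOCKER` can be overridden (e.g. with `echo`) to dry-run without a Docker daemon:

```shell
# Hypothetical helper: run a command inside the php container.
# Override DOCKER (e.g. DOCKER=echo) to dry-run without a Docker daemon.
dkexec() { "${DOCKER:-docker}" exec php_container_name "$@"; }

# load test fixtures (Symfony-style command, an assumption):
# dkexec php bin/console doctrine:fixtures:load --no-interaction

# run the test suite:
# dkexec vendor/bin/phpunit
```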
# rebuild to include composer vendor folder (depends on your app)
docker-compose build data
# login to docker registry
docker login -e $EMAIL -u $USERNAME -p $PASSWORD
# set the target version
export TAG="$BRANCH_NAME"
# push to docker registry
docker-compose bundle --push-images
Step 3. push to docker registry
Decide whether to deploy or not
based mostly on
commit content, branch name, or something else
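A minimal sketch of such a decision based on the branch name (which branches count as deployable is an assumption — adapt the list to your workflow):

```shell
# Decide whether a branch should trigger a deploy.
should_deploy() {
    case "$1" in
        master|staging|release-*) return 0 ;;  # deployable branches
        *) return 1 ;;                         # everything else: skip
    esac
}

if should_deploy "${BRANCH_NAME:-}"; then
    echo "deploying ${BRANCH_NAME}"
else
    echo "skipping deploy"
fi
```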
# create our instance, if not already done
docker-machine create --driver amazonec2 aws01
# syntax docker-machine
# docker-machine create [options] --driver [driver-options] machine-name
# export machine credentials to a file named aws01.tar.gz
machine-export aws01
This code can be executed on the CI server or manually
(depending on your workflow)
# get docker machine credentials
machine-import aws01.tar.gz
# tell the docker client to use instance aws01
eval "$(docker-machine env aws01)"
# set the deploy target
export TAG="$BRANCH_NAME"
# download latest docker images
docker-compose -f docker-compose.live.yml pull
# start your fresh application
docker-compose -f docker-compose.live.yml up -d
This code is executed on the CI server
# docker-compose.live.yaml
version: '2'
services:
  db:
    image: postgres:9.6
    volumes:
      - /var/pg-data:/var/lib/postgresql/data/pgdata
  data:
    image: goetas/data:${TAG}
  php:
    image: goetas/php:${TAG}
    volumes_from:
      - data
  www:
    image: goetas/nginx:${TAG}
    ports:
      - "80:80"
    depends_on:
      - php
    volumes_from:
      - data
docker-compose.live.yml
Database migrations
# after deploy
# run database migrations or other post-deploy tasks
docker exec container_name command
.dockerignore
docker
tests
docker-compose*
circle.yml
Dockerfile
.docker_data
.git
**/.git
.gitignore
.idea
web/app_dev.php
**.css.map
var/cache/*
var/logs/*
var/sessions/*
var/uploads/*
machine-import
# https://github.com/grinnery/machine-share
machine-export <machine-name>
>> exported to <machine-name>.tar
machine-import <machine-name>.tar
>> imported
no downtime deploy
# deploy.sh [tag]
# nginx proxy + docker-gen (https://github.com/jwilder/docker-gen)
# ${name} is the project name
if [ "$(docker ps --filter=name="${name}Va" --format "{{.ID}}" | wc -l)" = "0" ]; then
    project_up="${name}Va"
    project_down="${name}Vb"
else
    project_up="${name}Vb"
    project_down="${name}Va"
fi
export TAG=$1
docker-compose -p $project_up pull
docker-compose -p $project_up up -d
# do health-check on $project_up and invoke docker-gen
docker-compose -p $project_down down -v
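The health check in the script above is left abstract; one way to sketch it (the helper and its check command are assumptions, not part of the original script) is a small retry loop that waits until the new stack answers before tearing the old one down:

```shell
# Run a check command until it succeeds, or give up after N attempts.
# In the deploy script the check could be e.g. `curl -fsS http://localhost/healthz`.
wait_for() {
    check="$1"; tries="${2:-10}"; delay="${3:-2}"
    i=0
    while [ "$i" -lt "$tries" ]; do
        if sh -c "$check" > /dev/null 2>&1; then
            return 0
        fi
        i=$((i + 1))
        sleep "$delay"
    done
    return 1
}
```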
Docker swarm
If your target machine is a swarm master, everything works as usual
except that you are distributing your application across a cluster of nodes! :)