DEVOPS OVERVIEW
adam meghji
Co-Founder and CTO at @universe
Entrepreneur & hacker.
Passionate about DevOps & APIs.
http://djmarmalade.com.
Always inspired!
the universe
tech stack
OVERVIEW
WEB APPS, MOBILE CLIENTS
PUBLIC-FACING MICROSERVICES
INTERNAL MICROSERVICES
DATABASES, PERSISTENCE
CHALLENGES for devs
Different languages:
Ruby, Node, Python
Different API frameworks:
Rails, Sinatra, Express, Flask
Different JS build tools:
Ember v1, Ember CLI, Webpack
THE INEVITABLE QUESTIONS ..
"How can I run the server?"
"How can I run the test suite?"
"How can I setup a blank DB schema?"
"How can I populate my DB with seed data?"
makefiles!
THE INEVITABLE QUESTIONS ..
"How can I run the server?"
make run
"How can I run the test suite?"
make test
"How can I setup a blank DB schema?"
make install
"How can I populate my DB with seed data?"
make data
Makefile
"How can I run the server?"
- rails s
- npm start
- ember server
- python web.py
- !?!?!?!?
make run
Makefile
"How can I run the test suite?"
- rspec spec
- npm test
- ember test
- sniffer
- !?!?!?!?
make test
MAKEFILE template
MAKEFILE: NODEJS (MINIMAL)
MAKEFILE: NODEJS (full)
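For reference, the Node.js flavour might look roughly like this, as a sketch following the run/test/install/data convention above (the npm scripts for schema setup and seeding are assumptions; recipe lines must be tab-indented):

.PHONY: run test install data

run:
	npm start

test:
	npm test

install:
	npm install
	npm run db:schema   # hypothetical npm script that creates a blank schema

data:
	npm run db:seed     # hypothetical npm script that loads seed data

The Rails and Flask variants keep the same target names and only swap the commands underneath.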
How can I run
the full stack?
FOREMAN
"How can I run the full stack?"
PROCFILE
Example:
foreman start
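For reference, a Procfile for a stack like the one above might look like this (process names, ports and the Sidekiq worker are illustrative):

web: bundle exec rails server -p 3000
api: npm start
worker: bundle exec sidekiq

foreman start then boots every process in the Procfile together and interleaves their logs.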
BUILD PIPELINE
BUILD PIPELINE
- Developer writes code and pushes it to GitHub
- CircleCI tests the commit
- 2nd Developer peer reviews the code, and merges to (master|staging) when ready
- 2nd Developer issues a deployment via ChatOps: "@uniiverse web production deploy"
- Hubot kicks off a deploy on the build server
- Build Server shells out to custom deploy script: "./launch.rb deploy web production"
DEPLOY SCRIPT
- BUILD: Assemble code & artifacts
- UPLOAD: Copy build to S3
- SYNC: Distribute to EC2 Instances
- MIGRATE: Execute data migrations
[1/4]: BUILD
- pulls git HEAD
- "bundle install --path vendor/bundle" if ruby
- "npm install node_modules/" if node
- "rake assets:precompile" if rails
- <etc. for any supported target language>
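A rough sketch of this BUILD step as shell, with the per-language detection expressed as file checks (the checks, branch and paths are assumptions about how launch.rb shells out):

#!/bin/bash
set -e

git fetch origin && git checkout origin/master   # pull git HEAD
if [ -f Gemfile ]; then
  bundle install --path vendor/bundle            # ruby
fi
if [ -f package.json ]; then
  npm install                                    # node
fi
if [ -f config/application.rb ]; then
  bundle exec rake assets:precompile             # rails
fi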
[2/4]: UPLOAD
- tar -czvf artifacts created in BUILD step
- Upload to S3
aws s3 cp app.tgz s3://...
[3/4]: SYNC
- identify EC2 instances tagged Name=web_production
- broadcast a command to initiate the deploy:
chef-client -j /etc/chef/first-boot.json
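A sketch of the SYNC step with the AWS CLI (discovery by Name tag as above; the SSH broadcast loop is an assumption, the real setup could equally use knife ssh or similar):

hosts=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=web_production" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text)

for host in $hosts; do
  ssh ubuntu@"$host" "sudo chef-client -j /etc/chef/first-boot.json" &   # re-apply cookbook to deploy
done
wait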
[4/4]: MIGRATE
- identify 1 random instance tagged Name=web_production
- run migration scripts:
docker exec web_production rake db:migrate
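And MIGRATE sketched the same way, picking one instance at random (shuf and the SSH user are assumptions):

host=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=web_production" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].PrivateIpAddress" --output text | tr '\t' '\n' | shuf -n 1)

ssh ubuntu@"$host" "sudo docker exec web_production rake db:migrate"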
platform
INFRASTRUCTURE
Embracing DevOps
How can we empower developers to
collaborate on platform infrastructure?
How can we safely issue cloud configuration changes?
How do we audit changes as we iterate and evolve?
How do we curate an amazing DevOps culture?
THE OLD WAY ...
console.aws.amazon.com + 1,000,000 mouse clicks
THE NEW WAY ...
terraform!
terraform makefile
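A Makefile wrapper of this kind is a common pattern; a sketch (targets and flags are assumptions, not the exact file; recipe lines must be tab-indented):

.PHONY: plan apply destroy

plan:
	terraform plan -out=terraform.tfplan

apply:
	terraform apply terraform.tfplan

destroy:
	terraform destroy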
TERRAFORM Base.tf
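A base.tf along these lines defines the shared pieces; the resource names, region and CIDRs below are illustrative:

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id     = "${aws_vpc.main.id}"
  cidr_block = "10.0.1.0/24"
}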
terraform app cluster
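An app cluster then wires a launch configuration, autoscaling group and ELB together, roughly like this (sizes, the AMI variable and the web security group are illustrative; note the Name tag that the deploy script later discovers instances by):

resource "aws_launch_configuration" "web_production" {
  image_id        = "${var.app_ami}"              # app-specific AMI baked by Packer
  instance_type   = "c3.large"
  user_data       = "${file("user-data.sh")}"
  security_groups = ["${aws_security_group.web.id}"]
}

resource "aws_autoscaling_group" "web_production" {
  launch_configuration = "${aws_launch_configuration.web_production.name}"
  min_size             = 2
  max_size             = 10
  vpc_zone_identifier  = ["${aws_subnet.public.id}"]
  load_balancers       = ["${aws_elb.web_production.name}"]

  tag {
    key                 = "Name"
    value               = "web_production"
    propagate_at_launch = true
  }
}

resource "aws_elb" "web_production" {
  subnets = ["${aws_subnet.public.id}"]

  listener {
    instance_port     = 80
    instance_protocol = "http"
    lb_port           = 80
    lb_protocol       = "http"
  }
}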
USER-data.sh
TL;DR - bootstrap chef & execute role[sidekiq]
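A user-data.sh of that shape could look roughly like this (it assumes the Chef client config and validation key were baked into the AMI by the base cookbooks; the role name follows the slide):

#!/bin/bash
set -e

# write the first-boot run list for this node
cat > /etc/chef/first-boot.json <<EOF
{ "run_list": [ "role[sidekiq]" ] }
EOF

# register with the Chef server and converge the role
chef-client -j /etc/chef/first-boot.json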
APP-specific AMI
- Ubuntu 14.04 LTS via AWS AMI
- cookbooks rendered to custom AMI via Packer (see the sketch after this list)
- [base] include_recipe 'apt'
- [base] include_recipe 'apt::unattended-upgrades'
- [base] include_recipe 'fail2ban'
- [base] include_recipe 'unii-chef' (deregister from chef.io on shutdown)
- [base] include_recipe 'unii-papertrail'
- [base] include_recipe 'unii-diamond::install'
- [docker] include_recipe 'aufs'
- [docker] include_recipe 'awscli'
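For reference, a Packer template that bakes cookbooks like these into an app-specific AMI looks roughly like this (region, source AMI and run_list are illustrative):

{
  "builders": [{
    "type": "amazon-ebs",
    "region": "us-east-1",
    "source_ami": "ami-xxxxxxxx",
    "instance_type": "c3.large",
    "ssh_username": "ubuntu",
    "ami_name": "web-production-{{timestamp}}"
  }],
  "provisioners": [{
    "type": "chef-solo",
    "cookbook_paths": ["cookbooks"],
    "run_list": ["base", "docker"]
  }]
}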
APP-specific AMI
- configure .dockercfg private repo authentication
- pull docker uniiverse/web-production (first time, thus entire image)
- write upstart configuration: /etc/init/web_production.conf
/usr/bin/docker run --rm --name web_production -v /home/ubuntu/deploy/web/production:/home/app/web -w /home/app/web -P -p 80:80 uniiverse/web-production 2>&1 | logger -t api_production
- write monit & docker memory health check (restart if > 75% MEM)
AUTOSCALING SEQUENCE
RE-APPLY COOKBOOK TO DEPLOY
DOCKER
CONTAINERS
Docker containers are used to encapsulate an app's runtime environment & all dependencies
For example:
- Ruby + bundler
- NodeJS + npm
- ImageMagick + libraries
- supervisord + nginx
- private keys + certs
(secrets, passbook, etc)
strategy #1:
thin containers
THIN CONTAINERS
"Container runs a fully-baked app server ENVIRONMENT"
DOES NOT include application code.
DOES NOT include application gems, node_modules, etc.
Instead, app code & vendorized libs reside on the host instance's FS,
and are exposed to the container's process via shared volumes.
Containers are much lighter, and infrequently built.
strategy #2:
container "binaries"
DOCKERFILE: nodejs (MINIMAL)
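A minimal Node.js Dockerfile in this spirit (base image tag and port are assumptions):

FROM node:0.12
WORKDIR /home/app/web

# install dependencies first so the layer caches between code changes
COPY package.json ./
RUN npm install

COPY . .
EXPOSE 80
CMD ["npm", "start"]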
DOCKERFILE: RAILS (MINIMAL)
DOCKERFILE: RAILS (FULL)
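A fuller Rails image might also bake in system libraries, gems, precompiled assets, and nginx managed by supervisord, along these lines (packages, Ruby version and paths are assumptions):

FROM ruby:2.2
RUN apt-get update && apt-get install -y nginx supervisor imagemagick libpq-dev

WORKDIR /home/app/web

# gems first for layer caching
COPY Gemfile Gemfile.lock ./
RUN bundle install --path vendor/bundle

COPY . .
RUN bundle exec rake assets:precompile

COPY supervisord.conf /etc/supervisor/conf.d/app.conf
EXPOSE 80
CMD ["supervisord", "-n"]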
supervisord.conf
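And a supervisord.conf of the kind referenced above, running nginx and the app server side by side in the container (program names and commands are assumptions):

[supervisord]
nodaemon=true

[program:nginx]
command=/usr/sbin/nginx -g "daemon off;"
autorestart=true

[program:rails]
command=bundle exec rails server -p 3000
directory=/home/app/web
autorestart=true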
METRICS
collector: diamond
Diamond is a python daemon that collects system metrics and publishes them to Graphite. It is capable of collecting cpu, memory, network, i/o, load and disk metrics.
Additionally, it features an API for implementing custom collectors for gathering metrics from almost any source.
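A custom collector is a small Python class; a sketch that publishes one app metric (the metric name and its source file are made up):

# hypothetical collector: reports queued background jobs
import diamond.collector

class QueueDepthCollector(diamond.collector.Collector):
    def collect(self):
        # read the value from wherever the app exposes it (assumption)
        with open('/var/run/app/queue_depth') as f:
            depth = int(f.read().strip())
        self.publish('queue.depth', depth)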
CHARTING: GRAFANA
Gorgeous metric viz, dashboards & editors for Graphite, InfluxDB & OpenTSDB
MONITORING:
Pingdom
LOG AGGREGATION:
Papertrail
helpful
devops tools
./ssh.sh
quick CLI to SSH into any instance:
./ssh.sh <app> <environment> [command]
./ssh.sh web staging
./ssh.sh web staging free -m
Resolves EC2 instances by Name tag
Solves the problem of server discovery for remote access
Uses the Jump Server to access the instance
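A sketch of how ssh.sh can be put together (the jump host name, SSH user and ProxyCommand hop are assumptions):

#!/bin/bash
# usage: ./ssh.sh <app> <environment> [command]
app=$1; env=$2; shift 2

# resolve a running instance by its Name tag, e.g. web_staging
host=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=${app}_${env}" "Name=instance-state-name,Values=running" \
  --query "Reservations[0].Instances[0].PrivateIpAddress" --output text)

# hop through the jump server to reach the instance on the private network
ssh -t -o ProxyCommand="ssh -W %h:%p jump.example.com" ubuntu@"$host" "$@"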
./attach.sh
quick CLI to SSH into any app container:
./attach.sh <app> <environment> [command]
./attach.sh web staging
./attach.sh web staging bundle exec rails console staging
./attach.sh web staging bundle exec rake cache:clear
Uses ./ssh.sh and docker exec
Simplifies connecting to the running container.
Perfect for opening an interactive console, etc.
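attach.sh can then stay tiny, composing ./ssh.sh with docker exec (container naming follows the <app>_<environment> convention above; defaulting to bash is an assumption):

#!/bin/bash
# usage: ./attach.sh <app> <environment> [command]
app=$1; env=$2; shift 2

# run the command inside the container, or open an interactive shell if none was given
./ssh.sh "$app" "$env" sudo docker exec -it "${app}_${env}" "${@:-bash}"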
THE
OUTCOME?
BENEFITS
A consistent way to ship microservices in containers, regardless of underlying language, framework, or dependencies
Services are multi-region, can auto-scale with traffic, and auto-heal during failure
Developers have 1 way of shipping code, with gory details neatly abstracted. No longer daunting to add a new microservice.
Helpful tools and scripts can be written once and reused everywhere.
AREA OF OPTIMIZATION #1
OPTIMIZE COST:
Requires lots of ELB instances
(2 environments * N microservices)
ONE SOLUTION?
1 ELB & 1 ASG of HAProxy machines
incl. subdomain routing,
health detection, etc.
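A sketch of the HAProxy side of that idea, routing by subdomain with health checks (hostnames and backend addresses are illustrative):

frontend http-in
    bind *:80
    acl host_web hdr(host) -i web.example.com
    use_backend web_production if host_web

backend web_production
    option httpchk GET /healthcheck
    server web1 10.0.1.12:80 check
    server web2 10.0.1.13:80 check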
AREA OF OPTIMIZATION #2
OPTIMIZE UTILIZATION:
EC2 instances are single-purpose
(only run 1 docker container)
ONE SOLUTION?
Marathon: execute long-running tasks via Mesos & REST API
All instance CPU & RAM is pooled, and tasks (i.e. containers) are evenly distributed by resource utilization
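For reference, a Marathon app definition posted to its REST API looks roughly like this (resource sizes and port mapping are illustrative):

POST /v2/apps
{
  "id": "/web-production",
  "instances": 4,
  "cpus": 1,
  "mem": 1024,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "uniiverse/web-production",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}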
Kubernetes?
happy hacking :)
https://universe.com
@AdamMeghji
adam@universe.com