Introduction to Ansible
Alejandro Guirao Rodríguez
Architecture overview
Source: sysadmincasts
Environment installation
Follow the instructions at
Vagrantfile
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
  config.ssh.insert_key = false
  config.vm.provider :virtualbox do |vb|
    vb.customize ["modifyvm", :id, "--memory", "256"]
  end

  # Web server 1
  config.vm.define "web01" do |web|
    web.vm.hostname = "web01"
    web.vm.box = "ubuntu/trusty64"
    web.vm.network :private_network, ip: "10.0.15.21"
  end

  # Web server 2
  config.vm.define "web02" do |web|
    web.vm.hostname = "web02"
    web.vm.box = "ubuntu/trusty64"
    web.vm.network :private_network, ip: "10.0.15.22"
  end

  # Database server, with an extra 5 GB disk for the data
  config.vm.define "db01" do |db|
    db.vm.hostname = "db1"
    db.vm.box = "ubuntu/trusty64"
    db.vm.network :private_network, ip: "10.0.15.23"
    db.vm.provider "virtualbox" do |vb|
      vb.customize ['createhd', '--filename', 'disk.vdi', '--size', 5 * 1024]
      vb.customize ['storageattach', :id, '--storagectl', 'SATAController', '--port', 1, '--device', 0, '--type', 'hdd', '--medium', 'disk.vdi']
    end
  end
end
Host file (inventory)
[web]
10.0.15.21
10.0.15.22
[db]
10.0.15.23
# Variables that will be applied to all servers
[all:vars]
ansible_ssh_user=vagrant
More info and options: docs.ansible.com/intro_inventory.html
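For reference, the same ini format also supports host ranges and groups of groups; a hypothetical extension of the inventory above (the servers group name is illustrative):
```ini
[web]
10.0.15.[21:22]      # range syntax covering both web servers

[servers:children]   # a group whose members are other groups
web
db

[web:vars]           # variables scoped to a single group
ansible_ssh_user=vagrant
```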
Ad Hoc Commands
- Throw-away, one-time actions
- Execution of an Ansible module in each host of a group
- Run with the "ansible" command using this syntax:
ansible <group> -m <module> -a "<options>" -i <host_file>
Let's assume that the host file is located at /etc/ansible/hosts
Hello World!: Ping
(ansible)alex ~ $ ansible all -m ping
10.0.15.22 | success >> {
"changed": false,
"ping": "pong"
}
10.0.15.21 | success >> {
"changed": false,
"ping": "pong"
}
10.0.15.23 | success >> {
"changed": false,
"ping": "pong"
}
ansible all -m ping
Let's check the hostnames
(ansible)alex ~ $ ansible all -a "cat /etc/hostname"
10.0.15.22 | success | rc=0 >>
web02
10.0.15.21 | success | rc=0 >>
web01
10.0.15.23 | success | rc=0 >>
db1
ansible all -a "cat /etc/hostname"
If no module is specified, then the command module is used
Let's install ntp, first in the db
(ansible)alex ~ $ ansible db -s -m apt -a "name=ntp update_cache=yes"
10.0.15.23 | success >> {
"changed": true,
"stderr": "",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n libopts25\nSuggested packages:\n ntp-doc\nThe following NEW packages will be installed:\n libopts25 ntp\n0 upgraded, 2 newly installed, 0 to remove and 62 not upgraded.\nNeed to get 473 kB of archives.\nAfter this operation, 1676 kB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu/ trusty/main libopts25 amd64 1:5.18-2ubuntu2 [55.3 kB]\nGet:2 http://archive.ubuntu.com/ubuntu/ trusty-updates/main ntp amd64 1:4.2.6.p5+dfsg-3ubuntu2.14.04.2 [418 kB]\nFetched 473 kB in 5s (82.2 kB/s)\nSelecting previously unselected package libopts25:amd64.\n(Reading database ... 60960 files and directories currently installed.)\nPreparing to unpack .../libopts25_1%3a5.18-2ubuntu2_amd64.deb ...\nUnpacking libopts25:amd64 (1:5.18-2ubuntu2) ...\nSelecting previously unselected package ntp.\nPreparing to unpack .../ntp_1%3a4.2.6.p5+dfsg-3ubuntu2.14.04.2_amd64.deb ...\nUnpacking ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\nProcessing triggers for man-db (2.6.7.1-1ubuntu1) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\nSetting up libopts25:amd64 (1:5.18-2ubuntu2) ...\nSetting up ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\n * Starting NTP server ntpd\n ...done.\nProcessing triggers for libc-bin (2.19-0ubuntu6.5) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\n"
}
ansible db -s -m apt -a "name=ntp update_cache=yes"
The -s option runs the module with sudo
Let's install ntp in every machine
(ansible)alex ~ $ ansible all -s -m apt -a "name=ntp update_cache=yes"
10.0.15.23 | success >> {
"changed": false
}
10.0.15.22 | success >> {
"changed": true,
"stderr": "",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n libopts25\nSuggested packages:\n ntp-doc\nThe following NEW packages will be installed:\n libopts25 ntp\n0 upgraded, 2 newly installed, 0 to remove and 62 not upgraded.\nNeed to get 473 kB of archives.\nAfter this operation, 1676 kB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu/ trusty/main libopts25 amd64 1:5.18-2ubuntu2 [55.3 kB]\nGet:2 http://archive.ubuntu.com/ubuntu/ trusty-updates/main ntp amd64 1:4.2.6.p5+dfsg-3ubuntu2.14.04.2 [418 kB]\nFetched 473 kB in 7s (67.3 kB/s)\nSelecting previously unselected package libopts25:amd64.\n(Reading database ... 60960 files and directories currently installed.)\nPreparing to unpack .../libopts25_1%3a5.18-2ubuntu2_amd64.deb ...\nUnpacking libopts25:amd64 (1:5.18-2ubuntu2) ...\nSelecting previously unselected package ntp.\nPreparing to unpack .../ntp_1%3a4.2.6.p5+dfsg-3ubuntu2.14.04.2_amd64.deb ...\nUnpacking ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\nProcessing triggers for man-db (2.6.7.1-1ubuntu1) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\nSetting up libopts25:amd64 (1:5.18-2ubuntu2) ...\nSetting up ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\n * Starting NTP server ntpd\n ...done.\nProcessing triggers for libc-bin (2.19-0ubuntu6.5) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\n"
}
10.0.15.21 | success >> {
"changed": true,
"stderr": "",
"stdout": "Reading package lists...\nBuilding dependency tree...\nReading state information...\nThe following extra packages will be installed:\n libopts25\nSuggested packages:\n ntp-doc\nThe following NEW packages will be installed:\n libopts25 ntp\n0 upgraded, 2 newly installed, 0 to remove and 62 not upgraded.\nNeed to get 473 kB of archives.\nAfter this operation, 1676 kB of additional disk space will be used.\nGet:1 http://archive.ubuntu.com/ubuntu/ trusty/main libopts25 amd64 1:5.18-2ubuntu2 [55.3 kB]\nGet:2 http://archive.ubuntu.com/ubuntu/ trusty-updates/main ntp amd64 1:4.2.6.p5+dfsg-3ubuntu2.14.04.2 [418 kB]\nFetched 473 kB in 6s (76.9 kB/s)\nSelecting previously unselected package libopts25:amd64.\n(Reading database ... 60960 files and directories currently installed.)\nPreparing to unpack .../libopts25_1%3a5.18-2ubuntu2_amd64.deb ...\nUnpacking libopts25:amd64 (1:5.18-2ubuntu2) ...\nSelecting previously unselected package ntp.\nPreparing to unpack .../ntp_1%3a4.2.6.p5+dfsg-3ubuntu2.14.04.2_amd64.deb ...\nUnpacking ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\nProcessing triggers for man-db (2.6.7.1-1ubuntu1) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\nSetting up libopts25:amd64 (1:5.18-2ubuntu2) ...\nSetting up ntp (1:4.2.6.p5+dfsg-3ubuntu2.14.04.2) ...\n * Starting NTP server ntpd\n ...done.\nProcessing triggers for libc-bin (2.19-0ubuntu6.5) ...\nProcessing triggers for ureadahead (0.100.0-16) ...\n"
}
ansible all -s -m apt -a "name=ntp update_cache=yes"
Idempotency
Notice that the db reports "changed": false in the output
Ansible modules detect whether any action is needed to reach the desired state. If nothing needs to change, nothing is done
This property is called idempotency, and it is key to Configuration Management: applying the same commands over and over again won't have unexpected results
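As a sketch of the difference (the path is invented for the example), compare a raw command with the equivalent module call:
```yaml
# Not idempotent: mkdir fails on the second run because the directory already exists
- name: Create a data directory the shell way
  command: mkdir /opt/data

# Idempotent: reports "changed" the first time, "ok" on every run after that
- name: Create a data directory the Ansible way
  file: path=/opt/data state=directory
```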
Let's restart the ntp service
(ansible)alex ~ $ ansible all -s -m service -a "name=ntp state=restarted"
10.0.15.23 | success >> {
"changed": true,
"name": "ntp",
"state": "started"
}
10.0.15.21 | success >> {
"changed": true,
"name": "ntp",
"state": "started"
}
10.0.15.22 | success >> {
"changed": true,
"name": "ntp",
"state": "started"
}
ansible all -s -m service -a "name=ntp state=restarted"
Enough! No more one-liners!
Ad-hoc commands are useful for quick-and-dirty operations, but configuration management calls for a more disciplined way of working
Playbooks are YAML files that specify a list of plays
Each play is a series of tasks applied to a group of hosts
Playbooks are run with the ansible-playbook command
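The general shape of a playbook, as a minimal sketch (the host group comes from the inventory above; the task is just a placeholder):
```yaml
---
# A playbook is a list of plays; each play maps a host group to tasks
- hosts: web            # group taken from the inventory
  sudo: yes             # run the tasks with sudo (Ansible 1.x syntax)
  tasks:
    - name: Each task invokes one module with its options
      ping:
```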
A playbook to deploy nginx
---
- hosts: web
  sudo: yes
  vars:
    - external_port: 80
    - internal_port: 8000
  tasks:
    - name: Add the apt repository for nginx
      apt_repository: repo="ppa:nginx/stable" update_cache=yes

    - name: Install nginx
      apt: name=nginx state=present

    - name: Ensure that nginx is started
      service: name=nginx state=started

    - name: Remove default site
      file: path=/etc/nginx/sites-enabled/default state=absent
      notify:
        - Restart nginx

    - name: Configure a site
      template: src=templates/site.j2 dest=/etc/nginx/sites-available/site

    - name: Enable a site
      file: src=/etc/nginx/sites-available/site dest=/etc/nginx/sites-enabled/site state=link
      notify:
        - Restart nginx

  handlers:
    - name: Restart nginx
      service: name=nginx state=restarted
deploy-nginx.yml
The site template
server {
    listen {{ external_port }};
    server_name {{ ansible_hostname }};

    location / {
        proxy_pass http://localhost:{{ internal_port }};
    }
}
- Ansible uses Jinja2 for templating
- The variables can be defined in several ways: in playbooks, in the inventory, in separate files, at the command line...
- Some variables are automatically discovered by Ansible and available for use in playbooks: they are called facts. This template uses one of them: ansible_hostname
templates/site.j2
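The same two variables could equally be defined in the inventory instead of the play; for example (same values as in the playbook above):
```ini
[web:vars]
external_port=80
internal_port=8000
```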
About facts
You can see the facts of the hosts using the setup module
ansible all -m setup
You'll get a very long list including hostnames, IPs, hardware and devices attached, installed software versions...
If you want to speed up the execution of a playbook, at the risk of losing those values, set gather_facts: no in the play
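A sketch of a play that skips fact gathering (the ansible_* facts will not be available to its tasks or templates):
```yaml
---
- hosts: web
  gather_facts: no   # skip the setup step for speed
  tasks:
    - name: Facts are not needed for a simple ping
      ping:
```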
Let's run the playbook
$ ansible-playbook deploy-nginx.yml
PLAY [web] ********************************************************************
GATHERING FACTS ***************************************************************
ok: [10.0.15.21]
ok: [10.0.15.22]
TASK: [Add the apt repository for nginx] **************************************
changed: [10.0.15.21]
changed: [10.0.15.22]
TASK: [Install nginx] *********************************************************
changed: [10.0.15.22]
changed: [10.0.15.21]
TASK: [Ensure that nginx is started] ******************************************
ok: [10.0.15.21]
ok: [10.0.15.22]
TASK: [Remove default site] ***************************************************
changed: [10.0.15.21]
changed: [10.0.15.22]
TASK: [Configure a site] ******************************************************
changed: [10.0.15.21]
changed: [10.0.15.22]
TASK: [Enable a site] *********************************************************
changed: [10.0.15.21]
changed: [10.0.15.22]
TASK: [Remove default site] ***************************************************
ok: [10.0.15.21]
ok: [10.0.15.22]
NOTIFIED: [Restart nginx] *****************************************************
changed: [10.0.15.21]
changed: [10.0.15.22]
PLAY RECAP ********************************************************************
10.0.15.21 : ok=9 changed=6 unreachable=0 failed=0
10.0.15.22 : ok=9 changed=6 unreachable=0 failed=0
Encapsulation: roles
In order to create reusable components, we can define roles
Roles are a way to relate certain elements to a group of servers:
- Variables
- Tasks
- Handlers
- Files and templates
- Dependencies on other roles
Let's create a role to install MongoDB
$ tree roles
roles
└── mongodb_server
    ├── defaults
    │   └── main.yml
    ├── handlers
    │   └── main.yml
    ├── README.md
    └── tasks
        └── main.yml
Tasks
---
- name: Add the apt-key for mongodb-org
  apt_key: keyserver=hkp://keyserver.ubuntu.com:80 id=0x7F0CEB10
  sudo: yes

- name: Add the apt repository for mongodb-org
  apt_repository: repo="deb http://downloads-distro.mongodb.org/repo/ubuntu-upstart dist 10gen" update_cache=yes
  sudo: yes

- name: Make sure mongodb-org is installed
  apt: name="mongodb-org" state=latest
  sudo: yes

- name: Enable small journal files if applicable
  lineinfile: dest=/etc/mongod.conf regexp="^smallfiles =" line="smallfiles = true"
  when: small_files
  sudo: yes
  notify:
    - Restart mongodb-org

- name: Bind mongodb to every IP
  lineinfile: dest=/etc/mongod.conf regexp="^bind_ip =" state=absent
  sudo: yes
  notify:
    - Restart mongodb-org

- name: Make sure mongod is started
  service: name=mongod state=started
  sudo: yes
roles/mongodb_server/tasks/main.yml
Handlers
---
- name: Restart mongodb-org
  service: name=mongod state=restarted
  sudo: yes
roles/mongodb_server/handlers/main.yml
Defaults
---
small_files: False
roles/mongodb_server/defaults/main.yml
Using roles in a playbook
---
- hosts: db
  sudo: yes
  roles:
    - mongodb_server
  tasks:
    - name: Install the pre-requisites
      apt: name={{ item }} update_cache=yes
      with_items:
        - lvm2

    - name: Create a vg named vgdata with /dev/sdb
      lvg: vg=vgdata pvs=/dev/sdb

    - name: Create a lv named lvdata01 in vgdata
      lvol: vg=vgdata lv=lvdata01 size=80%VG

    - name: Create an ext4 filesystem in /dev/mapper/vgdata-lvdata01
      filesystem: fstype=ext4 dev=/dev/mapper/vgdata-lvdata01

    - name: Make sure mongod is stopped
      service: name=mongod state=stopped

    - name: Mount the directory
      mount: name=/var/lib/mongodb src=/dev/mapper/vgdata-lvdata01 fstype=ext4 state=mounted

    - name: Re-establish permissions for the directory
      file: path=/var/lib/mongodb owner=mongodb group=nogroup state=directory recurse=yes

    - name: Make sure mongod is started
      service: name=mongod state=started
deploy-mongo.yml
Install MongoDB!
$ ansible-playbook deploy-mongo.yml
PLAY [db] *********************************************************************
GATHERING FACTS ***************************************************************
ok: [10.0.15.23]
TASK: [mongodb_server | Add the apt-key for mongodb-org] **********************
ok: [10.0.15.23]
TASK: [mongodb_server | Add the apt repository for mongodb-org] ***************
ok: [10.0.15.23]
TASK: [mongodb_server | Make sure mongodb-org is installed] *******************
changed: [10.0.15.23]
TASK: [mongodb_server | Enable small journal files if applicable] *************
skipping: [10.0.15.23]
TASK: [mongodb_server | Bind mongodb to every IP] *****************************
changed: [10.0.15.23]
TASK: [mongodb_server | Make sure mongod is started] **************************
ok: [10.0.15.23]
TASK: [Install the pre-requisites] ********************************************
changed: [10.0.15.23] => (item=lvm2)
TASK: [Create a vg named vgdata with /dev/sdb] ********************************
changed: [10.0.15.23]
TASK: [Create a lv named lvdata01 in vgdata] **********************************
changed: [10.0.15.23]
TASK: [Create an ext4 filesystem in /dev/mapper/vgdata-lvdata01] **************
changed: [10.0.15.23]
TASK: [Make sure mongod is stopped] *******************************************
changed: [10.0.15.23]
TASK: [Mount the directory] ***************************************************
changed: [10.0.15.23]
TASK: [Re-establish permissions for the directory] ****************************
changed: [10.0.15.23]
TASK: [Make sure mongod is started] *******************************************
changed: [10.0.15.23]
NOTIFIED: [mongodb_server | Restart mongodb-org] ******************************
changed: [10.0.15.23]
PLAY RECAP ********************************************************************
10.0.15.23 : ok=15 changed=11 unreachable=0 failed=0
Fun with AWS
There are many modules to manage AWS:
- Launch and terminate EC2 instances
- Manage EIP
- S3 objects upload / download
- ELB management
- RDS
- Route 53
- Elasticache...
In order to use them, you will need to:
export AWS_ACCESS_KEY_ID='AK123'
export AWS_SECRET_ACCESS_KEY='abc123'
pip install boto
Let's create an EC2 instance
We will create the instance using the ec2 module
Once it is created, we register the result of the module in a variable so we can access the data of the instance.
We then use the add_host module to dynamically add the instance to the inventory...
... so that we can provision it in the next play of the playbook!
# Example to create an AWS instance and install MongoDB
#
# The region, key_name, image and security group (group) must match your configuration
#
# Before running the playbook, ensure that:
# - The key 'demo-key' has been added to ssh-agent
# - export ANSIBLE_HOST_KEY_CHECKING=False
# - The environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY match your credentials
#
---
- hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: EC2 provisioning of MongoDB instance
      ec2:
        region: eu-west-1
        key_name: demo-key
        instance_type: t2.micro
        image: ami-234ecc54
        wait: yes
        instance_tags:
          Name: mongo-server
        exact_count: 1
        count_tag:
          Name: mongo-server
        group:
          - SSH-ACCESS
        volumes:
          # /dev/xvdc of 5 GB
          - device_name: /dev/xvdc
            volume_size: 5
      register: ec2

    - name: Wait for SSH to come up
      wait_for: host={{ item.public_ip }} port=22 delay=10 timeout=320 state=started
      with_items: ec2.instances

    - name: Add the host to the group to be provisioned
      add_host: name={{ item.public_ip }} groupname=to_be_provisioned
      with_items: ec2.instances

- hosts: to_be_provisioned
  user: ubuntu
  sudo: yes
  roles:
    - { role: mongodb_server, small_files: true }
  tasks:
    - name: Install the pre-requisites
      apt: name={{ item }} update_cache=yes
      with_items:
        - lvm2

    - name: Create a vg named vgdata with /dev/xvdc
      lvg: vg=vgdata pvs=/dev/xvdc

    - name: Create a lv named lvdata01 in vgdata
      lvol: vg=vgdata lv=lvdata01 size=80%VG

    - name: Create an ext4 filesystem in /dev/mapper/vgdata-lvdata01
      filesystem: fstype=ext4 dev=/dev/mapper/vgdata-lvdata01

    - name: Make sure mongod is stopped
      service: name=mongod state=stopped

    - name: Mount the directory
      mount: name=/var/lib/mongodb src=/dev/mapper/vgdata-lvdata01 fstype=ext4 state=mounted

    - name: Re-establish permissions for the directory
      file: path=/var/lib/mongodb owner=mongodb group=nogroup state=directory recurse=yes

    - name: Make sure mongod is started
      service: name=mongod state=started
Here we go!
deploy-aws-mongo.yml
Managing dynamic inventory
Hardcoding IPs in a file is not a viable way to manage a cloud: we need a dynamic inventory
Ansible provides dynamic inventory scripts for many clouds. The files ec2.py and ec2.ini for AWS are in github.com/ansible/ansible/tree/devel/plugins/inventory
You can inspect your cloud just typing:
./ec2.py --list
The dynamic inventory script creates groups based on tags (super cool!)
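For example, the instance tagged Name=mongo-server earlier would land in a tag-derived group. The exact group name depends on the ec2.py version and ec2.ini settings (dashes are often replaced with underscores), so treat the name below as an assumption:
```yaml
---
- hosts: tag_Name_mongo-server   # group generated by ec2.py from the Name tag
  user: ubuntu
  tasks:
    - name: Check that the tagged instances answer
      ping:
```
Run it against the dynamic inventory with ansible-playbook -i ec2.py, as in the rolling-update example below.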
Rolling updates
Using pre_tasks and post_tasks, and modules to manage load balancers, we can create a playbook to perform a rolling update
The serial keyword controls the number of servers that are updated concurrently
Rolling update in AWS
---
- hosts: web
  serial: 3
  pre_tasks:
    - name: Gathering ec2 facts
      action: ec2_facts

    - name: Instance de-register
      local_action:
        module: ec2_elb
        instance_id: "{{ ansible_ec2_instance_id }}"
        state: 'absent'
  roles:
    - role_that_updates
  post_tasks:
    - name: Instance register
      local_action:
        module: ec2_elb
        instance_id: "{{ ansible_ec2_instance_id }}"
        ec2_elbs: "{{ item }}"
        state: 'present'
      with_items: ec2_elbs
ansible-playbook -i ec2.py rolling-update.yml
rolling-update.yml
More advanced topics
- Complex templating, filters and variable manipulation
- Asynchronous actions
- Local playbooks and delegation
- Ansible pull mode
- Using ansible-vault to encrypt data
- Development of new modules
- ...
Further reading
Great documentation at docs.ansible.com
Examples at github.com/ansible/ansible-examples
Good books:
Happy hacking!