A clustering daemon for Docker
Docker Swarm does not proxy network traffic; it handles only the Docker commands themselves. This means that if Swarm goes down, all of the running containers are unaffected, though you can't change their state.
When Swarm is started or restarted, it rebuilds the database of Docker hosts automatically. This means that as long as your discovery protocol is available, Swarm should be able to figure out what the cluster looks like.
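Because the manager speaks the plain Docker API, you can check what Swarm rediscovered with an ordinary Docker client. A minimal sketch, assuming the manager is reachable at `dockerswarm01:2376` (the endpoint used later in these notes):

```shell
# Point a normal Docker client at the Swarm manager;
# `info` lists every Docker host Swarm has rediscovered,
# along with its labels and resource totals
docker -H tcp://dockerswarm01:2376 info
```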
- Made up of multiple discrete Docker hosts behind a single Docker-like interface
- Each Docker host is assigned a set of tags for scheduling
- Ability to partition a cluster
Basically: enough to get by
In order to help identify the capabilities of a given Docker host, it should be started with a set of tags that can be used for filtering (determining which Docker hosts are eligible to schedule a container).
vagrant@dockerhost01:~$ cat /etc/default/docker
# Managed by Ansible
# Docker Upstart and SysVinit configuration file
# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="--dns=8.8.8.8 --dns=8.8.4.4 \
--label status=master \
--label disk=ssd \
--label zone=internal \
--tlsverify \
--tlscacert=/etc/pki/tls/ca.pem \
--tlscert=/etc/pki/tls/dockerhost01-cert.pem \
--tlskey=/etc/pki/tls/dockerhost01-key.pem \
-H unix:///var/run/docker.sock \
-H tcp://0.0.0.0:2376"
# The Unix socket is used locally by Registrator;
# the TCP listener is used by Swarm and remote clients
# Start an Nginx node on the edge
$ docker run -d -p 80:80 \
--name edge_webserver \
-e constraint:zone==external \
-e constraint:disk==ssd \
-t nginx:latest
# Start an app container on the same Docker host
$ docker run -d -p 5000 \
--name app_1 \
-e affinity:container==edge_webserver \
-t app:latest
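With classic Swarm, `docker ps` against the manager prefixes each container name with the Docker host it landed on, so you can confirm that the affinity placed both containers together. A sketch, again assuming the manager endpoint used elsewhere in these notes:

```shell
# Ask the Swarm manager where the containers ended up;
# names come back prefixed with the host, e.g.
#   dockerhost01/edge_webserver
#   dockerhost01/app_1
docker -H tcp://dockerswarm01:2376 ps
```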
...but now with:
Links
(Extra Credit)
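Classic Swarm also understands `--link`: linking to a container that already runs on the cluster implies an affinity, so the new container is scheduled onto the same Docker host. A sketch reusing the container names from above (`app_2` and the `edge` alias are illustrative):

```shell
# Linking implies an affinity in classic Swarm:
# app_2 lands on the same host as edge_webserver
docker run -d -p 5000 \
  --name app_2 \
  --link edge_webserver:edge \
  -t app:latest
```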
- Docker Hosts running Ubuntu
- Docker daemon
- Listening on TCP + the Unix socket
- Configured w/ tags
- Swarm daemon
- Pointed at Consul k/v
- /swarm/<dockerip>:<port>
- Registrator daemon
- Pointed at Consul k/v
- /services/<name>-<port>/dockerhost:container:port
- Monitoring tools (DataDog)
# dockerhost[01:03]
# Docker
/usr/bin/docker -d \
--dns=8.8.8.8 \
--dns=8.8.4.4 \
--label status=master \
--label disk=ssd \
--label zone=internal \
--tlsverify \
--tlscacert=/etc/pki/tls/ca.pem \
--tlscert=/etc/pki/tls/dockerhost01-cert.pem \
--tlskey=/etc/pki/tls/dockerhost01-key.pem \
-H unix:///var/run/docker.sock \
-H tcp://0.0.0.0:2376
# Swarm
/usr/local/bin/swarm join \
--addr=10.100.199.201:2376 \
--discovery consul://dockerswarm01/swarm
# Registrator
/usr/local/bin/registrator \
-ip=10.100.199.201 \
consul://dockerswarm01/services
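You can inspect what the daemons wrote to Consul through its HTTP KV API. A hedged sketch, assuming Consul is reachable on `dockerswarm01` at the default port 8500:

```shell
# List the keys Swarm registered under /swarm
curl -s 'http://dockerswarm01:8500/v1/kv/swarm?keys'

# List the service entries Registrator wrote under /services
curl -s 'http://dockerswarm01:8500/v1/kv/services?keys'
```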
- Running Swarm
- Connects the Docker hosts
- Provides a single interface
- Must be able to connect to all Docker hosts
- Handles only distribution of Docker commands, not proxying of network traffic
- Running containers are not affected by a Swarm outage
# dockerswarm01
# Swarm daemon
/usr/local/bin/swarm manage \
--tlsverify \
--tlscacert=/etc/pki/tls/ca.pem \
--tlscert=/etc/pki/tls/swarm-cert.pem \
--tlskey=/etc/pki/tls/swarm-key.pem \
-H tcp://0.0.0.0:2376 \
--strategy random \
--discovery consul://dockerswarm01/swarm
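Clients then talk to the manager exactly as they would a single TLS-protected Docker daemon. A sketch using the same CA as the configs above; the client certificate pair is an assumption for this example:

```shell
# Treat the whole cluster as one Docker endpoint
docker --tlsverify \
  --tlscacert=/etc/pki/tls/ca.pem \
  --tlscert=/etc/pki/tls/client-cert.pem \
  --tlskey=/etc/pki/tls/client-key.pem \
  -H tcp://dockerswarm01:2376 info
```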
- Provided by Consul
- Key / Value store
- `/swarm` for Swarm
- Used by the Swarm master to maintain the cluster
- `/services` for Registrator
- Used by the routing layer to route traffic to services
Data under /swarm in Consul
Data under /services in Consul
The Swarm Cluster
Registrator's Services
- Hosts running:
- Consul-template: to poll Consul's k/v store and generate configs based on what is in /services
- Nginx/HAProxy: to route traffic to the proper Docker host and port for the application
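To make the routing layer concrete, here is a minimal Consul-template input that could render an Nginx upstream from the `/services` keys. This is a sketch under two assumptions: the key layout follows the `/services/<name>-<port>/...` convention above, and each key's value holds an `ip:port` pair written by Registrator.

```
# nginx.ctmpl -- illustrative Consul-template input
upstream app {
{{ range ls "services/app-5000" }}  server {{ .Value }};
{{ end }}}

server {
  listen 80;
  location / {
    proxy_pass http://app;
  }
}
```

Consul-template re-renders this file and reloads Nginx whenever a container is added to or removed from `/services`, so the routing layer tracks the cluster without manual config changes.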