@ryanwallner
ryan.wallner@clusterhq.com
IRC: wallnerring (freenode -> #clusterhq)
A stateless protocol does not require the server to retain session information or status about each communication partner across multiple requests. HTTP is a stateless protocol: the connection between the browser and the server is dropped once the transaction ends.
- Databases, Message Queues, Cache, Logs ...
Containers should be portable, even when they have state.
Stateful things scale vertically, stateless things scale horizontally
Ease of operational manageability
User Requests Volume
for Container
docker: "--volume-driver=flocker"
flocker: "volumes-cli create"
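Both paths end at the same place: a Flocker-managed dataset. A hedged sketch of each, assuming a volume named `myData`; the `flocker-volumes` command and its flags come from ClusterHQ's labs tooling and may differ by version:

```shell
# Via Docker: the Flocker volume driver provisions the dataset on first use
docker run -d --volume-driver=flocker -v myData:/data busybox top

# Via the Flocker CLI (command name and flags are assumptions
# based on the labs tooling)
flocker-volumes create --node <node-uuid> --size 10G
flocker-volumes list
```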
Storage Driver
Flocker requests storage to be automatically provisioned through its configured backend.
Storage Driver
A persistent storage volume is successfully created and ready to be given to a container application
Storage Driver
Persistent storage is mounted inside the container so the application can store information that will persist beyond the container's lifecycle.
Host
Host
Host
Host
Container fails, is rescheduled, and migrates
Host
Host
A new container is started on a new host; the volume is moved to that host so when the container starts it has the data it expects.
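The failover above can be sketched with plain Docker commands. This is a hedged illustration, assuming a Flocker-backed volume named `mydata` and placeholder host names, not the exact mechanics of any particular scheduler:

```shell
# The container is lost (or removed) on host1...
docker -H tcp://host1:2375 rm -f app

# ...and restarted on host2 referencing the same volume name.
# The Flocker agents detach the backing volume from host1, attach
# it to host2, and mount it before the container starts.
docker -H tcp://host2:2375 run -d --volume-driver=flocker \
  -v mydata:/var/lib/mysql --name app wallnerryan/mysql
```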
Linking
Expose Ports
Directly Using Storage
docker run --link <name or id>:<alias>
docker run -p 3306:3306 or docker run -P
docker run --volume-driver=flocker -v myCache:/data/nginx/cache
http {
...
proxy_cache_path /data/nginx/cache keys_zone=one:10m;
server {
proxy_cache one;
location / {
proxy_pass http://localhost:8000;
}
}
}
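The config above caches proxied responses under /data/nginx/cache. A minimal sketch of putting that directory on a Flocker volume so the cache outlives the container (assumes the nginx.conf above is baked into, or mounted into, the image):

```shell
# myCache is provisioned by Flocker and survives container restarts
docker run -d --volume-driver=flocker \
  -v myCache:/data/nginx/cache \
  -p 80:80 nginx
```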
https://github.com/wallnerryan/swarm-compose-flocker-aws-ebs
// Very easy to get started: http://doc-dev.clusterhq.com/labs/installer.html#labs-installer
uft-flocker-sample-files
uft-flocker-get-nodes --ubuntu-aws
uft-flocker-install cluster.yml && \
uft-flocker-config cluster.yml && \
uft-flocker-plugin-install cluster.yml
// Prep-work
NODE1=<public ip for node1>
NODE2=<public ip for node2>
NODE3=<public ip for node3>
MASTER=<public ip for master>
PNODE1=<private ip for node1>
PNODE2=<private ip for node2>
PNODE3=<private ip for node3>
KEY=/Path/to/your/aws/ec2/user.pem
chmod 0600 $KEY
// Joining the slaves
CLUSTERKEY=$(ssh -i $KEY root@$MASTER docker run --rm swarm create)
ssh -i $KEY root@$NODE1 docker run -d swarm join --addr=$PNODE1:2375 token://$CLUSTERKEY
ssh -i $KEY root@$NODE2 docker run -d swarm join --addr=$PNODE2:2375 token://$CLUSTERKEY
ssh -i $KEY root@$NODE3 docker run -d swarm join --addr=$PNODE3:2375 token://$CLUSTERKEY
// Starting the Master
ssh -i $KEY root@$MASTER docker run -d -p 2357:2375 swarm manage token://$CLUSTERKEY
// Point docker tools at your swarm master in AWS
export DOCKER_HOST=tcp://<your_swarm_master_public_ip>:2357
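With DOCKER_HOST pointed at the master, a quick sanity check that the Swarm is healthy before starting the app:

```shell
# "docker info" against the Swarm master should list all three
# joined nodes; "docker ps" shows containers cluster-wide.
docker info
docker ps
```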
// Start the app
./start_or_moveback.sh
(Add some data)
// When ready, move your application
./move.sh
web:
  image: wallnerryan/todolist
  environment:
    - DATABASE_IP=<Private IP Your Database Container will be scheduled to>
    - DATABASE=mysql
  ports:
    - 8080:8080
mysql:
  image: wallnerryan/mysql
  volume_driver: flocker
  volumes:
    - 'todolist:/var/lib/mysql'
  environment:
    - constraint:node==<Node Name Your Database Container will be scheduled to>
  ports:
    - 3306:3306
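The start_or_moveback.sh and move.sh scripts presumably wrap something like the following (an assumption; with DOCKER_HOST pointed at the Swarm master, Compose schedules onto the cluster):

```shell
# Bring the stack up on the Swarm and check where it landed
docker-compose up -d
docker-compose ps
```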