Decisions you'll make after drinking the Kool-Aid
Sr. Janitor @ flyinprogrammer
Except we just claimed there exists a system which will successfully centralize an intentionally decentralized architecture.
When in high-availability mode, Mesos requires a Zookeeper cluster.
To run both clusters with high availability, we must run a minimum of 3 nodes of each service.
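The minimum of 3 falls out of majority quorum: a cluster of n voters stays available only while floor(n/2) + 1 members are up. A quick sketch of the arithmetic:

```shell
# Majority quorum: a cluster of n voters needs floor(n/2) + 1 members up,
# so it tolerates floor((n - 1) / 2) failures.
for n in 1 3 5; do
  echo "$n nodes -> quorum $(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
```

Two nodes are no better than one here: a quorum of 2 means either failure halts the cluster, which is why 3 is the practical minimum.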
[Diagram: a 3-node Mesos cluster alongside a 3-node Zookeeper cluster]
When we run our Marathon framework on top of Mesos, it also relies on Zookeeper to maintain state and coordinate leader election.
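For concreteness, all three services typically point at the same Zookeeper ensemble, each under its own znode path. A sketch of the relevant flags, where zk1–zk3 are placeholder hostnames, not from this deck:

```
# Hypothetical 3-node Zookeeper ensemble; zk1-zk3 are placeholders.
mesos-master --zk=zk://zk1:2181,zk2:2181,zk3:2181/mesos --quorum=2

# Marathon finds the Mesos leader via one znode path and keeps
# its own state under another:
marathon --master zk://zk1:2181,zk2:2181,zk3:2181/mesos \
         --zk zk://zk1:2181,zk2:2181,zk3:2181/marathon
```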
[Diagram: a 3-node Marathon cluster running on a 3-node Mesos cluster, backed by a 3-node Zookeeper cluster]
Zookeeper/Mesos/Marathon Pods

[Diagram: four pods, each running its own Marathon, Mesos, and Zookeeper stack — not-prod-pod-1 and not-prod-pod-2 under a Not Production SLA, prd-pod-1 and prd-pod-2 under a Production SLA]
Support Services
Artifact Repository
Source Code Repository
Docker Registry
Logging Storage and Analytics
Metric Storage and Analytics
Service Discovery
Load Balancing
Orchestration
Monitoring and Alerting
Build System
Storage
Data Streaming
Automated Recovery
Automated Deployment
Secrets
How will we choose to implement each part of this stack?
Can my existing choices handle the ephemeral nature of containers?
Which services will be pod-, availability-zone-, or region-specific?
How do we incorporate security?
How will we educate our engineering group?
Will all this change actually solve a real business problem?
Let's tackle these challenges one at a time.
Layers of Trust

[Diagram: layers of trust spanning the Code Repo, Continuous Integration, the Container Registry, Continuous Delivery, the Container itself, the Secret Service, and Secret Storage]
Typical Application Port Mapping

[Diagram: the application binds to 0.0.0.0:8080 inside the container; the Docker bridge maps it to host_ip:31000; clients reach it with `curl host:31000`]
RMI Server Port Mapping

[Diagram: the RMI server binds to 0.0.0.0:31000 inside the container, so the container port matches the bridge-mapped host_ip:31000; clients reach it with `curl host:31000`]
"portMappings": [{
  "containerPort": 8080,
  "hostPort": 0,
  "servicePort": 0,
  "protocol": "tcp",
  "name": "api",
  "labels": {}
}]
"portMappings": [{
  "containerPort": 0,
  "hostPort": 0,
  "servicePort": 0,
  "protocol": "tcp",
  "name": "rmi",
  "labels": {}
}]
Setting containerPort to 0 tells Marathon to assign it the same value as the randomly assigned hostPort, which is exactly what an RMI server needs.
Typical Application
Port Mapping
RMI Server
Port Mapping
# Specify what address our applications should bind to.
SO_BIND_ADDR=${SO_BIND_ADDR:-0.0.0.0}

# If we've set RMI_PORT, then we probably want to do RMI Port things.
if [ -n "$RMI_PORT" ]; then
  # We need the hostname RMI is started with to match the hostname we will
  # access it with. If the user supplies something explicit, use that; else use
  # HOST, which Marathon sets to the agent hostname. Otherwise, fall back to
  # localhost for when this script is used outside of Docker.
  addJvmParameter java.rmi.server.hostname ${RMI_HOST:-${HOST:-localhost}}

  # If RMI_PORT is set to a PORT{int} variable name, patch it with the real
  # port and export a PORT_{int}={int} pair.
  if [[ $RMI_PORT = "PORT"* ]]; then
    export RMI_PORT=$(($RMI_PORT))
    # Marathon does not map PORT_{PORT NUMBER} for ephemeral ports.
    export PORT_$RMI_PORT=$RMI_PORT
  fi

  # Have our RMIRegistry and JMXConnectorServer bind to the socket address
  # which will be routable through the Docker bridge.
  addJvmParameter jetty.jmxrmihost $SO_BIND_ADDR
  # Use our final RMI_PORT.
  addJvmParameter jetty.jmxrmiport ${RMI_PORT}
fi
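The PORT{int} indirection above leans on shell arithmetic expansion, which resolves a variable name to its numeric value. A minimal sketch with a made-up port value:

```shell
# Pretend Marathon injected PORT0=31000 into the environment,
# and the user configured RMI_PORT to reference it by name.
export PORT0=31000
RMI_PORT="PORT0"

# Arithmetic expansion evaluates the string "PORT0" as a variable name.
RMI_PORT=$(($RMI_PORT))
echo "$RMI_PORT"   # prints 31000
```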
ROOT_PASSWORD=hunter8
Environment Variable Mapping
Currently I typically use 'host' or 'bridge' networking.
"portMappings": [{
"name": "foo",
"labels": {},
"containerPort": 8081,
"hostPort": 0,
"servicePort": 0,
"protocol": "tcp"
}, {
"name": "bar",
"labels": {},
"containerPort": 8082,
"hostPort": 0,
"servicePort": 0,
"protocol": "tcp"
}]
curl master:5050/tasks
"discovery": {
"name": "app1",
"ports": {
"ports": [{
"name": "foo",
"number": 31792,
"protocol": "tcp"
}, {
"name": "bar",
"number": 31793,
"protocol": "tcp"
}]
}
},
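To script against that payload, the advertised port numbers can be pulled out with standard text tools. This is a sketch over a saved copy of the fragment above rather than a live master:

```shell
# A saved copy of the discovery fragment from master:5050/tasks.
cat > /tmp/tasks.json <<'EOF'
{"discovery":{"name":"app1","ports":{"ports":[
  {"name":"foo","number":31792,"protocol":"tcp"},
  {"name":"bar","number":31793,"protocol":"tcp"}]}}}
EOF

# Pull out the advertised port numbers: 31792 and 31793.
grep -o '"number":[0-9]*' /tmp/tasks.json | cut -d: -f2
```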
vagrant@mesos:~$ curl localhost:8123/v1/hosts/mps.v100.test-app.marathon.mesos.
[
{
"host": "mps.v100.test-app.marathon.mesos.",
"ip": "172.17.0.2"
},
{
"host": "mps.v100.test-app.marathon.mesos.",
"ip": "172.17.0.4"
},
{
"host": "mps.v100.test-app.marathon.mesos.",
"ip": "172.17.0.3"
}
]
vagrant@mesos:~$ curl localhost:8123/v1/services/_mps.v100.test-app._tcp.marathon.mesos.
[
{
"service": "_mps.v100.test-app._tcp.marathon.mesos.",
"host": "mps.v100.test-app-xc4p5-s0.marathon.mesos.",
"ip": "172.17.0.2",
"port": "31893"
},
{
"service": "_mps.v100.test-app._tcp.marathon.mesos.",
"host": "mps.v100.test-app-xc4p5-s0.marathon.mesos.",
"ip": "172.17.0.2",
"port": "31894"
},
...
"dns_config": {
"node_ttl": "10s",
"allow_stale": true,
"max_stale": "10s",
"service_ttl": {
"*": "10s"
}
}
docker run -d \
-e PORTS=9090 \
--net=host \
mesosphere/marathon-lb \
sse \
-m http://master:8080 \
--health-check \
--group external
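From those servicePort assignments, marathon-lb renders an HAProxy config. The fragment below is an illustrative sketch of the kind of frontend/backend pair generated for servicePort 10000 — the names and agent address are made up, not marathon-lb's exact output:

```
frontend app1_10000
  bind *:10000
  mode http
  use_backend app1_10000

backend app1_10000
  # One server line per running task, pointing at the agent's mapped hostPort.
  server agent1_31792 10.0.1.12:31792 check
```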
{
  "id": "/app1",
  "labels": { "HAPROXY_GROUP": "external" },
  "container": {
    "docker": {
      "image": "flyinprogrammer/mps",
      "network": "BRIDGE",
      "portMappings": [{
        "containerPort": 8081,
        "hostPort": 0,
        "servicePort": 10000,
        "protocol": "tcp",
        "name": "foo",
        "labels": {}
      }, {
        "containerPort": 8082,
        "hostPort": 0,
        "servicePort": 10001,
        "protocol": "tcp",
        "name": "bar",
        "labels": {}
      }]
    }
  }
}
Marathon Configuration
If a servicePort value is assigned by Marathon, Marathon guarantees that the value is unique across the cluster.
ab -n 100000 -c 20 http://54.186.59.17:10000/
ab -n 100000 -c 20 http://54.186.59.17:10001/
And many more.