http://slides.com/dalealleshouse/kube-pi
Dale Alleshouse
@HumpbackFreak
hideoushumpbackfreak.com
github.com/dalealleshouse
What is Kubernetes?
Kubernetes Goals
Kubernetes Basic Architecture
Awesome Kubernetes Demo!
What Now?
https://github.com/dalealleshouse/zero-to-devops/tree/pi
# Create a deployment for each container in the demo system
kubectl run html-frontend --image=dalealleshouse/html-frontend:1.0 --port=80 \
--env STATUS_HOST=k8-master:31111 --labels="app=html-frontend"
kubectl run java-consumer --image=dalealleshouse/java-consumer:1.0
kubectl run ruby-producer --image=dalealleshouse/ruby-producer:1.0
kubectl run status-api --image=dalealleshouse/status-api:1.0 --port=5000 \
--labels="app=status-api"
kubectl run queue --image=arm32v6/rabbitmq:3.7-management-alpine
# View the pods created by the deployments
kubectl get pods
# Run Docker-style commands against the containers
kubectl exec -it *POD_NAME* -- bash
kubectl logs *POD_NAME*
Services provide a durable endpoint
# Notice the java-consumer cannot connect to the queue
kubectl logs *java-consumer-pod*
# The following command makes the queue discoverable via the name queue
kubectl expose deployment queue --port=15672,5672 --name=queue
# Running the command again shows the consumer is now connected
kubectl logs *java-consumer-pod*
The command above only creates an internal endpoint; the commands below create an externally accessible one
# Create an endpoint for the HTML page as the REST API Service
kubectl create service nodeport html-frontend --tcp=80:80 --node-port=31112
kubectl create service nodeport status-api --tcp=80:5000 --node-port=31111
The website is now externally accessible at the cluster endpoint
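For reference, the first NodePort service above could also be expressed as a manifest. This is a sketch based only on the flags in the command (`--tcp=80:80 --node-port=31112`); the selector assumes the `app=html-frontend` label applied when the deployment was created:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: html-frontend
spec:
  type: NodePort
  selector:
    app: html-frontend       # assumed to match the deployment's label
  ports:
    - port: 80               # service port inside the cluster
      targetPort: 80         # container port
      nodePort: 31112        # externally accessible port on every node
```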
The preferred alternative to running shell commands is storing configuration in YAML files. See the kube directory
# Delete all objects made previously
# Each object has a matching file in the kube directory
kubectl delete -f kube/
# Recreate everything
kubectl create -f kube/
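As an illustration of what those files contain, here is a minimal sketch of a deployment manifest for html-frontend, reconstructed from the `kubectl run` flags used earlier (image, port, env var, and label); every other field value is an assumption:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: html-frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: html-frontend
  template:
    metadata:
      labels:
        app: html-frontend
    spec:
      containers:
        - name: html-frontend
          image: dalealleshouse/html-frontend:1.0
          ports:
            - containerPort: 80
          env:
            - name: STATUS_HOST
              value: k8-master:31111
```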
K8S ships with a default dashboard and default cluster monitoring provided by Heapster
# configuration specific - most cloud providers have something similar
# create a tunnel from the cluster to the local machine
kubectl proxy
# Get the authentication token
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | \
grep admin-user | awk '{print $1}')
http://localhost:8001/ui
K8S automatically load balances requests to a service across all replicas. Refreshing the HTML page reveals different pod names.
# Scale the html-frontend (NGINX) deployment to 3 replicas
kubectl scale deployment html-frontend --replicas=3
K8S can create replicas easily and quickly
K8S can scale based on load.
# Maintain between 1 and 5 replicas based on CPU usage
kubectl autoscale deployment java-consumer --min=1 --max=5 --cpu-percent=50
# Run this repeatedly to see # of replicas created
# Also, the "In Process" number on the web page will reflect the number of replicas
kubectl get deployments
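The `kubectl autoscale` command above creates a HorizontalPodAutoscaler object behind the scenes. A sketch of the equivalent manifest, using only the values from the command (min 1, max 5, 50% CPU):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: java-consumer
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: java-consumer
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 50
```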
K8S will automatically restart any pods that die.
# Find nodes running html-frontend
kubectl describe nodes | grep -E 'html-frontend|Name:'
# Simulate a failure by unplugging a network cable
# Pods are automatically regenerated
kubectl get pods
If the liveness probe fails, K8S automatically kills the container and starts a new one
# Find front end pod
kubectl get pods
# Simulate a failure by manually deleting the health check file
kubectl exec *POD_NAME* -- rm /usr/share/nginx/html/healthz.html
# Notice the restart
kubectl get pods
# See the restart in the event log
kubectl get events | grep *POD_NAME*
...
livenessProbe:
  httpGet:
    path: /healthz.html
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 2
readinessProbe:
  httpGet:
    path: /healthz.html
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 2
Specify liveness and readiness checks in YAML
K8S updates one pod at a time so there is no downtime during updates
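The one-pod-at-a-time behavior is governed by the deployment's update strategy. A sketch of the relevant spec fragment (the values shown here are illustrative defaults, not taken from the demo manifests):

```yaml
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 1   # at most one pod down at a time
    maxSurge: 1         # at most one extra pod during the rollout
```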
# Update the image on the deployment
kubectl set image deployment/html-frontend html-frontend=dalealleshouse/html-frontend:2.0
# Run repeatedly to see the number of available replicas
kubectl get deployments
Viewing the HTML page now shows an error. K8S makes it easy to roll back deployments
# Roll back the deployment to the old image
kubectl rollout undo deployment html-frontend
DON'T FORGET THE SURVEY!!