http://slides.com/dalealleshouse/kube-azure
Dale Alleshouse
@HumpbackFreak
hideoushumpbackfreak.com
github.com/dalealleshouse
What is Kubernetes?
Kubernetes Goals
Kubernetes Basic Architecture
Awesome Kubernetes Demo!
What Now?
https://github.com/dalealleshouse/zero-to-devops/tree/azure
Create a new resource group
RESOURCE_GROUP=kube-demo
LOCATION=eastus
az group create --name=$RESOURCE_GROUP --location=$LOCATION
Create Cluster
DNS_PREFIX=kube-demo
CLUSTER_NAME=kube-demo
az acs create --orchestrator-type=kubernetes --resource-group=$RESOURCE_GROUP \
--name=$CLUSTER_NAME --dns-prefix=$DNS_PREFIX --generate-ssh-keys
# Install kubectl
sudo az acs kubernetes install-cli
# Authorize and configure for new cluster
az acs kubernetes get-credentials --resource-group=$RESOURCE_GROUP --name=$CLUSTER_NAME
Install and configure
# View ~/.kube/config
kubectl config view
# Verify cluster is configured correctly
kubectl get cs
Verify
# Create deployments for the three internal components (consumer, producer, queue)
kubectl run java-consumer --image=dalealleshouse/java-consumer:1.0
kubectl run ruby-producer --image=dalealleshouse/ruby-producer:1.0
kubectl run queue --image=rabbitmq:3.6.6-management
# View the pods created by the deployments
kubectl get pods
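# The deployment objects themselves can be listed as well
kubectl get deployments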
# Run Docker-style commands against the containers
kubectl exec -it *POD_NAME* -- bash
kubectl logs *POD_NAME*
Services provide a durable endpoint
# Notice the java-consumer cannot connect to the queue
kubectl logs *java-consumer-pod*
# The following command makes the queue discoverable inside the cluster via the name queue
kubectl expose deployment queue --port=15672,5672 --name=queue
# Running the logs command again shows that the consumer is connected now
kubectl logs *java-consumer-pod*
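For reference, the expose command above is roughly equivalent to a Service definition like this (a minimal sketch; the actual files live in the kube directory):
apiVersion: v1
kind: Service
metadata:
  name: queue
spec:
  selector:
    run: queue
  ports:
  - name: management
    port: 15672
  - name: amqp
    port: 5672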
Create REST API Deployment and Load Balancer
# Create deployment
kubectl run status-api --image=dalealleshouse/status-api:1.0 --port=5000
# Create Service
kubectl expose deployment status-api --port=80 --target-port=5000 --name=status-api \
--type=LoadBalancer
# Watch for service to become available
watch 'kubectl get svc'
Create Load Balancer Service for Front End with env var pointing to REST API
kubectl run html-frontend --image=dalealleshouse/html-frontend:1.0 --port=80 \
--env STATUS_HOST=*STATUS-HOST-ADDRESS*
kubectl expose deployment html-frontend --port=80 --name=html-frontend --type=LoadBalancer
The preferred alternative to imperative shell commands is storing configuration in YAML files; see the kube directory
# Delete all objects made previously
# Each object has a matching file in the kube directory
kubectl delete -f kube/
# Recreate everything
kubectl create -f kube/
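As a rough idea of the format, a Deployment file for the front end might look something like this (a sketch, not the exact contents of the repo):
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: html-frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        run: html-frontend
    spec:
      containers:
      - name: html-frontend
        image: dalealleshouse/html-frontend:1.0
        ports:
        - containerPort: 80
        env:
        - name: STATUS_HOST
          value: "*STATUS-HOST-ADDRESS*"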
K8S has a default dashboard
kubectl proxy
Navigate to http://127.0.0.1:8001/ui (kubectl proxy listens on port 8001 by default)
K8S will automatically load balance requests to a service between all replicas.
# Scale the html-frontend (NGINX) deployment to 3 replicas
kubectl scale deployment html-frontend --replicas=3
K8S can create replicas easily and quickly
K8S can scale based on load.
# Maintain between 1 and 5 replicas based on CPU usage
kubectl autoscale deployment java-consumer --min=1 --max=5 --cpu-percent=50
# Run this repeatedly to see the number of replicas created
# The "In Process" number on the web page will also reflect the number of consumer replicas
kubectl get pods -l run=java-consumer
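# Optionally, inspect the autoscaler itself; it shows current vs. target CPU and the replica count
kubectl get hpa java-consumer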
K8S will automatically restart any pods that die.
# View the html-frontend pods
kubectl get pods -l run=html-frontend
# Forcibly shut down a pod to simulate a node/pod failure
kubectl delete pod *POD_NAME*
# Replacement pods are created immediately
kubectl get pods -l run=html-frontend
If the endpoint check fails, K8S automatically kills the container and starts a new one
# Find front end pod
kubectl get pods -l run=html-frontend
# Simulate a failure by manually deleting the health check file
kubectl exec *POD_NAME* -- rm /usr/share/nginx/html/healthz.html
# Notice the restart
kubectl get pods -l run=html-frontend
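# The pod's events show the failed liveness probe that triggered the restart
kubectl describe pod *POD_NAME*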
...
livenessProbe:
  httpGet:
    path: /healthz.html
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 2
readinessProbe:
  httpGet:
    path: /healthz.html
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 2
Specify health and readiness checks in yaml
K8S updates one pod at a time, so there is no downtime during a deployment update
# Update the image on the deployment
kubectl set image deployment/html-frontend html-frontend=dalealleshouse/html-frontend:2.0
# Run repeatedly to see the number of available replicas
kubectl get deployments
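# The rollout can also be followed directly until it completes
kubectl rollout status deployment/html-frontend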
Viewing the HTML page now shows an error. K8S makes it easy to roll back deployments
# Roll back the deployment to the old image
kubectl rollout undo deployment html-frontend
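# View the revision history of the deployment
kubectl rollout history deployment/html-frontend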
az group delete -n kube-demo