K8s - let's deploy an application
Lunch & Learn
by Kamil Galuszka
Master
- It exposes all k8s APIs
- All kubectl commands go through it
- It schedules and creates different k8s objects on Nodes
Nodes
- They are where k8s objects are scheduled & created
- They expose available CPU & memory
- They can be grouped into pools of nodes based on specific characteristics (like high-memory nodes, CPU, etc.)
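Such pools are typically expressed with node labels. A minimal sketch (the node name `minikube` is illustrative; the label matches the `nodeSelector` used later in this deck):

```shell
# Label a node so Pods with a matching nodeSelector land on it
kubectl label node minikube type=cpu-intensive

# List the nodes in that "pool" by label
kubectl get nodes -l type=cpu-intensive
```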
Master - Node relationships
- Declarative API (yay!)
- They are described in YAML / JSON
- You can easily edit them in place ( `kubectl edit` is Your friend )
- Some of the objects are immutable so make sure You check that before editing
- Most k8s objects have to live in a given namespace (a few are cluster-scoped)
- You can use namespaces to logically divide different environments
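For instance, the `staging` environment used later in this deck can be a namespace of its own; a minimal manifest sketch:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```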
Pods
- Spin up a container (or multiple containers)
- Govern IP/networking
- Normally You don't deploy them directly

In other words, a Pod is a single instance of an application
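Put together, a minimal Pod manifest might look like this sketch (the Pod name is made up; the image is the worker image from this demo):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: worker               # hypothetical name
  namespace: staging
spec:
  containers:
    - name: worker
      image: worker-nodejs:latest   # image built locally in this demo
      imagePullPolicy: Never        # don't pull; use the local image
```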
Some of the properties of Pods

```yaml
nodeSelector:
  type: 'cpu-intensive'
---
resources:
  requests:
    memory: "64Mi"
    cpu: "250m"
  limits:
    memory: "128Mi"
    cpu: "500m"
---
envFrom:
  - configMapRef:
      name: worker-configmap
---
image: worker-nodejs:latest
imagePullPolicy: Never
```
ReplicaSet and Deployment
- A ReplicaSet ensures that a specified number of pod replicas are running at any given time
- A Deployment controller provides declarative updates for Pods and ReplicaSets.
- ReplicaSets manage scaling of Pods
- Pods are born & killed by ReplicaSets. They are never resurrected
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: orchestrator
  labels:
    name: orchestrator-ruby
    role: orchestrator
spec:
  replicas: 2
  selector:
    matchLabels:
      name: orchestrator-ruby
      role: orchestrator
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 10%
      maxSurge: 10%
  template:
    metadata:
      name: orchestrator-ruby
      labels:
        name: orchestrator-ruby
        role: orchestrator
    spec:
      containers:
        - name: orchestrator-ruby
          image: orchestrator-ruby:latest
          imagePullPolicy: Never
          resources:
            requests:
              cpu: 40m
              memory: 50Mi
          readinessProbe:
            httpGet:
              path: /healthcheck
              port: 80
            initialDelaySeconds: 15
            failureThreshold: 2
          livenessProbe:
            httpGet:
              path: /healthcheck
              port: 80
            initialDelaySeconds: 15
            failureThreshold: 2
          envFrom:
            - configMapRef:
                name: orchestrator-configmap
          ports:
            - name: a-http
              containerPort: 80
```
Services
- Grouping of pods is done by a label selector
- A Service routes requests/connections to specific pods
```yaml
kind: Service
apiVersion: v1
metadata:
  name: orchestrator-ruby
  labels:
    name: orchestrator-ruby
    role: orchestrator
spec:
  ports:
    - name: http
      port: 80
      targetPort: a-http
      protocol: TCP
  selector:
    name: orchestrator-ruby
    role: orchestrator
```
ConfigMaps and Secrets
- Non-sensitive ENV variables can be stored in a ConfigMap
- All sensitive ENV variables should be stored in Secrets (values are Base64-encoded)
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-configmap
  namespace: staging
data:
  REDIS_URL: redis://redis-staging-master/
```
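A matching Secret might look like the sketch below. The name and value are made up for illustration; note that Base64 is just an encoding, not encryption (`cGFzc3dvcmQ=` is Base64 for `password`):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-secret          # hypothetical name
  namespace: staging
type: Opaque
data:
  REDIS_PASSWORD: cGFzc3dvcmQ=   # Base64 of "password"
```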
Jobs & CronJobs
- There are 2 types of these objects:
  - Job (scheduled once)
  - CronJob (scheduled based on cron)
- The Job controller spins up Pods and ensures they run to successful completion
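A CronJob sketch, assuming a recent cluster (`batch/v1`; older clusters used `batch/v1beta1`). The job name and command are hypothetical; the image reuses the demo's worker image:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cleanup                 # hypothetical job
spec:
  schedule: "0 * * * *"         # every hour, on the hour
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: cleanup
              image: worker-nodejs:latest        # demo image
              imagePullPolicy: Never
              command: ["node", "cleanup.js"]    # hypothetical command
```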
Package manager for k8s
- Helm uses charts to deploy into the cluster
- A lot of different charts are available as open source, for example:
  - Elasticsearch, etc.
- Most of them shouldn't be used in production as-is
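As a sketch, installing one of those open-source charts could look like this (Helm 3 syntax; Helm 2, which this deck's `make helm-init` suggests, uses `helm install --name` instead; repo and chart names may differ):

```shell
# Add a public chart repository and refresh the local index
helm repo add elastic https://helm.elastic.co
helm repo update

# Install the chart as a named release into the staging namespace
helm install elasticsearch elastic/elasticsearch --namespace staging
```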
Sneak peek of this tool.
```shell
kubectl apply -f file.yaml
kubectl describe pod <name_of_the_pod>
kubectl get pods -n <namespace>
kubectl edit deployment <name> -n <namespace>
kubectl delete pod <name> -n <namespace>
kubectl logs <pod_name> -n <namespace>

# -n is Your friend. But if You don't want
# to specify a namespace with every command, use `kubens`
```
Let's build an app
That is ...
...built from 3 microservices
- Ruby Orchestrator
- Python API
- NodeJS Worker
...it's deployed to
- Redis as storage
Ruby (Sinatra)
- GET /healthcheck
- GET /am-i-famous?name=<person_name>
- GET /result/<job_id>
- DELETE /result/<job_id>

Python (Flask)
- GET /healthcheck
- POST /job/
- GET /result/<job_id>
- DELETE /result/<job_id>
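Assuming the `k8s-dreams-dev` hostname from the /etc/hosts snippet in this deck, exercising the Ruby orchestrator might look like this (the person name and job id are illustrative):

```shell
# Ask the orchestrator whether someone is famous; this should enqueue a job
curl "http://k8s-dreams-dev/am-i-famous?name=Alan+Turing"

# Poll for the result using the returned job id (id is made up here)
curl "http://k8s-dreams-dev/result/42"

# Clean up the stored result
curl -X DELETE "http://k8s-dreams-dev/result/42"
```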
- Sinatra (Orchestrator)
- Flask (API)
- Commander (Worker)
How to run it?
```shell
$ make minikube-start
$ docker-compose build
$ make helm-init
$ make create-dev
$ make create-staging

# To cleanup/shutdown the whole thing
$ make delete-dev
$ make delete-staging
$ make minikube-stop
```
```
127.0.0.1       localhost
255.255.255.255 broadcasthost
::1             localhost

# For this tutorial; on Linux you probably
# have to change it to 127.0.0.1
192.168.99.100  k8s-dreams-dev
192.168.99.100  k8s-dreams-staging
```
We are hiring!