Abdullah Fathi
Download Links
Manual deployment of containers is hard to maintain, error-prone, and tedious (even beyond security and configuration concerns).
Containers might crash or go down and need to be replaced
We might need more container instances upon traffic spikes
Incoming traffic should be distributed equally
Container healthchecks + automatic re-deployment
Autoscaling
Load Balancer
Problems we are facing
Kubernetes to The Rescue
An open-source system for orchestrating container deployments
Automatic Deployment
Scaling & Load Balancing
Management
The scheduler watches for newly created Pods that have no Node assigned yet and assigns each Pod to run on a specific node.
The API server allows you to interact with the Kubernetes API; it is the front end of the Kubernetes control plane.
The controller manager runs controllers: background loops that carry out routine tasks in the cluster.
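To make the scheduler's role concrete, here is a minimal Pod sketch (the Pod name is hypothetical); when the Pod is created, spec.nodeName is empty, and the scheduler fills it in after choosing a node:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod                  # hypothetical name for illustration
spec:
  # The scheduler sets spec.nodeName after picking a node;
  # setting it manually would bypass the scheduler.
  containers:
    - name: demo
      image: nginx:latest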
Rancher is a platform for managing Kubernetes clusters through a web interface
- Set PATH environment variable
- Import kubeconfig file
- Set default kubeconfig: $env:KUBECONFIG = "D:\FOTIA\training\kubernetes\microk8s-config.yaml"
- Verify connection: kubectl cluster-info
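For reference, the imported kubeconfig file (e.g. microk8s-config.yaml) roughly follows this structure; all values below are placeholders, not the real file:
apiVersion: v1
kind: Config
clusters:
  - name: microk8s-cluster
    cluster:
      server: https://<cluster-ip>:16443           # placeholder API server endpoint
      certificate-authority-data: <base64-ca-cert> # placeholder
contexts:
  - name: microk8s
    context:
      cluster: microk8s-cluster
      user: admin
current-context: microk8s
users:
  - name: admin
    user:
      token: <token>                               # placeholder credential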
Feature | Deployment | ReplicaSet |
---|---|---|
Rollout Management | Yes (rolling updates, rollbacks) | No |
Scaling | Yes | Yes |
Self-Healing | Yes | Yes |
Label Selector | Yes | Yes |
History/Revisions | Yes | No |
Main Use Case | Automated updates and scaling | Ensuring desired number of pods |
Use with StatefulSet | No | No |
Use with DaemonSet | No | No |
apiVersion: apps/v1
kind: Deployment
metadata:
name: my-deployment
spec:
replicas: 3
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:v1
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: my-daemonset
spec:
selector:
matchLabels:
app: my-app
template:
metadata:
labels:
app: my-app
spec:
containers:
- name: my-container
image: my-image:v1
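The table above compares Deployments with ReplicaSets; for completeness, here is a minimal ReplicaSet sketch. In practice you rarely create one directly, because a Deployment creates and manages ReplicaSets for you:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-replicaset           # hypothetical name for illustration
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-container
          image: my-image:v1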
Ways to configure resources in Kubernetes
The imperative approach involves directly running kubectl commands to create, update, or delete resources; the declarative approach, covered in the configuration-file section below, describes the desired state in YAML files applied with kubectl apply -f (see the sketch after the command listing below).
# Create Deployment
kubectl create deployment first-app --image=<image from registry>
kubectl get deployments
kubectl get pods
# Create a Service by exposing the Deployment
kubectl expose deployment first-app --type=LoadBalancer --port=8080
kubectl get svc
# Scale Up the pods
kubectl scale deployments/first-app --replicas=3
# Rolling update
kubectl set image deployment/first-app kube-sample-app=<new image tag>
# Deployment Rollbacks
# Set a nonexistent image tag to force a failed rollout
kubectl set image deployment/first-app kube-sample-app=<nonexistent image tag>
kubectl rollout status deployment/first-app
kubectl rollout undo deployment/first-app
# Rollback older deployments
kubectl rollout history deployment/first-app
kubectl rollout history deployment/first-app --revision=1
kubectl rollout undo deployment/first-app --to-revision=1
# Delete Deployment
kubectl delete service first-app
kubectl delete deployment first-app
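The declarative equivalent of the imperative workflow above might look like the sketch below (the file name and image placeholder are illustrative); it would be applied with kubectl apply -f first-app.yaml and cleaned up with kubectl delete -f first-app.yaml:
# first-app.yaml (hypothetical file name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: first-app
  template:
    metadata:
      labels:
        app: first-app
    spec:
      containers:
        - name: first-app
          image: <image from registry>   # same placeholder as in the commands above
---
apiVersion: v1
kind: Service
metadata:
  name: first-app
spec:
  type: LoadBalancer
  selector:
    app: first-app
  ports:
    - port: 8080
      targetPort: 8080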
Kubernetes Configuration File
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels: ...
spec:
replicas: 2
selector: ...
template: ...
apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels: ...
spec:
selector: ...
ports: ...
Deployment
Service
1) Metadata
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels: ...
spec:
replicas: 2
selector: ...
template: ...
apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels: ...
spec:
selector: ...
ports: ...
2) Specification
Each configuration file has 3 parts
Attributes of "spec" are specific to the kind
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx-deployment
labels: ...
spec:
replicas: 2
selector: ...
template: ...
Each configuration file has 3 parts
3) Status (automatically generated by k8s)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels: ...
spec:
  replicas: 2
  selector: ...
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
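Once the Deployment is running, Kubernetes adds the status section to the live object (visible with kubectl get deployment nginx-deployment -o yaml); it looks roughly like this sketch:
status:
  observedGeneration: 1
  replicas: 2
  updatedReplicas: 2
  readyReplicas: 2
  availableReplicas: 2
  conditions:
    - type: Available
      status: "True"
    - type: Progressing
      status: "True"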
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
Deployment
Service
apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels: ...
spec:
selector:
app: nginx
ports: ...
Metadata contains labels
Specification contains the selector
Labels & Selectors
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
Deployment
Connecting Deployment to Pods
labels:
app: nginx
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
Deployment
Connecting Services to Deployments
Service
apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels: ...
spec:
selector:
app: nginx
ports: ...
The connection is made through the Service's selector matching the Pods' labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 8080
Deployment
Ports in Service and Pod
Service
apiVersion: v1
kind: Service
metadata:
name: nginx-service
labels: ...
spec:
selector:
app: nginx
ports:
- protocol: TCP
port: 80
targetPort: 8080
(Diagram: another Service sends a request to the Nginx Service on port 80, which forwards it to the Pod on targetPort 8080.)
targetPort: the port the request is forwarded to (the containerPort of the Deployment's Pod)
containerPort: the port the container in the Pod listens on
nodePort: a port in the range 30000-32767, opened on each node for external Services
An internal (ClusterIP) Service does not open any node IP address or port to the outside.
Kubernetes: External Service
apiVersion: v1
kind: Service
metadata:
name: system-a-external-service
spec:
selector:
app: system-a
type: NodePort
ports:
- protocol: TCP
port: 8080
targetPort: 8080
nodePort: 30001
YAML File: External Service
Makes the Service reachable from outside through each node's IP address at a static port (the nodePort)
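If the cluster has a load-balancer integration (a cloud provider or e.g. MetalLB), a type: LoadBalancer Service can be used instead to obtain an external IP address automatically; a sketch reusing the app label from the example above:
apiVersion: v1
kind: Service
metadata:
  name: system-a-loadbalancer-service   # hypothetical name
spec:
  selector:
    app: system-a
  type: LoadBalancer   # the cluster's LB integration assigns an external IP
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080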
Longhorn: highly available persistent storage for Kubernetes
# kubectl get storageclass longhorn -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: host-pvc
namespace: training-devops
spec:
accessModes:
- ReadWriteOnce
storageClassName: longhorn
resources:
requests:
storage: 1Gi
apiVersion: v1
kind: PersistentVolume
metadata:
name: example-nfs
spec:
capacity:
storage: 5Gi
accessModes:
- ReadWriteMany
nfs:
path: "/usr/local/path"
server: <nfs-server-ip>
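To actually use the storage, a workload references the PersistentVolumeClaim as a volume. A minimal sketch (the Pod name and mount path are hypothetical; the claim name matches host-pvc above):
apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo-pod               # hypothetical name
  namespace: training-devops
spec:
  containers:
    - name: app
      image: nginx:latest
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # hypothetical mount path
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: host-pvc        # the PVC defined above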
Kubernetes: Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: system-a-internal-service
                port:
                  number: 8080
YAML File: Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: system-a-internal-service
                port:
                  number: 8080
apiVersion: v1
kind: Service
metadata:
name: system-a-internal-service
spec:
selector:
app: system-a
ports:
- protocol: TCP
port: 8080
targetPort: 8080
Ingress and Internal Service Configuration
Configure Ingress in Kubernetes Cluster
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: system-a-internal-service
                port:
                  number: 8080
What is an Ingress Controller?
Ingress Controller behind a Proxy/Load Balancer
In this setup, no server in the Kubernetes cluster is directly reachable from outside; the external proxy/load balancer forwards incoming traffic to the Ingress Controller.
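The Ingress Controller (for example ingress-nginx, to which the rewrite-target annotation used below belongs) is the component that watches Ingress objects and actually routes the traffic. With networking.k8s.io/v1 you can select which controller handles an Ingress via spec.ingressClassName; a sketch assuming an IngressClass named nginx is installed in the cluster:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  ingressClassName: nginx          # assumes an IngressClass named "nginx" exists
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: system-a-internal-service
                port:
                  number: 8080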
Multiple paths for the same host
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: dashboard-service
                port:
                  number: 8080
          - path: /cart
            pathType: Prefix
            backend:
              service:
                name: cart-service
                port:
                  number: 3000
Multiple sub-domains or domains
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: dashboard.system-a.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: dashboard-service
                port:
                  number: 8080
    - host: cart.system-a.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: cart-service
                port:
                  number: 3000
Configure TLS Certificate (HTTPS)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
    - hosts:
        - system-a.fotia.com.my
      secretName: system-a-secret-tls
  rules:
    - host: system-a.fotia.com.my
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: system-a-internal-service
                port:
                  number: 8080
apiVersion: v1
kind: Secret
metadata:
name: system-a-secret-tls
namespace: default
data:
  tls.crt: <base64-encoded cert>
  tls.key: <base64-encoded key>
type: kubernetes.io/tls
HorizontalPodAutoscaler
apiVersion: apps/v1
kind: Deployment
metadata:
  name: afathi-second-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: afathi-second-app
  template:
    metadata:
      labels:
        app: afathi-second-app
        tier: frontend
    spec:
      containers:
        - name: second-nodejs
          image: fathich3k/kube-sample-app
          resources:
            limits:
              memory: 500Mi # 500 MiB
              cpu: 100m     # 0.1 CPU
            requests:
              memory: 250Mi
              cpu: 80m
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: afathi-frontend-service
spec:
  selector:
    app: afathi-second-app
  ports:
    - port: 80         # port the Service exposes
      targetPort: 8080 # port the container listens on
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: second-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: afathi-second-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
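The HorizontalPodAutoscaler above uses the autoscaling/v1 API; on current clusters the same object can also be written with autoscaling/v2, which replaces targetCPUUtilizationPercentage with a metrics list. A sketch of the equivalent definition:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: second-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: afathi-second-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50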
Requests specify the minimum amount of CPU and memory that Kubernetes guarantees to the container; the scheduler only places a Pod on a node with enough unreserved capacity to satisfy its requests.
Limits specify the maximum amount of CPU and memory the container may use; if the container tries to exceed them, it is throttled (for CPU) or terminated (for memory, an OOMKill).
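Tying this back to the container spec above (values are illustrative):
resources:
  requests:          # the scheduler uses these to choose a node
    cpu: 80m
    memory: 250Mi
  limits:            # the kubelet enforces these at runtime
    cpu: 100m        # exceeding the CPU limit leads to throttling
    memory: 500Mi    # exceeding the memory limit leads to an OOMKill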
Your feedback matters
There are no secrets to success. It is the result of preparation, hard work, and learning from failure. - Colin Powell