Abdullah Fathi

Download Links

Manual deployment of containers is hard to maintain, error-prone, and annoying (even beyond security and configuration concerns).

Problems we are facing

  • Containers might crash / go down and need to be replaced → Container health checks + automatic re-deployment
  • We might need more container instances upon traffic spikes → Autoscaling
  • Incoming traffic should be distributed equally → Load Balancer

Kubernetes to The Rescue

An open-source system for orchestrating container deployments

Automatic Deployment
Scaling & Load Balancing
Management

What is Kubernetes?

  • Kubernetes is a container orchestration platform that automates the deployment, management and scaling of application systems
  • Kubernetes provides high availability (HA) for container environments and supports
    self-healing and auto-scaling features

Kubernetes Architecture

  • Various components which help the Worker Nodes
  • The Master Node controls your deployment
  • Worker Nodes run the containers of your application
  • "Nodes" are your machines / virtual instances
  • Multiple Pods can be created and removed to scale your app

Master Node

Worker Node

  • Responsible for the overall management of the Kubernetes cluster.
  • Consists of three components that take care of communication, scheduling, and controllers:
    • The API Server
    • Scheduler
    • Controller Manager

Master Node

The Scheduler watches newly created Pods that have no Node assigned yet and selects a Node for each Pod to run on

Scheduler

Allows you to interact with the Kubernetes API; it is the front end of the Kubernetes control plane.

API Server

The Controller Manager runs controllers. These are background threads that run tasks in the cluster

Controller Manager

  • This is a distributed key-value store.
  • Kubernetes uses etcd as its backing database and stores all cluster data here.
  • Some of the information stored includes job scheduling info, Pod details and state information, among others

etcd

  • The kubelet handles communication between the Master Node and the Worker Node.
  • It’s the agent that communicates with the API server to see if Pods have been assigned to its Node

Kubelet

  • This is the network proxy and load balancer for services on a single Worker Node.
  • It handles network routing for TCP and UDP packets and performs connection forwarding.

Kube-proxy

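Once kubectl is set up (covered later), the components described above can be inspected. A hedged sketch: on kubeadm-style clusters the control-plane components run as Pods in the kube-system namespace, while MicroK8s runs most of them as host services.

# List the Master (control-plane) and Worker Nodes of the cluster
kubectl get nodes -o wide

# Control-plane and node components (API server, scheduler,
# controller manager, etcd, kube-proxy, ...) where they run as Pods
kubectl get pods -n kube-system -o wide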

What is Rancher?

Rancher is a platform for managing Kubernetes clusters through a web interface

Install Kubectl

Connect to remote k8s cluster

Windows

- Set PATH environment variable
- Import kubeconfig file
- Set default kubeconfig: $env:KUBECONFIG = "D:\FOTIA\training\kubernetes\microk8s-config.yaml"
- Verify connection: kubectl cluster-info
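A minimal PowerShell sketch of these steps (the kubectl folder is an assumed example path; the kubeconfig path is the one above):

# Make kubectl.exe available on PATH for this session (adjust to where kubectl.exe lives)
$env:Path += ";D:\tools\kubectl"

# Point kubectl at the imported kubeconfig file
$env:KUBECONFIG = "D:\FOTIA\training\kubernetes\microk8s-config.yaml"

# Verify the connection to the remote cluster
kubectl cluster-info
kubectl get nodes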

k9s

Terminal-based UI to interact with your Kubernetes clusters

Kubernetes Objects

Pod

  • Smallest unit of k8s (a minimal manifest sketch follows below)
  • Contains and runs one or multiple containers (usually 1 container per Pod)
  • Pods contain shared resources (e.g. volumes) for all Pod containers
  • Each Pod gets its own cluster-internal IP by default
  • New IP address on re-creation:
    Pods are ephemeral (k8s will start, stop and replace them as needed)
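A minimal Pod manifest sketch (the name and image are illustrative; in practice Pods are usually created indirectly via a Deployment, described next):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest    # single container, the usual case
    ports:
    - containerPort: 80    # port the container listens on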
Deployment

  • Controls (multiple) Pods
  • Blueprint for Pods
  • Abstraction of Pods
  • Databases can't easily be replicated via a Deployment (they are stateful; see StatefulSet)
  • A Deployment can be paused, deleted and rolled back
  • Manages and automates the rollout and scaling of applications
Deployment vs ReplicaSet

Feature               Deployment                         ReplicaSet
Rollout Management    Yes (rolling updates, rollbacks)   No
Scaling               Yes                                Yes
Self-Healing          Yes                                Yes
Label Selector        Yes                                Yes
History/Revisions     Yes                                No
Main Use Case         Automated updates and scaling      Ensuring desired number of pods
Use with StatefulSet  No                                 No
Use with DaemonSet    No                                 No

Deployment

  • Managing and orchestrating the distribution of pods across nodes in a cluster
  • Ability to ensure high availability by defining the desired number of replicas

DaemonSet

  • Distributing pods across nodes
  • Designed to ensure that a specific pod runs on every available node in a cluster
  • Does not have replicas; instead it guarantees that a single instance of the pod is running on every node
  • Ideal for scenarios where you need to run a specific system-level pod or agent on every node, such as monitoring agents or log collectors

Deployment

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:v1

DaemonSet

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-daemonset
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-container
        image: my-image:v1

Service

  • Permanent IP address and also a load balancer
  • Lifecycle of Pod and Service is not connected:
    Even if a Pod dies, the Service and its IP address will stay
  • Types of Service:
    • External Service: accessible from public requests
    • Internal Service: not exposed to public requests

Ingress

  • Exposes HTTP and HTTPS routes from outside the cluster to Services within the cluster.
ConfigMap

  • External configuration of the application
  • Don't put credentials into a ConfigMap

Secret

  • Used to store secret data
  • base64-encoded format
  • Reference the Secret in a Deployment/Pod (see the sketch below)
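A minimal sketch of a Secret and how a Pod (or the Pod template of a Deployment) references it. The names and values are illustrative; the value must be base64-encoded, e.g. echo -n 'password' | base64. A ConfigMap is referenced the same way, using configMapKeyRef instead of secretKeyRef.

apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=          # base64 of "password"
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    env:
    - name: DB_PASSWORD               # exposed to the container as an env variable
      valueFrom:
        secretKeyRef:
          name: db-secret
          key: DB_PASSWORD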
Volume

  • Data persistence
  • Storage on a local machine
    (on the same server node where the pod is running)
  • Remote storage outside of the k8s cluster

StatefulSet

  • Stateful apps or databases
  • Avoid data inconsistencies:
    Manage which Pod is writing to or reading from the storage (a minimal sketch follows)
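For contrast with a Deployment, a minimal StatefulSet sketch (all names are illustrative and not from the training material). Each replica gets a stable identity (web-0, web-1, ...) and its own PersistentVolumeClaim:

apiVersion: v1
kind: Service
metadata:
  name: web-headless
spec:
  clusterIP: None             # headless Service, required by the StatefulSet
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web-headless   # gives the Pods stable DNS names
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:       # each replica gets its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi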

Imperative VS Declarative

Ways to configure resources in Kubernetes

Imperative

The imperative approach involves directly running kubectl commands to create, update, or delete resources

# Create Deployment
kubectl create deployment first-app --image=<image from registry>
kubectl get deployments
kubectl get pods

# Create a Service by exposing the Deployment
kubectl expose deployment first-app --type=LoadBalancer --port=8080
kubectl get svc

# Scale Up the pods
kubectl scale deployments/first-app --replicas=3

# Rolling update 
kubectl set image deployment/first-app kube-sample-app=<new image tag>

# Deployment Rollbacks
# Set a nonexistent image tag
kubectl set image deployment/first-app kube-sample-app=<nonexistent image tag>
kubectl rollout status deployment/first-app
kubectl rollout undo deployment/first-app

# Rollback older deployments
kubectl rollout history deployment/first-app
kubectl rollout history deployment/first-app --revision=1
kubectl rollout undo deployment/first-app --to-revision=1

# Delete Deployment
kubectl delete service first-app
kubectl delete deployment first-app

Kubernetes Configuration File

Declarative

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels: ...
spec:
  replicas: 2
  selector: ...
  template: ...
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: ...
spec:
  selector: ...
  ports: ...

Deployment

Service
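With the declarative approach, these files are handed to the cluster with kubectl apply; a short sketch (file and folder names are illustrative):

# Create or update the resources defined in a file
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml

# Apply every manifest in a folder
kubectl apply -f ./manifests/

# Remove the resources defined in a file
kubectl delete -f nginx-deployment.yaml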

Each configuration file has 3 parts:

1) Metadata


2) Specification

Attributes of "spec" are specific to the kind


3) Status (automatically generated by k8s)

  • k8s updates the state continuously
  • Goal: desired state == actual state
  • etcd holds the current status of any k8s component

Template

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels: ...
spec:
  replicas: 2
  selector: ...
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080

  • Has its own "metadata" and "spec" section
  • Applies to the Pod
  • Blueprint for a Pod

Template

Connecting components

(Labels & Selectors & Ports)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080

Deployment

Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: ...
spec:
  selector:
    app: nginx
  ports: ...

Metadata contains labels

Specification contains the selector

Labels & Selectors

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080

Deployment

  • Labels are any key-value pairs attached to a component

Connecting Deployment to Pods

labels:
  app: nginx

  • Pods get the label through the template blueprint
  • This label is matched by the selector
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080

Deployment

Connecting Services to Deployments

Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: ...
spec:
  selector:
    app: nginx
  ports: ...

The connection is made through the Service's selector matching the Pods' labels

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector: 
    matchLabels:
      app: nginx
  template: 
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 8080

Deployment
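To check that the label/selector wiring works, these kubectl commands list the Pods carrying the label and the endpoints the Service has selected (names taken from the example above):

# Pods selected by the label defined in the Deployment template
kubectl get pods -l app=nginx -o wide

# The Service's endpoints should list the IPs of exactly those Pods
kubectl get endpoints nginx-service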

Ports in Service and Pod

Service

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels: ...
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080

[Diagram: another Service calls the Nginx Service on port 80; the Nginx Service forwards the request to the Pod on targetPort 8080]

targetPort: the port to forward requests to (the containerPort of the Deployment)

containerPort: the port on which the Pod is listening

External Service

Makes the service accessible from outside the cluster (type: NodePort)

nodePort: must be in the range 30000-32767

With the default ClusterIP type, no external IP address or port is opened

Kubernetes: External Service

apiVersion: v1
kind: Service
metadata:
  name: system-a-external-service
spec:
  selector:
    app: system-a
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30001

YAML File: External Service

The node's IP address together with the nodePort makes the service reachable from outside the cluster
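A quick way to check the NodePort service from outside the cluster (the node IP is a placeholder):

# Look up a node's IP address
kubectl get nodes -o wide

# The service is reachable on any node's IP at the nodePort
curl http://<node-ip>:30001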

Kubernetes Volume

Kubernetes can mount volumes into containers

A broad variety of Volume Types/drivers are supported

  • "Local" Volumes (i.e. on Nodes)
  • Cloud-Provider specific Volumes

Volume lifetime depends on the Pod lifetime

  • Volumes survive Container restarts (and removal)
  • Volumes are removed when Pods are destroyed
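A minimal sketch of a Pod-scoped volume using the emptyDir type (names are illustrative): the volume outlives container restarts but is deleted together with the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: cache-demo
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: cache          # mount the Pod-level volume into the container
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}           # created when the Pod starts, removed when the Pod is deleted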

Kubernetes Volumes

  • Supports many different Drivers and Types
  • Volumes are not necessarily persistent
  • Volumes survive Container restarts and removals

Docker Volumes

  • Basically no Driver / Type Support
  • Volumes persist until manually cleared
  • Volumes survive Container restarts and removals

Longhorn: highly available persistent storage for Kubernetes

# kubectl get storageclass longhorn -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: host-pvc
  namespace: training-devops
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 1Gi

Network File System (NFS)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: "/usr/local/path"
    server: <nfs-server-ip>
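To use such storage, a Pod claims it through a PersistentVolumeClaim and mounts the claim as a volume. A minimal sketch (the Pod name and mount path are illustrative), reusing the host-pvc claim defined above:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-demo
  namespace: training-devops
spec:
  containers:
  - name: app
    image: nginx:latest
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # where the persistent data appears in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: host-pvc                # the Longhorn-backed claim defined earlier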

Ingress

Kubernetes: Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  rules:
  - host: system-a.fotia.com.my
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: system-a-internal-service
            port:
              number: 8080

  • kind: Ingress
  • Routing rules:
    • Forward requests to the internal service
  • paths: the URL path

YAML File: Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
spec:
  rules:
  - host: system-a.fotia.com.my
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: system-a-internal-service
            port:
              number: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: system-a-internal-service
spec:
  selector:
    app: system-a
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

  • No nodePort in the internal Service
  • Instead of LoadBalancer, the default type: ClusterIP

Ingress and Internal Service Configuration

Configure Ingress in Kubernetes Cluster

  • We need an Ingress Controller to implement the Ingress rules
  • Ingress Controller: evaluates and processes Ingress rules

What is Ingress Controller?

Ingress Controller behind Proxy/LB

No server in the Kubernetes cluster is directly accessible from outside

  • Good security practice
  • Separate server
  • Public IP address and open ports
  • Entrypoint to cluster
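One common choice of Ingress Controller is ingress-nginx; a minimal install sketch using Helm (assuming Helm is available; on MicroK8s the built-in ingress addon is an alternative, shown as a comment):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

# Check that the controller Pod and Service are running
kubectl get pods,svc -n ingress-nginx

# MicroK8s alternative:
# microk8s enable ingress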

Multiple paths for same host

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: system-a.fotia.com.my
    http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: dashboard-service
            port:
              number: 8080
      - path: /cart
        pathType: Prefix
        backend:
          service:
            name: cart-service
            port:
              number: 3000

Multiple sub-domains or domains

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: system-a-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: dashboard.system-a.com.my
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: dashboard-service
            port:
              number: 8080
  - host: cart.system-a.com.my
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: cart-service
            port:
              number: 3000

Configure TLS Certificate - https

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
  - hosts:
    - system-a.fotia.com.my
    secretName: system-a-secret-tls
  rules:
  - host: system-a.fotia.com.my
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: system-a-internal-service
            port:
              number: 8080
---
apiVersion: v1
kind: Secret
metadata:
  name: system-a-secret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls

  • Data keys need to be "tls.crt" and "tls.key"
  • Values are the file contents, not file paths/locations
  • The Secret component must be in the same namespace as the Ingress component
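The TLS Secret is usually created from existing certificate and key files rather than hand-writing the base64 values; a short sketch (the file paths are illustrative):

kubectl create secret tls system-a-secret-tls --cert=path/to/tls.crt --key=path/to/tls.key --namespace default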

Auto-Scaling

HorizontalPodAutoscaler

apiVersion: apps/v1
kind: Deployment
metadata:
  name: afathi-second-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: afathi-second-app
  template:
    metadata:
      labels:
        app: afathi-second-app
        tier: frontend
    spec:
      containers:
      - name: second-nodejs
        image: fathich3k/kube-sample-app
        resources:
          limits:
            memory: 500Mi # 500 MiB
            cpu: 100m # 0.1 CPU
          requests:
            memory: 250Mi
            cpu: 80m
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: afathi-frontend-service
spec:
  selector:
    app: afathi-second-app
  ports:
    - port: 80 # port the Service exposes
      targetPort: 8080 # port the container listens on
  type: NodePort
---
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: second-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: afathi-second-app
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

Requests

Requests specify the minimum amount of CPU and memory resources that Kubernetes will guarantee for the container. If a node has enough resources to satisfy the requests, the container is scheduled on that node. Requests are used for scheduling decisions.

  • memory: The amount of RAM the container is guaranteed to have.
  • cpu: The amount of CPU the container is guaranteed to have.

 

Limits

Limits specify the maximum amount of CPU and memory resources that the container can use. If the container tries to exceed these limits, it will be throttled (for CPU) or terminated (for memory).

  • memory: The maximum amount of RAM the container is allowed to use.
  • cpu: The maximum amount of CPU the container is allowed to use.
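To observe how requests, limits and the HorizontalPodAutoscaler behave at runtime (kubectl top requires the metrics-server; the Pod name is a placeholder):

# Current CPU/memory usage per Pod (requires metrics-server)
kubectl top pods

# Current vs target utilisation and replica count of the autoscaler
kubectl get hpa second-app-hpa
kubectl describe hpa second-app-hpa

# Events such as OOMKilled containers or scheduling problems
kubectl describe pod <pod-name>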

Demo App

Pod Internal Communication

Pod to Pod Communication
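A quick illustration of Pod-to-Pod communication through a Service: start a temporary Pod and call the nginx Service by its DNS name (the Service name is taken from the earlier example; the busybox test Pod is an assumption):

# Run a throwaway Pod and call the Service on its Service port
kubectl run tmp-shell --rm -it --image=busybox:1.36 --restart=Never -- wget -qO- http://nginx-service:80

# Fully qualified form: <service-name>.<namespace>.svc.cluster.local
# e.g. http://nginx-service.default.svc.cluster.local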

Your feedback matters

There are no secrets to success. It is the result of preparation, hard work, and learning from failure. - Colin Powell

THANK YOU