Kubernetes for Java Developers

Kubernetes

  • Kubernetes is a container orchestration tool.
  • Container orchestration automates the deployment, management, scaling, and networking of containers.
  • Container orchestration can be used in any environment where you use containers. It can help you to deploy the same application across different environments without needing to redesign it.
  • Containers give your microservice-based apps an ideal application deployment unit and self-contained execution environment.
  • Kubernetes is Greek for helmsman or ship's pilot. It is also referred to as K8s.
  • The Cloud Native Computing Foundation (CNCF) oversees Kubernetes development and licensing.

Features of an orchestration system

  • Autoscaling
  • Service  discovery
  • Load balancing
  • Self-healing
  • Zero-downtime deployment
  • Cloud neutral: a standardized platform on any infrastructure
  • Automated updates and rollbacks

Understanding pods in Kubernetes

  • A Pod is the smallest deployable unit in Kubernetes.
  • You cannot run a container in Kubernetes without a Pod; containers live inside Pods.

  • A Pod is a collection of one or more containers that run together on a host.

[Diagram: containers Cont 1–Cont 4 grouped into Pod 1 and Pod 2, running on Node 1.]

Kubernetes Cluster

  • All the machines inside a Kubernetes cluster are referred to as nodes.
  • Nodes are of 2 types: master nodes and worker nodes.
  • Master nodes manage all worker nodes.
  • Components of a master node
    • API server: we interact with the master node through the API server, which runs on the master node, using the kubectl command.
    • Scheduler: the scheduler launches pods on the worker nodes.
    • Controller manager: the controller manager runs in the background and makes sure that the cluster is in the desired state.
  • The API server maintains the current state of the cluster in etcd.
  • Components of a worker node
    • Kubelet: the API server on the master node interacts with a worker node via the kubelet to perform tasks on that node. We never interact with the kubelet directly.
    • Proxy: when applications from the outside world want to communicate with the containers inside a worker node, they have to go through this network proxy. It also helps in performing load balancing.
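
Once a cluster is up you can see these components for yourself; a quick sketch using standard kubectl commands (the exact control-plane pod names vary by distribution):

# List the nodes in the cluster and their roles
kubectl get nodes -o wide

# The control-plane components (API server, scheduler, controller manager, etcd)
# typically run as pods in the kube-system namespace
kubectl get pods -n kube-system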

Steps to install minikube on ubuntu

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube_latest_amd64.deb

sudo dpkg -i minikube_latest_amd64.deb


minikube start

/* If this command asks you to add your user to the docker group, run
   sudo usermod -aG docker $USER && newgrp docker
   and then re-run minikube start. */

 

sudo apt-get install -y apt-transport-https ca-certificates curl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update

sudo apt-get install -y kubectl

kubectl get pods -A

Steps to install minikube and kubectl on Mac

brew install minikube
brew install kubectl 

Test your setup

minikube start
kubectl get po -A
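
A couple of optional checks can confirm the setup before moving on (standard minikube and kubectl subcommands):

minikube status           # host, kubelet and apiserver should all report Running
kubectl version --client  # confirms the kubectl binary is correctly installed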

Microservice code 

package com.kuberneteswithspringboot.demo.kubenetesspringboot;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

@SpringBootApplication
@RestController
public class KubernetesSpringBootApplication {

    @Value("${app.version}")
    String appVersion;

    @GetMapping("/")
    public String index() throws IOException {
        String returnString = "Spring Boot App " + appVersion;
        // Read the container's cgroup entry so the response shows which container served it.
        // The pipe needs a shell, so the command is passed to /bin/sh -c.
        String cmd = "cat /proc/self/cgroup | grep name";
        Runtime run = Runtime.getRuntime();
        Process pr = run.exec(new String[]{"/bin/sh", "-c", cmd});
        try {
            pr.waitFor();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        BufferedReader buf = new BufferedReader(new InputStreamReader(pr.getInputStream()));
        String line;
        if ((line = buf.readLine()) != null) {
            returnString += line;
        }
        return returnString;
    }

    public static void main(String[] args) {
        SpringApplication.run(KubernetesSpringBootApplication.class, args);
    }

}
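
The controller injects app.version from the Spring environment, so the application needs that property defined. A minimal application.properties sketch; the value v1 is an assumption chosen to match the image tag used below:

app.version=v1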

Creating your first pod

kubectl run firstpod --image=pulkitpushkarna/kubernetes-with-spring-boot:v1

List pods

kubectl get pods

Describe pod

kubectl describe pod firstpod

Going inside the pod

kubectl exec -it firstpod -- /bin/bash

Delete the pod

kubectl delete pod firstpod

Create Pod using yaml file

kind: Pod
apiVersion: v1
metadata:
  name: firstpod
  labels:
    app: fp
    release: stable
spec:
  containers:
    - name: webapp
      image: pulkitpushkarna/kubernetes-with-spring-boot:v1
        
firstpod.yaml

kubectl create -f firstpod.yaml

Deleting the pods which were created via yaml file

kubectl delete -f firstpod.yaml 

POD lifecycle

  • Pending: as soon as we trigger the create command, the pod enters the Pending state; it has been accepted but its containers are not all running yet.
  • Running: the pod has been scheduled to a worker node, all of its containers have been created, and at least one container is running.
  • Succeeded: all containers have finished executing without any error.
  • Failed: one or more containers terminated with an error.
  • Unknown: the state of the pod cannot be determined by the master node.
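
To watch a pod move through these phases, you can query the phase field directly; a small sketch using standard kubectl options:

# Print only the current phase of the pod
kubectl get pod firstpod -o jsonpath='{.status.phase}'

# Or keep watching the pod status until you press Ctrl+C
kubectl get pod firstpod --watch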

Display all labels

kubectl get all --show-labels

Filter pods for matching labels

kubectl get all --selector='app=fp'

Filter pods for matching labels (negation)

kubectl get all --selector='app!=fp'

Using the in operator for filtering

kubectl get all --selector='app in (fp)'

Annotations

  • Annotations are not used to identify or query resources.
  • They are just arbitrary data provided in the form of key-value pairs.
  • This data can be used by other tools, such as Grafana, as required.

kind: Pod
apiVersion: v1
metadata:
  name: firstpod
  labels:
    app: fp
    release: stable
  annotations:
    dockerHubUrl: "https://github.com/pulkitpushkarna/kubernetes-spring-boot.git"
    logDir: "etc/logs"
spec:
  containers:
    - name: webapp
      image: pulkitpushkarna/kubernetes-with-spring-boot:v1
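
After recreating the pod from this file, the annotations can be read back with kubectl; a quick sketch:

# Show the annotations stored on the pod
kubectl get pod firstpod -o jsonpath='{.metadata.annotations}'

# They also appear in the Annotations field of the describe output
kubectl describe pod firstpod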
        

Namespace

  • When there are multiple applications deployed on a Kubernetes cluster, we have to make sure that no single application uses up all the resources of the cluster, such as CPU and storage.
  • We also need to make sure that one team does not step on the resources of another team.
  • Each namespace can be allocated a resource quota (a sketch follows the commands below).
  • Get all namespaces

kubectl get ns

  • Create a namespace

kubectl create ns firstns

  • Delete the existing firstpod and create a new pod within the firstns namespace

kubectl create -f firstpod.yaml --namespace firstns
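
Quotas themselves are ordinary Kubernetes objects; a minimal ResourceQuota sketch for the firstns namespace (the name and the limits are illustrative assumptions, not part of the course project):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: firstns-quota
  namespace: firstns
spec:
  hard:
    pods: "10"            # at most 10 pods in this namespace
    requests.cpu: "2"     # total CPU requested across all pods
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi

It would be created like any other object, with kubectl create -f.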

Now if you try to get pods in the default namespace, it will report that no resources were found.

kubectl get pods

Now check for the pods within the firstns namespace

kubectl get pods --namespace firstns

View the default namespace

kubectl config view

Change the default namespace

kubectl config set-context --current --namespace firstns

Now if you get pods, you will see the pods within the firstns namespace.

Handy commands 

  • Get all resources

kubectl get all

  • Dry run

kubectl create -f firstpod.yaml --dry-run=client

  • Explain

kubectl explain pods
kubectl explain deployments

  • Get YAML for a running pod

kubectl get pod/firstpod -o yaml

Kubernetes Deployment Object

  • A deployment object helps us manage multiple replicas of our pod.
  • We create a deployment for each microservice.
  • The recommended way of creating a deployment is by using a yaml file, such as webserver.yaml below.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v1
          ports:
            - containerPort: 8080

webserver.yaml

Create deployment object

kubectl create -f webserver.yaml 

Get all the objects

 kubectl get all

Get deployment objects

kubectl get deployment

Get pod objects

kubectl get pod

Kubernetes Service Object

  • A service logically groups the set of pods that need to access each other so that they can communicate with each other seamlessly.

[Diagram: a client, a ClusterIP Service and a NodePort Service (port range 30000–32767) in front of Pod A, Pod B and Pod C.]

  • ClusterIP Service: it generates a virtual IP address, and all communication between pods in the cluster happens through this virtual IP. A ClusterIP cannot be accessed from the outside world (a ClusterIP-only sketch follows the NodePort example below).
  • NodePort Service: a NodePort service is used to communicate with the pods from outside the cluster. The port number used for communication from the outside world must be between 30000 and 32767.

apiVersion: v1
kind: Service
metadata:
  name: webserver-service
spec:
  type: NodePort
  selector:
    app: spring-boot-app
  ports:
    - nodePort: 30123
      port: 8080
      targetPort: 8080


webserver-service.yaml

Create the service using the command below

kubectl create -f webserver-service.yaml

Get services

kubectl get services
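
The NodePort service above also receives a cluster-internal virtual IP. If a service only needs to be reachable from inside the cluster, the nodePort can be dropped; a minimal ClusterIP sketch (the name webserver-internal is an illustrative assumption):

apiVersion: v1
kind: Service
metadata:
  name: webserver-internal
spec:
  type: ClusterIP          # the default type; reachable only from inside the cluster
  selector:
    app: spring-boot-app
  ports:
    - port: 8080
      targetPort: 8080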

Start tunnel for service

minikube service webserver-service -n firstns

Take the URL that opens in your browser and hit it from your console with curl; repeated requests show the load balancing across the pods.

Update strategy for updating the image

  • Recreate: if we use this strategy, Kubernetes destroys all the pods and then recreates them. It does not promise zero-downtime deployment.
  • RollingUpdate: everything keeps running while the update happens; nothing is stopped. Older versions of the application keep running until the update completes. The following are the 2 important fields for a rolling update; the deployment spec below uses both.
    • maxUnavailable: tells Kubernetes how many pods may be unavailable while the update is executed.
    • maxSurge: tells Kubernetes how many extra pods may be created above the desired replica count while the update runs.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080

In the spec section you can see the strategy for updating the deployment: with replicas: 2, maxSurge: 1 and maxUnavailable: 1, Kubernetes may run up to 3 pods during the update and must keep at least 1 of the 2 desired pods available at all times.

Checkout to branch v2 of the application

git checkout v2

Update the deployment with v2

kubectl replace -f webserver.yaml

Now run the following command multiple times to observe the different stages of the pods

kubectl get pods

Get the description of the deployment

kubectl describe deployment mywebserver -n firstns

Rollback to a previous revision

View revision history

kubectl rollout history deployment

View details of a revision

kubectl rollout history deployment mywebserver --revision=2

Roll back to previous revision

kubectl rollout undo deployment mywebserver --to-revision=1

Run the following command multiple times to track the progress of the pods

kubectl get pods

Manually scale replicas

kubectl scale deployment mywebserver --replicas=10

Check how many replicas are ready

kubectl get deployments  

Check the state of the pods

kubectl get pods

Autoscaling with horizontal pod autoscaler

  • Autoscaling is a feature in which the cluster increases the number of pod replicas as the demand on the service increases and decreases them as the demand drops.
  • The Horizontal Pod Autoscaler scales pods inside the cluster; scaling the nodes themselves (cluster autoscaling) depends on the infrastructure and is provided by the major managed offerings such as GKE and EKS.

kubectl autoscale deployment mywebserver --cpu-percent=50 --min=1 --max=10 

kubectl get hpa
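
The HPA needs CPU metrics to act on; on minikube these come from the metrics-server addon. If the TARGETS column shows unknown values, enabling the addon usually fixes it (a hedged tip, not part of the original steps):

minikube addons enable metrics-server
# after a minute or so, the TARGETS column of kubectl get hpa should show a real CPU percentage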

Exercise 1

  • Clone the project https://github.com/pulkitpushkarna/kubernetes-spring-boot
  • Check out the v1 branch and create the deployment and service for the webapp as shown in the slides.
  • Get the number of pods created.
  • Try to bring down one pod and observe the behaviour.
  • Observe the load balancing across the pods.
  • Check out branch v2.
  • Update the deployment with v2.
  • Roll back to the previous revision.
  • Perform manual scaling on the cluster.

Volumes in Kubernetes

  • Kubernetes supports many types of volumes.
  • A Pod can use any number of volume types simultaneously.
  • Ephemeral volume types have the lifetime of a pod, whereas persistent volumes exist beyond the lifetime of a pod.
  • When a pod ceases to exist, Kubernetes destroys ephemeral volumes; however, Kubernetes does not destroy persistent volumes.

Types of volume:

  • emptyDir:
    • The lifecycle of an emptyDir volume is the same as the pod it belongs to. When a pod is deleted, the data in its emptyDir is also deleted.
    • The emptyDir volume is created when the pod is allocated to a node, and Kubernetes automatically allocates a directory on the node, so there is no need to specify a corresponding directory on the host node.
  • hostPath:
    • A hostPath volume mounts a directory or file from the host into the pod, so that the container can use the host's file system for storage.
    • The disadvantage is that in Kubernetes, pods are dynamically scheduled across nodes.
    • When a pod stores a file locally through hostPath on its current node and is later scheduled onto another node, the file stored on the previous node cannot be used.
  • ConfigMap:
    • A ConfigMap provides a way to inject configuration data into pods.
    • The data stored in a ConfigMap can be referenced in a volume of type configMap and then consumed by containerized applications running in a pod.
  • Secret:
    • A secret volume is used to pass sensitive information, such as passwords, to pods.
    • Secrets are backed by tmpfs (a RAM-based temporary filesystem) instead of being written to disk on the node.
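
ConfigMaps and Secrets can also be created directly from literals or files instead of yaml, which is handy while experimenting; hedged examples using the standard kubectl subcommands (the object names match the ones created from yaml later in this deck):

kubectl create configmap demo-configmap --from-literal=key=kasdkjajksdkj
kubectl create secret generic demo-secret --from-literal=username=SecretUsername --from-literal=password=SecretPassword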

hostPath Mounting

  • Delete any deployments which are already there.
  • Make changes in webserver.yaml for hostPath mounting as shown in the next slide.
  • Create a new deployment using the webserver.yaml file.

kubectl create -f webserver.yaml

  • Get the pods, connect to one of them, go to the /data directory and create a file in it.

pulkit-pushkarna kubenetes-spring-boot % kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
mywebserver-847df9fdcb-8q4cb   1/1     Running   0          67s
mywebserver-847df9fdcb-rjd4g   1/1     Running   0          67s
pulkit-pushkarna kubenetes-spring-boot % kubectl exec -it mywebserver-847df9fdcb-8q4cb -- /bin/bash
root@mywebserver-847df9fdcb-8q4cb:/usr/local/bin# cd /data/
root@mywebserver-847df9fdcb-8q4cb:/data# touch hello.html

  • Now ssh onto the minikube node and check its /data directory.

pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % minikube ssh
docker@minikube:~$ cd /data
docker@minikube:/data$ ls
hello.html
docker@minikube:/data$

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory



Mounting Config Map

  • Delete the existing deployment.
  • Create a file config-map.yaml as shown below.
  • Make changes in the webserver.yaml file as shown on the next page.

apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-configmap
data:
  initdb.sql:
    create table person();
  key:
    kasdkjajksdkj

config-map.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
            - name: config-map-vol
              mountPath: /etc/config
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory
        - name: config-map-vol
          configMap:
            name: demo-configmap



  • Create the config map using the yaml file

kubectl create -f config-map.yaml

  • Create the deployment

kubectl create -f webserver.yaml

  • Get the created config map

kubectl get configMap

  • Connect to the pod and check for files in the /etc/config folder

pulkit-pushkarna kubenetes-spring-boot % kubectl get pod
NAME                           READY   STATUS    RESTARTS   AGE
mywebserver-79df4d5d54-qvglq   1/1     Running   0          10s
mywebserver-79df4d5d54-tgvgc   1/1     Running   0          10s
pulkit-pushkarna kubenetes-spring-boot % kubectl exec -it mywebserver-79df4d5d54-qvglq -- /bin/bash
root@mywebserver-79df4d5d54-qvglq:/usr/local/bin# cd /etc/config/
root@mywebserver-79df4d5d54-qvglq:/etc/config# ls
initdb.sql  key
root@mywebserver-79df4d5d54-qvglq:/etc/config# cat initdb.sql 
create table person();
root@mywebserver-79df4d5d54-qvglq:/etc/config# cat key
kasdkjajksdkj
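
Mounting the ConfigMap as a volume is one way to consume it; individual keys can also be injected as environment variables. A hedged sketch of just the container section (CONFIG_KEY is a hypothetical variable name, and this is not a change the course project actually makes):

      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          env:
            - name: CONFIG_KEY            # hypothetical environment variable
              valueFrom:
                configMapKeyRef:
                  name: demo-configmap
                  key: key                # the 'key' entry from the ConfigMap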

Mounting Secret

  • Delete the existing deployment objects.
  • Create a secret yaml file, secrets.yaml, as shown below.
  • The username and password values are base64 encoded (the encoding commands follow the yaml).
  • Mount the secret volume using the deployment file.

apiVersion: v1
kind: Secret
metadata:
  name: demo-secret
type: Opaque
data:
  username:
    U2VjcmV0VXNlcm5hbWUK
  password:
    U2VjcmV0UGFzc3dvcmQK

secrets.yaml

pulkit-pushkarna kubenetes-spring-boot % echo "SecretUsername" | base64
U2VjcmV0VXNlcm5hbWUK
pulkit-pushkarna kubenetes-spring-boot % echo "SecretPassword" | base64
U2VjcmV0UGFzc3dvcmQK

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
            - name: config-map-vol
              mountPath: /etc/config
            - name: my-secret
              mountPath: /etc/mysecrets
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory
        - name: config-map-vol
          configMap:
            name: demo-configmap
        - name: my-secret
          secret:
            secretName: demo-secret



(The additions relative to the previous deployment are the my-secret volumeMount at /etc/mysecrets and the my-secret volume referencing secretName: demo-secret.)

  • Create the secret object

kubectl create -f secrets.yaml

  • Create the deployment object

kubectl create -f webserver.yaml

  • Connect to the pod and check the secret directory

pulkit-pushkarna kubenetes-spring-boot % kubectl get pod 
NAME                           READY   STATUS    RESTARTS   AGE
mywebserver-77b4c6d9b5-7p6jv   1/1     Running   0          17s
mywebserver-77b4c6d9b5-vkdz2   1/1     Running   0          17s
pulkit-pushkarna kubenetes-spring-boot % kubectl exec -it mywebserver-77b4c6d9b5-7p6jv -- /bin/bash
root@mywebserver-77b4c6d9b5-7p6jv:/usr/local/bin# cd /etc
root@mywebserver-77b4c6d9b5-7p6jv:/etc# cd mysecrets/
root@mywebserver-77b4c6d9b5-7p6jv:/etc/mysecrets# ls
password  username
root@mywebserver-77b4c6d9b5-7p6jv:/etc/mysecrets# cat password
SecretPassword
root@mywebserver-77b4c6d9b5-7p6jv:/etc/mysecrets# cat username
SecretUsername
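
The values can also be inspected without entering the pod. Note that plain echo appends a trailing newline to the encoded values; echo -n avoids that. A short sketch with standard kubectl and shell commands:

# Decode a value straight from the Secret object
kubectl get secret demo-secret -o jsonpath='{.data.password}' | base64 --decode   # use -D on older macOS

# Encode without the trailing newline that plain echo adds
echo -n "SecretPassword" | base64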

emptyDir Mounting

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
            - name: config-map-vol
              mountPath: /etc/config
            - name: my-secret
              mountPath: /etc/mysecrets
            - name: cache-volume
              mountPath: /cache
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory
        - name: config-map-vol
          configMap:
            name: demo-configmap
        - name: my-secret
          secret:
            secretName: demo-secret
        - name: cache-volume
          emptyDir: {}


(The addition here is the cache-volume volumeMount at /cache and the matching emptyDir volume.)

Persistent volumes

  • If we want storage that lives beyond the lifecycle of a container and pod in the cluster, then we use a Persistent Volume.
  • A Persistent Volume is storage provisioned at the cluster level.
  • A pod that wants to use a certain amount of storage from a Persistent Volume can request it using a Persistent Volume Claim.
  • There are 3 steps to use a Persistent Volume:
    • Create the Persistent Volume
    • Create the Persistent Volume Claim
    • Mount the volume claim in the pod spec

Access Modes

  • ReadWriteOnce: only one node in the cluster can mount the persistent volume for reading and writing.
  • ReadOnlyMany: any number of nodes in the cluster can mount the persistent volume read-only.
  • ReadWriteMany: any number of nodes in the cluster can mount the persistent volume for reading and writing.

Create Persistent Volume

  • Create a yaml file for the Persistent Volume, persistent-volume.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-persistent-volume
spec:
  capacity:
    storage: 128M
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /my-peristent-volume

  • Create the Persistent Volume from the yaml file

kubectl create -f persistent-volume.yaml

  • Check for the created persistent volume

kubectl get pv

Create Persistent Volume Claim

  • Create a yaml file for the Persistent Volume Claim, persistent-volume-claim.yaml:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  resources:
    requests:
      storage: 64M
  accessModes:
    - ReadWriteOnce

  • Create the persistent volume claim using the yaml file

kubectl create -f persistent-volume-claim.yaml

  • Check for the persistent volume claim

kubectl get pvc
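
Once the claim is created it should bind to the persistent volume; a quick check with standard kubectl (the STATUS column should read Bound for both objects):

kubectl get pv,pvc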

Mounting the Persistent Volume

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v2
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
            - name: config-map-vol
              mountPath: /etc/config
            - name: my-secret
              mountPath: /etc/mysecrets
            - name: cache-volume
              mountPath: /cache
            - name: demo-pvc
              mountPath: /clusterVol
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory
        - name: config-map-vol
          configMap:
            name: demo-configmap
        - name: my-secret
          secret:
            secretName: demo-secret
        - name: cache-volume
          emptyDir: {}
        - name: demo-pvc
          persistentVolumeClaim:
            claimName: demo-pvc


(The addition here is the demo-pvc volumeMount at /clusterVol and the matching persistentVolumeClaim volume referencing claimName: demo-pvc.)

Delete the existing deployment and create a new deployment using the webserver.yaml file

kubectl delete -f webserver.yaml
kubectl create -f webserver.yaml

  • Connect to one of the pods, go to the /clusterVol folder and create a file in it.

pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
mywebserver-86d5f67d78-dkgrg   1/1     Running   0          8s
mywebserver-86d5f67d78-zntw6   1/1     Running   0          8s
pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % kubectl exec -it mywebserver-86d5f67d78-dkgrg -- /bin/bash
root@mywebserver-86d5f67d78-dkgrg:/usr/local/bin# cd /clusterVol/
root@mywebserver-86d5f67d78-dkgrg:/clusterVol# touch hello.html

  • Now exit from the pod and delete all resources from the cluster.

kubectl delete all --all

  • Now create a new deployment from the webserver.yaml file. You will observe that the file hello.html persists in /clusterVol inside the pod.

Exercise 2

  • Perform the following volume mountings as shown in the slides:
    • emptyDir
    • hostPath
    • secret
    • configMap
    • persistent volume

Creating a cluster of a Spring Boot app with a MySQL database

  • In the branch v2-with-mysql-and-config-map-application-properties we have configured our Spring Boot app to connect to MySQL.
  • In order to run this app in Kubernetes we need to do the following things.
  • We need to create a MySQL deployment file, docker-mysql-deployment.yaml.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: docker-mysql
  labels:
    app: docker-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: docker-mysql
  template:
    metadata:
      labels:
        app: docker-mysql
    spec:
      containers:
        - name: docker-mysql
          image: mysql
          env:
            - name: MYSQL_DATABASE
              value: mydb
            - name: MYSQL_ROOT_PASSWORD
              value: test1234
            - name: MYSQL_ROOT_HOST
              value: '%'

docker-mysql-deployment.yaml
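
The root password is written inline here to keep the example small; in a real setup it would normally come from a Secret, as covered earlier. A hedged sketch of just the env entry (mysql-secret is a hypothetical Secret, and this is not a change the course project makes):

          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mysql-secret      # hypothetical Secret holding the password
                  key: root-password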

We need a service to expose the MySQL port: docker-mysql-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: docker-mysql
  labels:
    app: docker-mysql
spec:
  selector:
    app: docker-mysql
  type: NodePort
  ports:
    - port: 3306
      targetPort: 3306
      nodePort: 30287

For the Spring Boot app image deployed inside a pod on Kubernetes, we need to provide a different set of application properties: podApplicationConfigFile/application.properties

app.version=externalPropertiesFile

spring.datasource.url=jdbc:mysql://docker-mysql:3306/mydb
spring.datasource.username=root
spring.datasource.password=test1234
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.hibernate.ddl-auto=create
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
server.port=9091

You must have observed that in the datasource URL we have specified docker-mysql, which is the name of the MySQL Service; Kubernetes resolves that name to the service through cluster DNS.

We will create a ConfigMap for the properties file

kubectl create configmap spring-app-config --from-file=podApplicationConfigFile/application.properties

Check the created ConfigMap

 kubectl get configMap 

Check the YAML generated by the kubectl command

kubectl get configMap spring-app-config -o yaml

Now inside the webserver.yaml file we will mount spring-app-config ConfigMap

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mywebserver
  labels:
    app: spring-boot-app
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  selector:
    matchLabels:
      app: spring-boot-app
  template:
    metadata:
      labels:
        app: spring-boot-app
    spec:
      containers:
        - name: my-spring-boot-app
          image: pulkitpushkarna/kubernetes-with-spring-boot:v4
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demovol
              mountPath: /data
            - name: config-map-vol
              mountPath: /etc/config
            - name: my-secret
              mountPath: /etc/mysecrets
            - name: cache-volume
              mountPath: /cache
            - name: demo-pvc
              mountPath: /clusterVol
            - name: application-config
              mountPath: /appConfigFile
      volumes:
        - name: demovol
          hostPath:
            path: /data
            type: Directory
        - name: config-map-vol
          configMap:
            name: demo-configmap
        - name: my-secret
          secret:
            secretName: demo-secret
        - name: cache-volume
          emptyDir: {}
        - name: demo-pvc
          persistentVolumeClaim:
            claimName: demo-pvc
        - name: application-config
          configMap:
            name: spring-app-config


Create MySQL deployment

 kubectl create -f docker-mysql-deployment.yaml

Create MySQL service

kubectl create -f docker-mysql-service.yaml

Create deployment for web app

kubectl create -f webserver.yaml 

Get pods

kubectl get pod

Check the logs of one of the pods from the webapp deployment

kubectl logs mywebserver-7dffbdcfbb-6k95x

If you go inside the pod, you will see that application.properties is mounted in the /appConfigFile folder.

pulkit-pushkarna kubenetes-spring-boot % kubectl exec -it mywebserver-7dffbdcfbb-6k95x -- /bin/bash
root@mywebserver-7dffbdcfbb-6k95x:/usr/local/bin# cd /appConfigFile/
root@mywebserver-7dffbdcfbb-6k95x:/appConfigFile# ls
application.properties
root@mywebserver-7dffbdcfbb-6k95x:/appConfigFile#
root@mywebserver-7dffbdcfbb-6k95x:/appConfigFile# cat application.properties 
app.version=externalPropertiesFile

spring.datasource.url=jdbc:mysql://docker-mysql:3306/mydb
spring.datasource.username=root
spring.datasource.password=test1234
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.jpa.hibernate.ddl-auto=create
spring.jpa.database-platform=org.hibernate.dialect.MySQL5Dialect
server.port=9091root@mywebserver-7dffbdcfbb-6k95x:/appConfigFile#

Perform a curl command inside the container

root@mywebserver-7dffbdcfbb-6k95x:/appConfigFile# curl localhost:9091
Spring Boot App--externalPropertiesFile14:name=systemd:/docker/e5008bbd7f2b3ce3aabdbc921fb444258c0afdfaa62b0954c974ff6b8b31cc9b/kubepods/besteffort/pode7809864-c537-47a7-bfe9-fd08c33c39cf/e83dc09b0d71cce137de038ce7c498f27ac3c2e96e227b823cdf0abe0d7e3a1c
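
The response contains externalPropertiesFile and the app listens on 9091, which shows the mounted file is the one being read. If you build your own image, Spring Boot must be told to read the mounted location; one common way (a hedged example, the v4 image used here may achieve this differently) is to pass it at startup:

java -jar app.jar --spring.config.additional-location=file:/appConfigFile/application.properties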

Running a Kubernetes cluster on EKS

  • In order to get started with EKS, the first thing we need to do is install the AWS CLI.
  • After installing it, configure your AWS credentials in the AWS CLI using the aws configure command (see the verification sketch after this list).
  • After installing the AWS CLI we need to install eksctl.
  • The following are the steps to install eksctl on Mac:
    • brew tap weaveworks/tap
    • brew install weaveworks/tap/eksctl
    • eksctl version
  • Stop the minikube cluster which is running on your system.
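
Before creating the cluster it is worth confirming that the tools and credentials are wired up; a short sketch using standard AWS CLI and eksctl commands:

aws configure                 # enter the access key, secret key, default region and output format
aws sts get-caller-identity   # verifies that the configured credentials actually work
eksctl version                # confirms eksctl is installed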

Exercise 3

  • As shown in slides create a cluster with webserver and mysql deployments.
  • Connect the webserver with mysql.

Create kubernetes Cluster

eksctl create cluster --name my-kube-cluster --node-type t2.micro --nodes 2 --nodes-min 2 --nodes-max 3

Check out branch v1 and create the deployment from webserver.yaml

kubectl create -f webserver.yaml

You will observe that 2 EC2 instances are created in AWS. You can also list the pods and check the deployment object.

kubectl get pods
kubectl get deployment

In order to expose our app to the outside world, we need to create a load balancer by executing the following command.

kubectl expose deploy mywebserver --type=LoadBalancer --port=8080
pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % kubectl get service
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)          AGE
kubernetes    ClusterIP      10.100.0.1       <none>                                                                    443/TCP          18m
mywebserver   LoadBalancer   10.100.158.212   a029f7f1279bb42b886f03428e5a3551-2144999605.us-west-2.elb.amazonaws.com   8080:30824/TCP   41s

The URL of the load balancer is shown in the EXTERNAL-IP column of the kubectl get service output above. Use that URL with :8080 appended to it; repeated requests show the load balancing between the 2 nodes.

pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % curl a029f7f1279bb42b886f03428e5a3551-2144999605.us-west-2.elb.amazonaws.com:8080
Spring Boot App v1--11:pids:/kubepods/besteffort/pod025ad0e8-93fd-4716-af56-ee002d27e515/a12d547cc0e521bfd52502eab3a036e0a8408d39607f5af9174adab144e76184
pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % curl a029f7f1279bb42b886f03428e5a3551-2144999605.us-west-2.elb.amazonaws.com:8080
Spring Boot App v1--11:pids:/kubepods/besteffort/pod025ad0e8-93fd-4716-af56-ee002d27e515/a12d547cc0e521bfd52502eab3a036e0a8408d39607f5af9174adab144e76184
pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % curl a029f7f1279bb42b886f03428e5a3551-2144999605.us-west-2.elb.amazonaws.com:8080
Spring Boot App v1--11:pids:/kubepods/besteffort/pod025ad0e8-93fd-4716-af56-ee002d27e515/a12d547cc0e521bfd52502eab3a036e0a8408d39607f5af9174adab144e76184
pulkitpushkarna@pulkit-pushkarna kubenetes-spring-boot % curl a029f7f1279bb42b886f03428e5a3551-2144999605.us-west-2.elb.amazonaws.com:8080
Spring Boot App v1--11:hugetlb:/kubepods/besteffort/poda2192d56-165c-47b0-b1dd-1ffd08cfb0df/e306da51cae5a644756d32a6bc72c37cfbf574b396f8655e04f53581fe2b5339

  • Terminate both the servers running in your cluster. Kubernetes will bring the nodes back.
  • Check out the v2 branch and update the webserver deployment by executing the following command:

kubectl replace -f webserver.yaml

  • Keep executing the get pods command to check the status of the pods.
  • Check the rollout history:

kubectl rollout history deployment

  • Roll back to the previous version:

kubectl rollout undo deployment mywebserver --to-revision=1

Mounting EBS for persistent storage

  • Check out the mounting-EBS-as-persistent-volume branch.
  • Create an EBS volume in an availability zone where your worker nodes run:

aws ec2 describe-availability-zones
aws ec2 create-volume --availability-zone=us-west-2a --size=10 --volume-type=gp2

  • Delete the existing persistent volume:

kubectl delete pv demo-persistent-volume

  • Create the new persistent volume from persistent-volume.yaml, which now references the EBS volume:

kubectl create -f persistent-volume.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo-persistent-volume
spec:
  capacity:
    storage: 128M
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    fsType: ext4
    volumeID: vol-040ab6d057f6f1624
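
The volumeID above must be the one returned by your create-volume call (vol-040ab6d057f6f1624 belongs to the author's account). A quick way to look up your own volume IDs with the standard AWS CLI:

aws ec2 describe-volumes --query 'Volumes[*].[VolumeId,AvailabilityZone,State]' --output table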

  • Delete and create the new PVC:

kubectl delete pvc demo-pvc
kubectl create -f persistent-volume-claim.yaml

  • Launch the webapp deployment:

kubectl create -f webserver.yaml

  • Once the pods are up, connect to one of the pods and check the /clusterVol directory.
  • Create a file in the directory.
  • Delete the deployment.
  • Create the deployment again.
  • After connecting to one of the pods you will observe that the file which was created before the deletion of the deployment still exists.

Cleaning up the resources

eksctl delete cluster --name my-kube-cluster

Also delete the EBS volume which you created for the persistent volume.
