Web Apps
Role 1: Ensure the code from the Developer reaches the client/customer/end user, whoever is accessing it.
Role 2: Ensure these environments are available all the time, with 99.99% uptime.
ls
touch
rm
cp
mv
1. Everything is a command.
2. Commands are what we run in Linux sessions.
3. From the command line you can find certain info:
   a. username
   b. hostname
   c. $ - normal user, # - root user (root is the administrator in Linux)
4. Commands will have options: --single-word (long form) and -s (single character)
1. The machines we use in today's world are not directly physical; they are virtual machines.

1. To list the files, you have ls.
2. The output of the ls command shows files and directories; directories can be identified by their blue color, and with the ls -l command as well.
3. You can combine options (-lA, -Al); however, this combination purely depends on the command. Not every Linux command is flexible with combining options, so my suggestion is to use them individually.
1. Linux has case-sensitive file systems. In Windows, if you create a file ABC.txt, you cannot also create abc.txt; in Linux you can create ABC.txt, abc.txt, and Abc.txt side by side.
2. Windows works on extensions like .txt, .mp3, .doc, but Linux does not need any extensions. In Windows, abc.txt denotes a file named abc with the extension .txt; in Linux, abc.txt is simply the whole filename. Extensions in Linux exist only for our understanding of what type of file it is (.py -> Python file, .sh -> shell file). However, we prefer to use extensions to make our life easy.
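For example, all three case variants can coexist on Linux:

```bash
touch ABC.txt abc.txt Abc.txt   # three distinct files on Linux
ls
# ABC.txt  Abc.txt  abc.txt
```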
pwd
cd
mkdir
rmdir
rm -r
cp -r
mv
cat
head
tail
grep
awk
vim
find
curl
tar
unzip
An AZ (Availability Zone) is essentially one data center.
Server
1. VMWare
2. XEN
3. KVM
Desktop
1. VMWare Workstation
2. Oracle VBox
3. Parallels
ps
kill
groupadd
useradd
usermod
visudo
sudo
su
yum install
yum remove
yum update
systemctl enable
systemctl start
systemctl stop
systemctl restart
chmod
chgrp
chown
ip a
netstat -lntp
telnet
Generally, in real work we ensure only the required ports are opened on the SG (Security Group). However, since we are dealing with a lab, we open all traffic to the server to avoid issues while learning.
google landing pages
wikipedia pages
news websites
blogging websites
Gmail
Amazon
Mobile APP
Mobile Browser
b2b
b2c
Apps that only run the business logic (the compute part) and do not store any data inside them.
Apps that save data to disk (HDD), like databases.
t -> instance family
3 -> instance generation (hardware refresh number)
micro -> instance size
This works only in internal/intranet networks.
This is optional
An A record maps a name to an IP address.
A CNAME record maps a name to another name.
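As an illustration (hypothetical names and a documentation-range IP), the two record types look like this in zone-file form:

```
app.example.com.    IN  A      203.0.113.10
www.example.com.    IN  CNAME  app.example.com.
```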
What we want to Automate?
Git Centralized Repo
GitHub
git clone URL
gitbash
URL
IntelliJ
Laptop
Code Modifications
git commit -m "Message"
Message in IntelliJ
A commit does not make a transaction to the central repo; it stays local until you push.
git push / IntelliJ Push Button
git add / Choose from IntelliJ
git clone http://<URL>
git pull
( GIT Commit Messages)
#!/bin/bash
# This is a sample script
Any line starting with a # character is ignored by the interpreter (treated as a comment); the shebang on the first line is the exception.
| Quotes | Behavior |
|---|---|
| Single Quotes | Does not consider any character as a special character |
| Double Quotes | Very few characters, like $, are considered special; the rest are normal characters |
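A quick demonstration:

```bash
echo 'Current user: $USER'   # single quotes: prints the literal text $USER
echo "Current user: $USER"   # double quotes: $USER expands to your username
```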
[Diagram: input flows from the keyboard or a file, output goes to the terminal/screen or a file, via the redirectors (>, <)]
| Redirector | Purpose |
|---|---|
| STDOUT (>) | Instead of displaying the output on the screen, store the output in a file. |
| STDIN (<) | Instead of taking the input from the keyboard, read it from a file. |
The append redirector (>>) adds output to the end of a file instead of overwriting it.
| Redirector | Purpose |
|---|---|
| STDOUT (1> or >) | Only output |
| STDERR (2>) | Only error |
| STDOUT & STDERR (&>) | Both output and error redirected to the same file |
| Redirector | Purpose |
|---|---|
| &>/dev/null | When we do not need the output or errors in a file for future reference, we nullify them with the help of the /dev/null file. |
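Putting the redirectors together, a quick sketch:

```bash
ls / > out.log              # STDOUT overwrites out.log
ls / >> out.log             # STDOUT appends to out.log
ls /nosuchdir 2> err.log    # STDERR only
ls / /nosuchdir &> all.log  # STDOUT and STDERR to the same file
wc -l < /etc/passwd         # STDIN from a file instead of the keyboard
ls / &>/dev/null            # discard output and errors entirely
```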
The echo command helps us print messages on the screen from our scripts.
While printing, we can enable some escape sequences for more options:
1. \e - to enable colors
2. \n - to print a new line
3. \t - to print a tab
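For example:

```bash
echo -e "\e[31mError:\e[0m something failed"   # \e[31m = red, \e[0m = reset color
echo -e "Line1\nLine2"                         # \n starts a new line
echo -e "Col1\tCol2"                           # \t inserts a tab
```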
VAR=DATA
By default, a variable is:
1. ReadWrite (alternative: ReadOnly)
2. Scalar (alternative: Arrays)
3. Local (alternative: Environment)
4. Text
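A minimal sketch of these variable flavors (names are illustrative):

```bash
NAME="DevOps"                             # read-write scalar, local to this shell
readonly PI=3.14                          # read-only; reassigning PI now fails
COURSES=("Linux" "Ansible" "Terraform")   # array variable
echo "${COURSES[1]}"                      # prints: Ansible
export ENV_NAME="dev"                     # environment variable, visible to child processes
```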
| Special Variable | Purpose (To get what) | Values from Example |
|---|---|---|
| $0 | Script Name | script1.sh |
| $1 | First Argument | 10 |
| $2 | Second Argument | abc |
| $* | All Arguments | 10 abc 124 |
| $@ | All Arguments | 10 abc 124 |
| $# | Number of Arguments | 3 |
Example: script1.sh 10 abc 124
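For example, a script1.sh like this prints those values when run as `script1.sh 10 abc 124`:

```bash
#!/bin/bash
echo "Script Name : $0"    # script1.sh
echo "First Arg   : $1"    # 10
echo "Second Arg  : $2"    # abc
echo "All Args    : $*"    # 10 abc 124
echo "Arg Count   : $#"    # 3
```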
Individual scripts for all components, wrapped with their own shell script.
Individual scripts for all components, wrapped with a Makefile.
Simple IF
If Else
Else If
Simple IF
If Else
Else If
if [ expression ]
then
  commands
fi

if [ expression ]
then
  commands
else
  commands
fi

if [ expression1 ]
then
  commands1
elif [ expression2 ]
then
  commands2
elif [ expression3 ]
then
  commands3
else
  commands4
fi
From the previous if syntaxes, you can observe that all the conditions depend on expressions. Let's categorize them into three.
String Comparison
Operators: = , ==, !=, -z
[ "abc" == "ABC" ] [ "abc" != "ABC" ] [ -z "$USER" ]
Examples
Number Comparison
Operators: -eq, -ne, -gt, -ge, -lt, -le
[ 1 -eq 2 ]
[ 2 -ne 3 ]
[ 2 -gt 3 ]
[ 2 -ge 3 ]
[ 2 -lt 3 ]
[ 2 -le 3 ]
Examples
File Comparison
Operators: -f, -e
Examples
[ -f file ]
[ ! -f file ]
[ -e file ]
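Putting the three expression categories together in one runnable sketch (the file name comes from the first argument):

```bash
#!/bin/bash
if [ "$USER" == "root" ]; then       # string comparison
  echo "Running as root"
fi
if [ $# -lt 1 ]; then                # number comparison
  echo "Usage: $0 <file>"
  exit 1
fi
if [ -f "$1" ]; then                 # file comparison
  echo "$1 exists"
else
  echo "$1 not found"
fi
```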
command1 && command2
command1 || command2
The && symbol is referred to as Logical AND; command2 will be executed only if command1 succeeds.
The || symbol is referred to as Logical OR; command2 will be executed only if command1 fails.
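For example:

```bash
mkdir -p /tmp/app && echo "directory ready"      # echo runs only if mkdir succeeded
grep -q root /etc/passwd || echo "root missing"  # echo runs only if grep failed
```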
The sed command works in two modes depending on the option you choose: without -i it only prints the result on the screen; with -i it edits the file in place.
Delete the Lines
sed -e '/root/ d' /tmp/passwd
sed -i -e '/root/ d' /tmp/passwd
sed -i -e '/root/ d' -e '/nologin/ d' /tmp/passwd
sed -i -e '1 d' /tmp/passwd
Substitute the Words
sed -e 's/root/ROOT/' /tmp/passwd
sed -i -e 's/root/ROOT/' /tmp/passwd
sed -i -e 's/root/ROOT/gi' /tmp/passwd
Add the new lines
sed -e '1 i Hello' /tmp/passwd
sed -i -e '1 i Hello' /tmp/passwd
sed -i -e '1 a Hello' /tmp/passwd
sed -i -e '1 c Hello' /tmp/passwd
sed -i -e '/shutdown/ c Hello' /tmp/passwd
case $var in
  pattern1)
    commands1
    ;;
  pattern2)
    commands2
    ;;
  *)
    commands
    ;;
esac
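A concrete sketch of this syntax:

```bash
#!/bin/bash
case $1 in
  start)
    echo "starting" ;;
  stop)
    echo "stopping" ;;
  *)
    echo "Usage: $0 {start|stop}" ;;
esac
```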
dev
qa
stage
uat
cit
pre-prod
nonprod
prod / live
perf
dr
sandbox
dev
qa
prod
Ansible offers both push and pull mechanisms.
Ansible v1
Ansible v2.0
Ansible v2.5
Ansible v2.10
Until 2.9, it was called a module.
Collections
Ansible (RedHat) 3 -> Ansible Core 2.10
Ansible (RedHat) 4 -> Ansible Core 2.11
Ansible (RedHat) 5 -> Ansible Core 2.12
Ansible (RedHat) 6 -> Ansible Core 2.13
| Version | Module Syntax |
|---|---|
| Ansible 2.9 | module: |
| >= Ansible 2.10 | collection.module: |
yum install ansible -y
pip3 install ansible
Key - Value -> Plain
Key - Multiple Values -> List
Key - Key-Value -> Map
<courseName>DevOps</courseName>
<trainerName>Raghu K</trainerName>
<timings>
  <timing>0600IST</timing>
  <timing>0730IST</timing>
</timings>
<topics>
  <aws>
    <topic>EC2</topic>
    <topic>S3</topic>
  </aws>
  <devops>
    <topic>Ansible</topic>
  </devops>
</topics>
<phoneNumbers>
  <personal>999</personal>
  <mobile>888</mobile>
</phoneNumbers>
{
  "courseName": "DevOps",
  "trainerName": "Raghu K",
  "timings": [
    "0600IST",
    "0730IST"
  ],
  "topics": {
    "aws": [
      "EC2",
      "S3"
    ],
    "devops": ["Ansible", "Shell Scripting"]
  },
  "phoneNumbers": { "personal": 999, "mobile": 888 }
}
courseName: "DevOps"
trainerName: "Raghu K"
timings:
  - 0600IST
  - 0730IST
topics:
  aws:
    - EC2
    - S3
  devops: ["Ansible", "Shell Scripting"]
phoneNumbers: { personal: 999, mobile: 888 }
- hosts: DATABASES
  tasks:
    - ansible.builtin.debug:
        msg: "Hello World"

- name: Play 2
  hosts: APPLICATION
  roles:
    - roleA
    - roleB
Variables are prioritized in the following order, from high to low; command-line extra vars (-e) have the highest priority.
$ ansible-playbook -i inventory -u centos -k 02-vars.yml -e URL=cli.example.com
Parameter Management
Secret Management
AWS Parameter Store
AWS Secrets Manager
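For example, reading values back with the AWS CLI (the parameter and secret names here are hypothetical):

```bash
aws ssm get-parameter --name /roboshop/dev/catalogue/db_url --with-decryption
aws secretsmanager get-secret-value --secret-id roboshop/dev/db-password
```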
Application Server (shipping, cart, catalogue ...)
systemd.service
config
ansible
aws-role
(Parameter Store)
Insights are visible
Insights are not visible
groups:
  - name: custom
    rules:
      - record: node_memory_used_percent
        expr: ceil(100 - (100 * node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes))
groups:
  - name: Alerts
    rules:
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Instance Down - [{{ $labels.instance }}]"
          description: "Instance Down - [{{ $labels.instance }}]"
How logs ship from a program to log files
{
"level": "INFO",
"message": "Hello World",
"date": "2020-10-05"
}
2020-10-05 - INFO - Hello World
All in One node
All in Multiple node
Production Grade Cluster
node {
}
pipeline {
}
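A minimal declarative pipeline sketch filling in the pipeline { } skeleton above (stage content is illustrative):

```groovy
pipeline {
    agent any
    stages {
        stage('Hello') {
            steps {
                echo 'from a declarative pipeline'
            }
        }
    }
}
```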
| Condition | Previous State | Current State |
|---|---|---|
| always | N/A | any |
| changed | any | any change |
| fixed | failure | successful |
| regression | successful | failure, unstable, aborted |
| aborted | N/A | aborted |
| failure | N/A | failed |
| success | N/A | success |
| unstable | N/A | unstable |
| unsuccessful | N/A | unsuccessful |
| cleanup | N/A | N/A |
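These conditions are used inside a pipeline's post block; a minimal sketch:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                echo 'building...'
            }
        }
    }
    post {
        success { echo 'current build succeeded' }
        failure { echo 'current build failed' }
        fixed   { echo 'previous build failed, this one succeeded' }
        always  { echo 'runs regardless of status' }
    }
}
```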
Usually, developers do not work on a single branch; they work with multiple branches. Hence single-branch pipeline jobs are not enough for development requirements, and we need multibranch pipelines: whenever a branch is created, the corresponding job is created automatically, without any effort from the Developer or DevOps. Each branch becomes a separate pipeline in a multibranch pipeline.
Along with this, software is usually released with some version number, and with Git that is done through tags. When a tag is created in the Git repo, the multibranch pipeline creates a job for that tag and releases the software accordingly.
Jenkins init.d scripts
(Installation & Configuration)
Jenkins Server
Job
Job
Jenkins Job Creation
(SEED JOBS)
Declarative Pipeline Code
Keep Code Dry
(Shared Libraries)
Jenkins
Standalone
Box
Note: Since we are in a lab, keeping cost in mind, we may use a single node going forward.
Add node from Jenkins Server
Node reaches the Jenkins server and adds itself
Jenkins: MajorVersion.WeekNumber, using Git tags
Ansible: using Git tags
Docker: using Git tags
Decide a custom number: a manual Git tag is created, and the pipeline detects the tag and makes an artifact.
Get the number from the CI system: the pipeline uses that number as input and makes the artifact with it.
Sprint Number
Use Git Tags for Releases
Tag
MajorVersion.SprintNumber.ReleaseNumberInSprint
Ex: 1.1.4
Master / Main : Compile --> Check Code Quality --> Lint Checks & Unit Tests
Dev branch : Compile --> Check Code Quality
Tag : Compile --> Check Code Quality --> Lint Checks & Unit Tests --> Prepare & Publish Artifact
Unit testing means testing individual modules of an application in isolation (without any interaction with dependencies) to confirm that the code is doing things right.
| Language | Compile | Packaging | Download Dependencies (CI) | Download Dependencies (Server) |
|---|---|---|---|---|
| Java | yes | yes | auto (while compiling) | no |
| GoLang | yes | yes | yes | no |
| NodeJS | no | no | yes | no |
| Python/Ruby | no | no | no | yes |
| PHP | no | no | no | yes |
If all our servers are not public (i.e., they have no public IP address), then how do we reach them?
We go with HCL (HashiCorp Configuration Language).
resource "aws_instance" "web" {
ami = "ami-a1b2c3d4"
instance_type = "t2.micro"
}
output "instance_ip_addr" {
value = aws_instance.web.private_ip
}
output "sample" {
value = "Hello World"
}
data "aws_ami" "example" {
executable_users = ["self"]
most_recent = true
name_regex = "Centos-8-DevOps-Practice"
}
output "AMI_ID" {
value = data.aws_ami.example.id
}
variable "sample" {}
variable "sample1" {
default = "Hello World"
}
output "sample" {
value = var.sample
}
# String Data type
variable "sample1" {
default = "Hello World"
}
# Number data type
variable "sample2" {
default = 100
}
# Boolean Data type
variable "sample3" {
default = true
}
Terraform supports three primitive data types: string, number, and bool.
String data should be quoted in double quotes, whereas numbers and booleans need not be.
Terraform only supports double quotes, not single quotes.
variable "sample" {
default = "Hello"
}
Default Variable Type
List Variable Type
variable "sample" {
default = [
"Hello",
1000,
true,
"World"
]
}
Map Variable Type
variable "sample" {
default = {
string = "Hello",
number = 100,
boolean = true
}
}
Terraform allows mixing different data types in a single list or map variable; the elements need not be of the same type.
terraform {
required_providers {
prod = {
source = "hashicorp/aws"
version = "1.0"
}
dev = {
source = "hashicorp/aws"
version = "2.0"
}
}
}
provider "aws" {}
provider "azurerm" {}
resource "aws_instance" "sample" {
ami = "ami-052ed3344670027b3"
instance_type = "t2.micro"
}
output "public_ip" {
value = aws_instance.sample.public_ip
}
resource "aws_instance" "sample" {
ami = "ami-052ed3344670027b3"
instance_type = "t2.micro"
tags = {
Name = "Sample"
}
}
output "public_ip" {
value = aws_instance.sample.public_ip
}
Q: What happens if we run terraform apply multiple times? Does it create multiple resources again and again?
A: No.
Q: How?
A: The Terraform state file.
Q: What is it?
A: When Terraform applies, it stores information about the resources it has created in a file, called the state file.
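A quick way to see this behavior (commands only; assumes an already configured project):

```bash
terraform apply        # first run: creates the resources and records them in terraform.tfstate
terraform apply        # second run: compares state with real infrastructure, reports no changes
terraform state list   # inspect which resources the state file is tracking
```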
module "sample" {
source = "./ec2"
}
module "consul" {
source = "github.com/hashicorp/example"
}
resource "aws_instance" "sample" {
count = 3
ami = var.AMI[count.index]
instance_type = var.INSTANCE_TYPE
}
resource "aws_instance" "sample" {
count = condition ? true_val : false_val
ami = var.AMI[count.index]
instance_type = var.INSTANCE_TYPE
}
# count = var.CREATE ? 1 : 0
# count = var.ENV == "PROD" ? 1 : 0
variable "CREATE" {
default = true
}
variable "image_id" {
type = string
description = "The id of the machine image (AMI) to use for the server."
validation {
condition = length(var.image_id) > 4 && substr(var.image_id, 0, 4) == "ami-"
error_message = "The image_id value must be a valid AMI id, starting with \"ami-\"."
}
}
This way, though we start with one project, if new projects come into the picture for our company tomorrow, we can share the Terraform code between these projects, and even across multiple projects.
Q: How to size the network?
A: Based on the number of IPs we need.
That means we need to forecast how many machines and IPs will come into existence.
Q: Can we really forecast exactly how many are needed?
A: Yes, we may do an approximate calculation.
Q: Is there any way to start without forecasting, while at the same time not hitting the limitations?
A: Yes. We can take all the IPs available and consume whatever is needed. This way we do not need to forecast and can start the setups immediately.
| Class | Public Range (Internet) | Private Range (Intranet) | Number of IPs |
|---|---|---|---|
| A | 1.0.0.0 to 127.0.0.0 | 10.0.0.0 to 10.255.255.255 | 16777216 |
| B | 128.0.0.0 to 191.255.0.0 | 172.16.0.0 to 172.31.255.255 | 1048576 |
| C | 192.0.0.0 to 223.255.255.0 | 192.168.0.0 to 192.168.255.255 | 65536 |
| D | 224.0.0.0 to 239.255.255.255 | | |
| E | 240.0.0.0 to 255.255.255.255 | | |
PROD
STAGE
PERF
DEV
10.100.0.0/16
| Subnet | Number of IPs | Number of Subnets |
|---|---|---|
| /16 | 65536 | 256 |
| /17 | 32768 | 512 |
| /18 | 16384 | 1024 |
| /19 | 8192 | 2048 |
| /20 | 4096 | 4096 |
| /21 | 2048 | 8192 |
| /22 | 1024 | 16384 |
| /23 | 512 | 32768 |
| /24 | 256 | 65536 |
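The "Number of IPs" column is just 2^(32 - prefix); a quick check in the shell:

```bash
prefix=20
echo $((2 ** (32 - prefix)))   # 4096 -> number of IPs in a /20
```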
Class A Subnets
10.90.0.0/20
10.80.0.0/20
10.10.0.0/22
VPC Peering
Our interest here is running this application; the real question is how much we can reduce the effort required to run it.
| | Physical Machines | Virtual Machines | Containers |
|---|---|---|---|
| Availability | Very low, because if the physical machine is down the app is down, unless you maintain more machines. | Good, because VMs can move from one machine to another in case of H/W failures. | Best; another container is immediately created somewhere in the cluster, causing minimal downtime. |
| Maintenance (Deployments) | CM tools -> Mutable | CM tools -> Immutable | Immutable |
| Cost (incl. Operations) | $$$$ | $$$ | $ |
Containers are a Linux (kernel) feature.
That means they work only on a Linux OS.
The Linux kernel offers control groups and namespaces; containers run on top of these features, and their security also comes from them.
Containers are capable of consuming all the resources of the OS, so we need to control or limit the resources; that part is done by control groups (cgroups).
Namespaces are meant to isolate resources. For example, netns is the network namespace, which isolates the network for containers.
The whole ecosystem of Docker is quite simple; in particular, the Docker imaging part is really simple and great.
Docker containers are simpler to manage and expose their features for interaction over APIs.
Any Container Runtime (Docker Uses ContainerD) Operations are categorized as
1. Low Level Runtime
2. High Level Runtime
To run any docker container it needs an Image
docker images
docker pull image-name
docker rmi image-name
Any Docker image we pull has a certain version. By default, it pulls the latest image. In Docker terminology that version is called a tag; latest is the default tag a Docker image uses.
Tags are used for the software release strategy.
docker images
docker pull image-name:tag
docker rmi image-name:tag
docker run -d nginx
docker ps
docker ps -a
docker rm
docker rm -f
docker rm -f $(docker ps -a -q)
docker exec -it <ID> bash
INSTRUCTION arguments
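A minimal Dockerfile sketch in this INSTRUCTION-arguments format (index.html is a hypothetical file in the build context):

```dockerfile
# Minimal sketch: serve a static page with nginx
FROM nginx
# index.html is assumed to exist next to this Dockerfile
COPY index.html /usr/share/nginx/html/
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```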
Kubernetes is an orchestrator.
1. Backed by CNCF
2. Lots of community support
3. It is open source and free
4. Better than Docker Swarm; solves certain problems with networking and storage
5. Cloud native
Master
Compute
One Server
kubectl cluster-info
kubectl get nodes
kubectl get nodes -o wide
kubectl api-versions
kubectl api-resources
kubectl --help
Containers
POD
apiVersion: v1
kind: Pod
metadata:
  name: sample1
spec:
  containers:
    - name: nginx
      image: nginx
It is used for stateless applications.
Used to scale the pods.
StatefulSets are for stateful applications, like DBs.
DaemonSets are pods that run on each and every node, to run some process on every node.
Ex: Prometheus metrics
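A minimal Deployment sketch for scaling a stateless pod (the name and replica count are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample1
spec:
  replicas: 3               # scale the pod to 3 copies
  selector:
    matchLabels:
      app: sample1
  template:
    metadata:
      labels:
        app: sample1
    spec:
      containers:
        - name: nginx
          image: nginx
```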
[Diagram: EKS cluster with FRONTEND in one private subnet and CART, CATALOGUE, USER, SHIPPING, PAYMENT in another; backed by RDS, DocDB, ElastiCache (EC), and RabbitMQ]
[Diagram: the same EKS setup, with traffic flowing from the INTERNET through an INGRESS]
What does Ingress do?
[Diagram: an Ingress Controller pod running on EKS EC2 nodes creates an ALB (load balancer); traffic reaches pods (containers C1, C2) through published ports and a Service]
| Service Type | General | RoboShop |
|---|---|---|
| ClusterIP | This is internal; it is used for internal pod communication. This is the default in Kubernetes. | Yes. Frontend needs to talk to catalogue; that happens internally, and we use ClusterIP. |