Abdullah Fathi
Have you ever heard the famous phrase
"It works on my machine"?
With Docker, that is no longer an excuse. Docker lets you run locally the same (or nearly the same) environment that will be used in production.
Docker makes it really easy to install and run software without worrying about setup or dependencies
Windows 10 & 11 users will be able to install Docker Desktop if their computer supports the Windows Subsystem for Linux (WSL).
Register for a DockerHub account
Visit the link below to register for a DockerHub account (this is free)
Download and install all pending Windows OS updates
Run the WSL install script
Note - If you have previously enabled WSL and installed a distribution you may skip to step #7
Open PowerShell as Administrator and run: wsl --install
This will enable and install all required features as well as install Ubuntu.
Official documentation:
https://docs.microsoft.com/en-us/windows/wsl/install#install-wsl-command
Reboot your computer
Set a Username and Password in Ubuntu
After the reboot, Windows will auto-launch your new Ubuntu OS and prompt you to set a username and password.
Manually Installing a Distribution
If for some reason Windows did not prompt you to create a distribution or you simply would like to create a new one, you can do so by running the following command:
wsl --install -d Ubuntu
Install Docker Desktop
Navigate to the Docker Desktop installation page and click the Docker Desktop for Windows button:
Double-click the Docker Desktop Installer from your Downloads folder
Click "Install anyway" if warned the app isn't Microsoft-verified
Click "OK" to Add a shortcut to the Desktop
The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and volumes
The Docker client (docker) is the primary way that many Docker users interact with Docker
A Docker registry stores Docker images. Docker Hub is a public registry that anyone can use, and Docker is configured to look for images on Docker Hub by default
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customization.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualization technologies.
Read-only templates used to build containers. Images contain the application code, libraries, tools, dependencies, and other files needed to run applications.
A CONTAINER is a running environment for an IMAGE. It adds a thin writable layer (a virtual file system) on top of the image's read-only layers.
[Diagram: Docker vs. Virtual Machine — containers run applications directly on the host's OS kernel and hardware, while each virtual machine bundles its own applications and OS kernel on top of the hardware.]
Size: Docker images are much smaller
Speed: Docker containers start and run much faster
Compatibility: a VM of any OS can run on any host OS, while a Docker container must be compatible with the host's kernel
# Pull Image from Docker Hub
docker pull nginx
# Check Existing Images
docker images
# Tags
docker pull nginx:1.24-alpine
# Run Image
# Start image in a container
docker run nginx
# Show status of running containers
docker ps
# Run container in detached mode
docker run -d nginx
# Restart container
docker stop <container id>
docker start <container id>
# List all containers (running and stopped)
docker ps -a

[Diagram: port binding — multiple containers may listen on the same internal port (e.g. 5000 or 3000), but each must be bound to a unique port (e.g. 5000, 3000, 3001) on the Host.]
# Docker Port Binding Between Host and Container
docker run -d -p <host port>:<container port> <image>
# docker run -d -p 6000:6379 redis
docker run -d -p 80:80 nginx
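As a quick sanity check, a port binding can be verified from the host with curl; the container name `web` and host port 8080 here are illustrative (this sketch assumes a local Docker daemon is running):

```shell
# Bind host port 8080 to the container's port 80
docker run -d -p 8080:80 --name web nginx

# Request the page through the host port; nginx should answer with headers
curl -I http://localhost:8080

# Clean up the test container
docker stop web && docker rm web
```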
# Debugging Container
docker logs <container id/container name>
# Display last part of Log
docker logs <container id/container name> | tail
# Stream the logs
docker logs <container id/container name> -f
# Assign container name
docker run -d -p 6001:6379 --name redis-older redis:4.0
# Get to Terminal of Running Container (Interactive Terminal)
docker exec -it <container id/container name> /bin/bash
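A typical debugging session with `docker exec` might look like the sketch below, assuming the `mongodb` container from earlier examples is running (note that Alpine-based images often ship only `/bin/sh`, not `/bin/bash`):

```shell
# Open an interactive shell inside the running container
docker exec -it mongodb /bin/bash

# Inside the container you could then run, for example:
#   env | grep MONGO     # check the environment variables
#   ls /data/db          # inspect the data directory
#   exit                 # leave the container shell
```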
# View auto generated docker network
docker network ls
# Create docker network
docker network create <network name>
docker network create mongo-network

A user-defined (custom) bridge network allows containers on the same host to communicate with each other through IP addresses or container names.
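A minimal sketch of name-based communication on a custom bridge; the network name `app-net` and container name `db` are illustrative:

```shell
# Containers on the same user-defined bridge resolve each other by name
docker network create app-net
docker run -d --network app-net --name db redis

# A second container reaches the first via the container name "db";
# redis-cli should print PONG if the server is up
docker run --rm --network app-net redis redis-cli -h db ping
```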
Containers share the network namespace with the host and use the host's networking directly.
# HOST network is already created by default
# We can directly use it as below
docker run -dit --rm --network host --name myws nginx

docker run -d \
  -p 27017:27017 \
  -e MONGO_INITDB_ROOT_USERNAME=admin \
  -e MONGO_INITDB_ROOT_PASSWORD=password \
  --name mongodb \
  --net mongo-network \
  mongo

# https://docs.docker.com/compose/compose-file/04-version-and-name/
services:
  mongodb:
    image: mongo
    container_name: mongo
    ports:
      - 27017:27017 # HOST:CONTAINER
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password

Docker Compose takes care of creating a common network
Docker Run Command
docker-compose.yaml
# Start container using docker-compose file
docker-compose -f mongo.yaml up -d
# Stop the containers and network using docker compose
docker-compose -f mongo.yaml down

A Dockerfile is simply a text file with instructions that Docker interprets to build an image. It is usually short and primarily consists of individual lines that build on one another.
# The first line of every Dockerfile starts by basing it on another image
# Generates a layer based on Node.js + Alpine
FROM node:13-alpine
# Optionally define environment variables
# Previously we defined them in Docker Compose, which is recommended,
# so we don't have to rebuild the image when an environment variable changes
ENV MONGO_DB_USERNAME=admin \
MONGO_DB_PWD=password
# RUN: execute any LINUX command / Constructs your container
# The directory is created INSIDE the container
RUN mkdir -p /home/app
# COPY: copies files from the HOST machine into the image
COPY . /home/app
# Executes an entrypoint LINUX command
# Specifies which command should be executed within the container
CMD ["node", "server.js"]
# Differences between RUN and CMD:
# RUN executes at build time; you can have multiple RUN commands
# CMD is the entrypoint command: it marks what to execute when the container starts
# Only one CMD takes effect per Dockerfile
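One practical consequence of this difference: RUN results are baked into the image at build time, while CMD is only a run-time default that any arguments after the image name replace (the image tag and `debug.js` below are hypothetical):

```shell
# Uses the CMD from the Dockerfile: node server.js
docker run my-app

# Overrides CMD for this run only, without rebuilding the image
docker run my-app node debug.js
```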
What each instruction in the Dockerfile above does:
- FROM: install node 13
- ENV: set MONGO_DB_USERNAME=admin and MONGO_DB_PWD=password
- RUN: create the /home/app folder
- COPY: copy the current folder's files to /home/app
- CMD: start the app with "node server.js"

Image Environment Blueprint
Dockerfile

| Instruction (tells Docker what to do) | Argument to the instruction |
|---|---|
| FROM | node:14-alpine |
| COPY | . /home/app |
| CMD | ["node", "server.js"] |
FROM node
COPY package*.json ./
RUN npm install
COPY . .
CMD ["node", "index.js"]

FROM: Link to Base Image
COPY: Copy files to target path
RUN: Runs command(s) in build step
CMD: Command that gets run on container start-up
Dockerfile
| COMMAND | DESC |
|---|---|
| FROM image|scratch | base image for the build |
| MAINTAINER email | name of the maintainer (metadata; deprecated in favor of LABEL) |
| COPY path dst | copy path from the context into the container at location dst |
| ADD src dst | same as COPY but able to untar archives and accepts http urls |
| RUN args... | run an arbitrary command inside the container |
| USER name | set the default username |
| WORKDIR path | set the default working directory |
| CMD args... | set the default command |
| ENV name value | set an environment variable |
docker build -t my-app:1.0 .
# Rebuild Image
# 1) Delete container
docker rm <container id>
# 2) Delete Docker Image
docker rmi <image id>
# 3) Build New Image
docker build -t my-app:1.1 .

BuildKit hides away much of its progress output, which the legacy builder did not do.
To see this output, you will want to pass the progress flag to the build command:
docker build --progress=plain .
Additionally, you can pass the no-cache flag to disable any caching:
docker build --no-cache --progress=plain .
To disable Buildkit, you can just pass the following variable to the build command:
DOCKER_BUILDKIT=0 docker build .
# Use node:16-alpine image as a parent image
FROM node:16-alpine
# Create app directory
WORKDIR /usr/src/app
# Copy package.json files to the working directory
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the source files
COPY . .
# Build the React app for production
RUN npm run build
# Expose port 3000 for serving the app
EXPOSE 3000
# Command to run the app
CMD ["npm", "start"]

# Stage 1 - Building the application
FROM node:16-alpine AS build
# Create app directory
WORKDIR /usr/src/app
# Copy package.json files to the working directory
COPY package*.json ./
# Install app dependencies
RUN npm install
# Copy the source files
COPY . .
# Build the React app for production
RUN npm run build
# Stage 2 - Serve the application
FROM nginx:alpine
# Copy build files to Nginx
COPY --from=build /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Image size: 525MB (single-stage build) vs 43.8MB (multi-stage build)
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install -g @angular/cli@7.1.4
RUN npm install
COPY . .
EXPOSE 4200
CMD ["ng", "serve", "--host", "0.0.0.0"]

### STAGE 1: Build ###
FROM node:12.7-alpine AS build
WORKDIR /usr/src/app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build
### STAGE 2: Run ###
FROM nginx:1.17.1-alpine
COPY nginx.conf /etc/nginx/nginx.conf
COPY --from=build /usr/src/app/dist/aston-villa-app /usr/share/nginx/html
Image size: 506MB (single-stage build) vs 43.8MB (multi-stage build)
# The server needs to log in to pull from a PRIVATE repository
docker login -u <username>
# private registry
# docker login https://hub.fotia.com.my -u <HARBOR_USER> -p <HARBOR_TOKEN>
# Push image to private container registry
docker push hub.osdec.gov.my/<project-name>:<image tag>
# Pull image from private container registry
docker pull hub.osdec.gov.my/<project-name>:1.0

Image naming convention: registryDomain/imageName:tag
In DockerHub:
In Harbor (Private Registry):
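Before pushing to a private registry, a local image has to be retagged so its name includes the registry domain; `hub.example.com` and `my-project` below are placeholders following the registryDomain/imageName:tag convention:

```shell
# Retag the local image with the registry domain and project path
docker tag my-app:1.0 hub.example.com/my-project/my-app:1.0

# Push the retagged image (requires a prior docker login to that registry)
docker push hub.example.com/my-project/my-app:1.0
```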
Data Persistence
| Feature | Bind Mount / Host Volume | Named Volumes |
|---|---|---|
| Path Specification | Absolute path on the host file system | Docker-managed location |
| Management | Managed manually via the host OS | Managed via Docker CLI and API |
| Data Persistence | Depends on the host directory/file | Designed for persistent storage |
| Isolation | Less isolation (direct access to host FS) | More isolation (abstracted from host FS) |
| Performance | Varies based on host and container FS | Generally consistent, managed by Docker |
| Use Case Suitability | Development, direct host FS access | Production, data persistence, databases |
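The table above translates into two different `-v` syntaxes on the command line; the host path and volume name here are illustrative:

```shell
# Bind mount: an absolute host path that you manage yourself
docker run -d -v /home/user/mongo-data:/data/db mongo

# Named volume: Docker creates and manages the storage location
docker volume create mongo-data
docker run -d -v mongo-data:/data/db mongo

# See where Docker placed the named volume on the host
docker volume inspect mongo-data
```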
version: '3'
services:
  mongodb:
    image: mongo
    ports:
      - 27017:27017
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    # Volumes
    volumes:
      - mongo-data:/data/db
  mongo-express:
    image: mongo-express
    ports:
      - 8080:8081
    environment:
      - ME_CONFIG_MONGODB_ADMINUSERNAME=admin
      - ME_CONFIG_MONGODB_ADMINPASSWORD=password
      - ME_CONFIG_MONGODB_SERVER=mongodb

volumes:
  mongo-data:
    driver: local

Volume locations on the host:
Windows - \\wsl$\docker-desktop\mnt\docker-desktop-disk\data\docker\volumes
Linux - /var/lib/docker/volumes
MacOS - /var/lib/docker/volumes
Your feedback matters