26 Sep, 2021
Myanmar LoCo Team
Ko Ko Ye
2003-2004: Birth of the Borg System
- Google introduced the Borg System around 2003-2004. It started off as a small-scale project of about 3-4 people, initially working in collaboration with a new version of Google's search engine. Borg was a large-scale internal cluster management system, which ran hundreds of thousands of jobs, from many thousands of different applications, across many clusters, each with up to tens of thousands of machines.
2013: From Borg to Omega
- Following Borg, Google introduced the Omega cluster management system, a flexible, scalable scheduler for large compute clusters.
2015: The year of Kube v1.0 & CNCF
- July 21: Kubernetes v1.0 is released. Google partners with the Linux Foundation to form the Cloud Native Computing Foundation (CNCF).
- November 3: The Kubernetes ecosystem continues to grow! Companies that joined include Deis, OpenShift, Huawei, and Gondor.
- November 9: Kubernetes 1.1 is released.
- November 9-11: KubeCon 2015, the inaugural community Kubernetes conference, is held in San Francisco.
2016: The Year Kubernetes Goes Mainstream!
2017: The Year of Enterprise Adoption & Support
- DigitalOcean
Embedded Kubernetes – try a Raspberry Pi cluster
ARM or Intel. Standalone or cluster. Minimal space, maximum edge.
Under the cell tower. On the racecar. On satellites or everyday appliances, MicroK8s delivers the full Kubernetes experience on IoT and micro clouds.
Fully containerized deployment with compressed over-the-air updates for ultra-reliable operations.
Nvidia auto-detection with CUDA at the ready
Pass GPUs to docker apps for deep learning. Define AI pipelines with Kubeflow on your workstation.
We work with Amazon, Azure, Google, Oracle and IBM to simplify multi-cloud GPU enablement. Build and test locally on MicroK8s, then deploy to EKS, AKS or GKE with confidence.
Tracing. Metrics. Service Mesh. Registry.
Prometheus is popular for metrics, so we bundled it. Just like Jaeger, Istio, Linkerd and Knative.
Turn them on or off with one command.
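As a sketch of that one-command workflow (addon names vary between MicroK8s releases, so check `microk8s status` for what your version offers):

```shell
# List available addons and whether each is enabled
microk8s status

# Enable metrics and tracing addons with one command each
microk8s enable prometheus
microk8s enable jaeger

# Turning an addon off is just as simple
microk8s disable jaeger
```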
Automatic security updates
Let it roll, or take control
Choose stable security releases only, or try release candidates, betas and daily builds. MicroK8s can update automatically, with rollback on failure.
Stick with a major version, or follow the latest upstream work. Go with the flow, or take control in the enterprise to specify versions with perfect precision.
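MicroK8s is delivered as a snap, so the version-tracking described above comes down to choosing a snap channel. A quick sketch (the version numbers are only examples; pick the track you need):

```shell
# See which tracks/channels are available for MicroK8s
snap info microk8s

# Pin to a specific Kubernetes minor version, stable releases only
sudo snap refresh microk8s --channel=1.18/stable

# Or follow the latest upstream work
sudo snap refresh microk8s --channel=latest/edge
```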
Safe and easy CI/CD
Docker app developers love pipelines
So your CI/CD machine spins up a clean VM for each test run? Just install MicroK8s at the top of your script for a crisp, clean K8s to run your tests.
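A minimal CI script under that assumption: install MicroK8s at the top, deploy, test, tear down. The manifest path, deployment name, and test command are placeholders for your own pipeline:

```shell
#!/bin/sh
set -e

# Fresh VM: install MicroK8s and wait for the cluster to come up
sudo snap install microk8s --classic
sudo microk8s status --wait-ready

# Deploy the app under test (manifests/ and my-app are hypothetical)
sudo microk8s kubectl apply -f manifests/
sudo microk8s kubectl rollout status deployment/my-app

# Run the test suite against the cluster, then clean up
./run-tests.sh
sudo snap remove microk8s
```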
Download the MicroK8s Installer
Run the installer
Open a command line
brew install ubuntu/microk8s/microk8s
# Check the status while Kubernetes starts
microk8s status --wait-ready

# Turn on the services you want
microk8s enable dashboard dns registry istio
# microk8s enable --help
# microk8s disable --help

# Start using Kubernetes!
microk8s kubectl get all --all-namespaces

# Access the Kubernetes dashboard
microk8s dashboard-proxy

# Start and stop Kubernetes
microk8s start
microk8s stop
$ multipass launch --name microk8s-vm --mem 4G --disk 40G
$ multipass list
# Output
Name          State    IPv4            Release
microk8s-vm   RUNNING  10.72.145.216   Ubuntu 18.04 LTS
$ multipass shell microk8s-vm
sudo snap install microk8s --classic --channel=1.18/stable
sudo iptables -P FORWARD ACCEPT
# Get shell inside VM
multipass shell microk8s-vm

# Shutdown VM
multipass stop microk8s-vm

# Delete and cleanup the VM
multipass delete microk8s-vm
multipass purge
# Edit the boot parameters
sudo vi /boot/firmware/cmdline.txt

# Enable the memory cgroup by adding the following:
cgroup_enable=memory cgroup_memory=1
k0s is an all-inclusive Kubernetes distribution, configured with all of the features needed to build a Kubernetes cluster simply by copying and running an executable file on each target host.
- Available as a single static binary
- Offers a self-hosted, isolated control plane
- Supports a variety of storage backends, including etcd, SQLite, MySQL (or any MySQL-compatible database), and PostgreSQL.
- Offers an Elastic control plane
- Vanilla upstream Kubernetes
- Supports custom container runtimes (containerd is the default)
- Supports custom Container Network Interface (CNI) plugins (calico is the default)
- Supports x86_64 and arm64
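The copy-and-run workflow above can be sketched for a single-node cluster as follows (commands follow the k0s quick-start; adjust the channel and flags for your release):

```shell
# Download the k0s binary via the official install script
curl -sSLf https://get.k0s.sh | sudo sh

# Install and start a single-node controller (control plane + worker)
sudo k0s install controller --single
sudo k0s start

# Use the bundled kubectl to verify the node comes up
sudo k0s kubectl get nodes
```

For a multi-node cluster, the same binary is copied to each host; workers join with a token generated by `k0s token create` on the controller.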