18 Jan 2017
Product Manager, 이해승
Changes in Server Operations
Virtual Machine, Hypervisor, ... more deeply
Docker
You all know this material already .....
What I really want to talk about today:
Platform as a Service
Containerized Application Virtualization
Re-definition of Deployment by Docker
State
상태 (Korean for "state")
Dictionary definition: the condition or circumstances in which a thing or phenomenon exists
stateful/stateless - protocol
Deployment Process
Git Clone → Install Go or Python → Bundle Install → Configuration → Env Vars → Add User → Authorization → Proxy Setting → Firewall → Network → Dependencies → Run App!!
State Change during Deployment
Practically, numerous applications run on a server
(diagram: servers each running apps A, B, C)
New Application - D
(diagram: app D must be rolled out alongside A, B, C on every server)
Deployment Process - Normalization
(diagram: apps A, B, C, D across servers)
Therefore, deployment means bringing all applications to a running or runnable state.
... it means "maintaining the 'state' in which all applications are running or runnable."
Maintaining global state = state management
(diagram: a fleet of instances; some run apps A/B/D, others are dedicated to app C)
Scale Out
(diagram: additional instances running app C are added)
Resource Monitoring & Auto Scaling
(diagram: app C instances are launched from pre-built AMIs, and auto scaling adds or removes instances as load changes)
==> Simplified state management!
==> How can we manage state even better?
==> Maximize the use of images and instances
State Changes via Commands
CMD: Install Nginx, Setup Firewall, Install Python
BaaS service based on Chef (KT Hitel)
Cookbook1 / Cookbook2: Install Nginx, Setup Firewall, Install Python
(diagram: the same cookbooks applied repeatedly across servers)
CMD
$ apt-get install apache2
Chef Cookbook
package 'httpd'
service 'httpd' do
action [ :start, :enable ]
end
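The cookbook snippet above can be run standalone with `chef-apply`, which applies a single recipe without a Chef server. A minimal sketch, assuming the Chef client is installed; the recipe file name is illustrative:

```shell
# Save the recipe shown above to a file (name is illustrative).
cat > webserver.rb <<'EOF'
package 'httpd'
service 'httpd' do
  action [ :start, :enable ]
end
EOF
# chef-apply runs a single recipe locally, no Chef server needed.
sudo chef-apply webserver.rb
```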
Hard to adopt initially
High learning cost
Much trial and error
Slow execution
Simplifies server operations
Application vs. Server
(diagram: one server running applications A, B, C, each at its own version (ver x.xxx))
The application is the unit of deployment
Heroku provides the deployment environment
Push & Build → A (ver x.xxx) → Deployment
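The "Push & Build" step above corresponds to a single git push in Heroku's git-based workflow of the time. A sketch; the app name is illustrative:

```shell
# Create the app; Heroku adds a git remote named "heroku".
heroku create my-app          # app name is illustrative
# Pushing triggers a build (buildpack detection, dependency install)
# and, on success, deploys the new release.
git push heroku master
```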
1:1 matching between an application and its deployment unit
An isolated deployment environment per application
(diagram: app A ↔ its own deployment environment, app B ↔ its own deployment environment, each 1:1)
Scale-in/out supported out of the box
(diagram: multiple copies of app A running side by side)
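On Heroku, scaling in or out is a single command. A sketch; the dyno counts are illustrative:

```shell
# Scale out: run 4 web dynos instead of 1.
heroku ps:scale web=4
# Scale in again.
heroku ps:scale web=1
```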
Application Updates - conventional
(diagram: the app is updated in place: xx1 → xx2 → xx3)
Application Updates - Rollback
(diagram: after updates xx1 → xx2 → xx3, rollback means re-applying xx1 in place)
Application Updates - on Heroku
Push & Build → A (ver x.xx1)
Push & Build → A (ver x.xx2)
Push & Build → A (ver x.xx3)
Application Updates - Rollback on Heroku
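Rollback on Heroku reuses an already-built release instead of rebuilding in place. A sketch; the release number is illustrative:

```shell
# List releases: every "Push & Build" created one (v1, v2, v3, ...).
heroku releases
# Roll back to a previous release; no rebuild is needed,
# because the built artifact for v2 is still stored.
heroku rollback v2
```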
Simplified Deployment
Application development → environment setup → deployment
(diagram: app A is built once and deployed to instances)
Simplified Scaling
Application development → environment setup → deployment → imaging → scaling
(diagram: the imaged app A is replicated across instances)
Cost
Applications that don't fit a PaaS
(diagram: app A (ver x.xxx) fails with "Build Error!")
Work that depends on state changes
State X → Command (collect logs, modify files, install a library) → State X+1
What about a PaaS?
Immutable (stateless): commands such as collecting logs, modifying files, or installing a library cannot move State X to State X+1.
==> This is the defining characteristic of container-based virtualization!!!
Heroku Dyno
= Linux Container (LXC),
"chroot on steroids"
Chroot
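chroot, the ancestor of the container, changes what a process sees as the filesystem root. A minimal sketch (requires root; all paths are illustrative) showing that it isolates only the filesystem, not processes or the network:

```shell
# Build a tiny root filesystem containing just a shell and its libraries.
mkdir -p /tmp/jail/bin
cp /bin/sh /tmp/jail/bin/
# Copy the shared libraries the shell needs (ldd lists them).
for lib in $(ldd /bin/sh | grep -o '/[^ ]*'); do
  mkdir -p "/tmp/jail$(dirname "$lib")"
  cp "$lib" "/tmp/jail$lib"
done
# The shell started here sees /tmp/jail as "/".
sudo chroot /tmp/jail /bin/sh
```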
Linux Container(LXC)
Linux Container (LXC)
Shares the host kernel
Isolates the process execution environment
(diagram: one host running several containers)
Process Isolation
Only specific processes run inside each isolated environment
Linux Container
Focuses on the isolated environment itself, i.e. the container in its internal sense
Docker
LXC: process isolation
AuFS: layered storage
Docker Index: image sharing
∴ an extended toolset for Linux Containers
Heroku Dynos ≈ Docker containers?
Almost the same? Is Docker just a wrapper around LXC?
The Architecture of Docker
Docker today (latest: 1.13)
Process isolation: LXC → libcontainer
Layered storage: AuFS → DeviceMapper, Btrfs
Image sharing: Docker Index → Docker Hub, plus private and 3rd-party registries/repositories
Boot2Docker
Systems Supporting Docker
Ubuntu, Red Hat Enterprise Linux, Oracle Linux, CentOS, Debian, Gentoo, Google Cloud Platform, Rackspace Cloud(Carina), Amazon EC2, IBM Softlayer, Microsoft Azure, Arch Linux, Fedora, OpenSUSE, CRUX linux, Microsoft Windows/Windows server, Mac OS X
What Sets Docker Apart
(diagram: applications A through E)
Image: a unique environment in which a specific application can run
Container: an isolated process that runs on top of a specific image
Build Image
State change: installing Git
The state changes after Git is installed
diff reveals the difference between the base image and the container
commit creates a new image
The new state is saved as an image
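The diff-then-commit flow above looks like this with the Docker CLI. A sketch; image and container names are illustrative:

```shell
# Start a container from a base image and install Git inside it.
docker run -it --name mybox ubuntu bash
#   (inside the container)  apt-get update && apt-get install -y git
#   exit
# Show what changed relative to the base image (A=added, C=changed).
docker diff mybox
# Save the new state as a new image.
docker commit mybox ubuntu-with-git
```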
Dockerfile
A Docker-specific DSL that describes the image-build process
Building an image from a Dockerfile
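A minimal Dockerfile expressing the same Git-install example declaratively. A sketch; the base image and package are illustrative:

```dockerfile
# Describe the image-build process as code.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y git
CMD ["git", "--version"]
```

`docker build -t ubuntu-with-git .` produces the image; `docker run ubuntu-with-git` then runs it on any Docker host, regardless of the host environment.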
The built image
can run regardless of the host environment!
==> So the application is already in a runnable state. Is the deployment already done?
An open-source distribution culture powered by Docker
Build once, run anywhere
Image creation and sharing built in
An official registry service
A private-registry application
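Sharing works through either the official registry or a self-hosted one. A sketch; image names, user name, and port are illustrative:

```shell
# Push an image to the official registry (Docker Hub).
docker tag myapp myuser/myapp:1.0
docker push myuser/myapp:1.0

# Or run the private-registry application yourself,
# itself distributed as a Docker image.
docker run -d -p 5000:5000 --name registry registry:2
docker tag myapp localhost:5000/myapp:1.0
docker push localhost:5000/myapp:1.0
```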
Container vs. Docker
: focuses on the portability of a standardized container
LXC vs. Docker
: LXC == a tool for isolating processes
: Docker == a tool for shipping containers
The freedom of IaaS
The simplicity of PaaS
--> Containers on a CDN?
(diagram: applications A through E)
Unit of deployment: Docker images
- the application's execution environment
- the application itself
Deployment target: Docker servers
(diagram: images A, B, C, D distributed across Docker servers)
Remember
Conventional Deployment & Operation
A New Definition of Deployment and Server Operations
Development --push--> CI Server --> Docker Registry --> Docker Servers
(diagram: app A is pushed, built on CI, stored in the registry, and run on servers alongside A', B, C, C', D, E)
Application Update
Development pushes A4 --> the CI server builds it --> the Docker Registry now holds A4, A3, A2, A1
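Each update becomes a new image tag in the registry while older tags remain. A sketch; the registry host and tag names are illustrative:

```shell
# CI builds the new version and pushes it; A1..A3 stay in the registry.
docker build -t registry.example.com/app-a:4 .
docker push registry.example.com/app-a:4
# Docker servers pull and run the new tag.
docker pull registry.example.com/app-a:4
```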
Application Rollback
The Docker Registry keeps every version (A1 ... A7), so rolling back is simply deploying an older image
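Rollback, then, is just running a tag the registry already holds, with no rebuild. A sketch; names are illustrative:

```shell
# Pull the previous image version and run it.
docker pull registry.example.com/app-a:3
docker run -d registry.example.com/app-a:3
```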
A clusterable OS dedicated to Docker
A logistics system?
If Docker turns every application into a container, the server exists only to run Docker --> the CoreOS concept
... easier to manage is to use OpenShift 3.0
as the infrastructure of the CDN.
Feature Toggle - LaunchDarkly
Category | Mesos/Marathon | Kubernetes
--- | --- | ---
Types of Workloads | Support for diverse kinds of workloads such as big data, cloud-native apps, etc. | Cloud-native applications
Application Definition | "Application Group" models dependencies as a tree of groups. Components are started in dependency order. Colocation of a group's containers on the same Mesos slave is not supported. A Pod abstraction is on the roadmap, but not yet available. | A combination of Pods, Replication Controllers, Replica Sets, Services and Deployments. A pod is a group of co-located containers, the atomic unit of deployment. Pods do not express dependencies among the individual containers within them. Containers in a single Pod are guaranteed to run on a single Kubernetes node.
Application Scalability Constructs | Possible to scale an individual group; its dependents in the tree are also scaled. | Each application tier is defined as a pod and can be scaled when managed by a Deployment or Replication Controller. Scaling can be manual or automated.
High Availability | Applications are distributed among slave nodes. | Pods are distributed among worker nodes. Services are also HA, detecting unhealthy pods and removing them.
Load Balancing | An application can be reached via Mesos-DNS, which can act as a rudimentary load balancer. | Pods are exposed through a Service, which can act as a load balancer.
Auto-Scaling | Load-sensitive autoscaling is available as a proof-of-concept application. Rate-sensitive autoscaling is available for Mesosphere's enterprise customers. Rich metric-based scaling policy. | Auto-scaling using a simple number-of-pods target is defined declaratively via the API exposed by Replication Controllers. A CPU-utilization-per-pod target is available as of v1.1 in the Scale subresource. Other targets are on the roadmap.
Rolling Upgrades and Rollback | The "rolling restarts" model uses an application-defined minimumHealthCapacity (ratio of nodes serving the new/old application). "Health check" hooks consume a "health" API provided by the application itself. | The "Deployment" model supports strategies; one similar to Mesos's is planned for the future. Health checks test for liveness, i.e. whether the app is responsive.
Logging and Monitoring | Logging: can use ELK. Monitoring: can use external tools. | Health checks of two kinds: liveness (is the app responsive) and readiness (the app is responsive but busy preparing and not yet able to serve). Logging: container logs shipped to an Elasticsearch/Kibana (ELK) service deployed in the cluster. Resource-usage monitoring: a Heapster/Grafana/InfluxDB service deployed in the cluster. Logging/monitoring add-ons are part of the official project. Sysdig Cloud integration.
Storage | A Marathon container can use persistent volumes, but such volumes are local to the node where they are created, so the container must always run on that node. An experimental Flocker integration supports persistent volumes that are not local to one node. | Two storage APIs: the first provides abstractions for individual storage backends (e.g. NFS, AWS EBS, Ceph, Flocker); the second provides an abstraction for a storage-resource request (e.g. 8 GB), which can be fulfilled by different backends. Modifying the storage resource used by the Docker daemon on a cluster node requires temporarily removing the node from the cluster.
Networking | Marathon's Docker integration facilitates mapping container ports to host ports, which are a limited resource. A container does not get its own IP by default, but it can if Mesos is integrated with Calico. Even so, multiple containers cannot share a network namespace (i.e. cannot talk to one another on localhost). | The networking model lets any pod communicate with other pods and with any service. The model requires two networks (one for pods, one for services); neither is assumed to be reachable from outside the cluster. The most common way to meet this requirement is to deploy an overlay network on the cluster nodes.
Service Discovery | Containers can discover services using DNS or a reverse proxy. | Pods discover services using intra-cluster DNS.
Performance and Scalability | Mesos has been simulated to scale to 50,000 nodes, although it is not clear how far scale has been pushed in production environments. Mesos can run LXC or Docker containers directly from the Marathon framework, or it can fire up Kubernetes or Docker Swarm and let them do it. | With the release of 1.2, Kubernetes supports 1,000-node clusters. Scalability is benchmarked against the following SLOs: 99% of all API calls return in less than 1 s; 99% of pods and their containers (with pre-pulled images) start within 5 s.
K8s-Mesos architecture
Suddenly: CDN topology
Scattered CDN
+ physical proximity minimizes latency
+ effective in low-connectivity regions
+ smaller POPs are easier to deploy
- higher maintenance costs
- RTT prolonged by multiple connection points
- cumbersome to deploy new configurations
Consolidated CDN
+ high-capacity servers are better for DDoS mitigation
+ enables agile configuration deployment
+ lower maintenance costs
- less effective in low-connectivity regions
- high-capacity POPs are harder to deploy
Akamai, CDNW
CloudFlare, LimeLight, and
AWS, GCP, Azure, AliYun
CDN strategy?
Scattered CDN vs. Consolidated CDN?
A structure in which CS and NGP inevitably keep being operated separately.
Integration is hard; building anew is faster --> PJ WE?
Let's at least fix parts of it; let's move to MSA --> but is integration still out of reach?
We need a bigger picture (architecture). But who will draw it?
What should be containerized? Is Docker just a technology? How do we adapt it to our needs?
Aggressively apply SD (Software-Defined) approaches to reduce CAPEX & OPEX?