April 2018
clouds, containers, functions, applications, and their management
The first few services are relatively easy
Democratization of language and technology choice
Faster delivery, service teams running independently, rolling updates
The next 10 or so may introduce pain
Language and framework-specific libraries
Distributed environments, ephemeral infrastructure, outmoded tooling
Cluster Management
Host Discovery
Host Health Monitoring
Scheduling
Orchestrator Updates and Host Maintenance
Service Discovery
Networking and Load Balancing
Stateful Services
Multi-Tenant, Multi-Region
Application Health and Performance Monitoring
Application Deployments
Application Secrets
• Observability
• Logging
• Metrics
• Tracing
• Traffic Control
• Resiliency
• Efficiency
• Security
• Policy
a dedicated layer for managing service-to-service communication
so, a microservices platform?
obviously.
Orchestrators don't bring all that you need
and neither do service meshes,
but they do get you closer.
Missing: application lifecycle management, but not by much
partially.
Missing: distributed debugging; provides only nascent visibility (topology)
where Dev and Ops meet
Problem: too much infrastructure code in services
to avoid...
Bloated service code
Duplicating work to make services production-ready
load balancing, auto scaling, rate limiting, traffic routing, ...
Inconsistency across services
retries, TLS, failover, deadlines, cancellation, etc., for each language and framework
siloed implementations lead to fragmented, non-uniform policy application and difficult debugging
Diffusing responsibility of service management
Can modernize your IT inventory without:
Rewriting your applications
Adopting microservices (regular services are fine)
Adopting new frameworks
Moving to the cloud
Address the long tail of IT services
Get there for free
An open platform to connect, manage, and secure microservices
Observability
Resiliency
Traffic Control
Security
Policy Enforcement
@IstioMesh
Observability is what gets people hooked on service metrics
Metrics without instrumenting apps
Consistent metrics across fleet
Trace flow of requests across services
Portable across metric backend providers
You get a metric! You get a metric! Everyone gets a metric!
control over chaos
Timeouts and Retries with timeout budget
Circuit breakers and Health checks
Control connection pool size and request load
Content-based traffic steering
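A sketch of how these controls are expressed in the route-rule API this deck uses (the config.istio.io/v1alpha2 schema from Istio 0.5-0.7); rule names and the specific values are illustrative:

# Illustrative: an overall timeout plus a retry budget for calls to "reviews"
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-timeout          # hypothetical rule name
spec:
  destination:
    name: reviews
  route:
  - labels:
      version: v1
  httpReqTimeout:
    simpleTimeout:
      timeout: 0.6s              # 600ms overall budget
  httpReqRetries:
    simpleRetry:
      attempts: 3
      perTryTimeout: 0.2s        # 3 x 200ms stays inside the 600ms budget
---
# Illustrative: circuit breaking and connection-pool limits on the same service
apiVersion: config.istio.io/v1alpha2
kind: DestinationPolicy
metadata:
  name: reviews-circuit-breaker  # hypothetical policy name
spec:
  destination:
    name: reviews
  circuitBreaker:
    simpleCb:
      maxConnections: 100        # cap the connection pool
      httpMaxPendingRequests: 10 # cap queued requests
      httpConsecutiveErrors: 5   # eject a host after 5 consecutive errors
      sleepWindow: 30s           # how long an ejected host sits out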
Control Plane
Data Plane
Touches every packet/request in the system. Responsible for service discovery, health checking, routing, load balancing, authentication, authorization and observability.
Provides policy and configuration for services in the mesh.
Takes a set of isolated stateless sidecar proxies and turns them into a service mesh.
Does not touch any packets/requests in the system.
Pilot
Citadel
Mixer
Control Plane
Data Plane
istio-system namespace
policy check
Foo Pod
Proxy Sidecar
Service Foo
tls certs
discovery & config
Foo Container
Bar Pod
Proxy Sidecar
Service Bar
Bar Container
Out-of-band telemetry propagation
telemetry
reports
Control flow during request processing
application traffic
Application traffic
application namespace
telemetry reports
provides service discovery to sidecars
manages sidecar configuration
Pilot
Auth
Control Plane
the head of the ship
Mixer
istio-system namespace
system of record for service mesh
provides abstraction from underlying platforms
Pilot
Auth
Mixer
Control Plane
istio-system namespace
an attribute-processing and routing machine
operator-focused
Mixer
Control Plane
Data Plane
istio-system namespace
Foo Pod
Proxy Sidecar
Service Foo
Foo Container
Out-of-band telemetry propagation
Control flow during request processing
application traffic
application traffic
application namespace
telemetry reports
an attribute-processing engine
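A sketch of what attribute processing looks like in config (the v1alpha2 Mixer schema of this era; the instance name is illustrative): Mixer maps request attributes into typed instances that adapters consume.

# Illustrative Mixer "metric" instance: turns request attributes into a counter value
apiVersion: config.istio.io/v1alpha2
kind: metric
metadata:
  name: requestcount                       # hypothetical instance name
spec:
  value: "1"                               # each request counts once
  dimensions:
    source: source.service | "unknown"     # attribute, with a default
    destination: destination.service | "unknown"
    response_code: response.code | 200
  monitored_resource_type: '"UNSPECIFIED"'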
Pilot
Auth
Control Plane
security at scale
Mixer
istio-system namespace
security by default
Orchestrates key & certificate generation, deployment, rotation, and revocation
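In this release, mesh-wide mutual TLS is toggled in the mesh config (a sketch; in Istio 0.5-0.7 this lives in the istio ConfigMap):

# Illustrative excerpt from the "istio" ConfigMap's mesh configuration.
# authPolicy: MUTUAL_TLS turns on mTLS for sidecar-to-sidecar traffic;
# the auth component issues and rotates the workload certificates.
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |
    authPolicy: MUTUAL_TLS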
A C++ based L4/L7 proxy
Low memory footprint
In production at Lyft
Capabilities: HTTP/2 and gRPC support, L7 routing, load balancing, health checking, circuit breaking, rich observability
the included battery
Data Plane
Pod
Proxy Sidecar
App Container
Envoy, Linkerd, Nginx, Conduit
Choose based on your operational expertise and need for a battle-tested proxy; you may be looking for caching, WAF, or other functionality available in NGINX Plus.
If you're already running Linkerd and want to start adopting Istio control APIs like CheckRequest.
Conduit isn't currently designed as a general-purpose proxy; it's lightweight and focused, with extensibility via gRPC plugins.
Currently
Roadmap
See the Istio docs for sidecar-related limitations and the supported traffic management rules.
Considered beta quality
Soliciting feedback and participation from community
Architecture
agent
Pilot
Auth
Mixer
Control Plane
config file
Data Plane
Mixer Module
"istio-proxy" container
route rules
tcp server
istio-system namespace
check
report
listener
dest
module
tcp
http
Out-of-band telemetry propagation
Control flow during request processing
application traffic
application traffic
http servers
Recording - O'Reilly: Istio & nginMesh
Timeouts and Retries
Call chain: Web → Service Foo → Service Bar → Database
Example chain 1, per hop: Timeout = 600ms / 300ms / 900ms, Retries = 3 at every hop
Example chain 2, per hop: Timeout = 500ms / 300ms / 900ms, Retries = 3 at every hop
Independently configured timeouts don't compose: an inner 900ms timeout can never be honored inside an outer 600ms budget, and per-hop retries multiply load.
Deadlines
Call chain: Web → Service Foo → Service Bar → Database
Web starts with Deadline = 600ms
Elapsed = 104ms → Service Foo receives Deadline = 496ms
Elapsed = 68ms → Service Bar receives Deadline = 428ms
Elapsed = 248ms → Database receives Deadline = 180ms
Each hop subtracts its elapsed time and propagates the remaining budget downstream.
AppOptics
Adapter types: logs, metrics, access control, quota
Papertrail
Prometheus
Grafana
Fluentd
Statsd
Stackdriver
Open Policy Agent
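Wiring in an adapter takes a handler plus a rule that routes instances to it; a sketch for Prometheus (v1alpha2 Mixer schema; names are illustrative):

# Illustrative handler: the Prometheus adapter exports the requestcount
# instance (sketched earlier) as a counter
apiVersion: config.istio.io/v1alpha2
kind: prometheus
metadata:
  name: handler                  # hypothetical handler name
spec:
  metrics:
  - name: request_count
    instance_name: requestcount.metric.istio-system
    kind: COUNTER
    label_names:
    - source
    - destination
    - response_code
---
# Illustrative rule: send HTTP traffic's instances to that handler
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: prom-http                # hypothetical rule name
spec:
  match: context.protocol == "http"   # an attribute expression
  actions:
  - handler: handler.prometheus
    instances:
    - requestcount.metric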
Let's look at Istio's canonical sample app.
Reviews v1
Reviews Pod
Reviews v2
Reviews v3
Product Pod
Details Container
Details Pod
Ratings Container
Ratings Pod
Product Container
Reviews Service
Reviews v1
Reviews Pod
Reviews v2
Reviews v3
Product Pod
Details Container
Details Pod
Ratings Container
Ratings Pod
Product Container
Nginx sidecar
Nginx sidecar
Nginx sidecar
Nginx sidecar
Nginx sidecar
Reviews Service
Nginx sidecar
Envoy ingress
kubectl version
kubectl get ns
kubectl apply -f ../istio-appoptics-0.5.1-solarwinds-v01.yaml
kubectl apply -f ./install/kubernetes/istio-sidecar-injector.yaml
check environment; deploy Istio
kubectl get ns
watch kubectl get po,svc -n istio-system
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
confirm Istio deployment; deploy sample app
watch kubectl get po,svc
kubectl describe po/<a pod> | more
echo "http://$(kubectl get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}}{{.address}}{{end}}{{end}}{{end}}'):$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')/productpage"
confirm sample app
running Istio
echo "http://$(kubectl get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}}{{.address}}{{end}}{{end}}{{end}}'):$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')/productpage"
See "reviews" v1, v2 and v3
# From Docker's perspective
docker ps | grep istio-proxy
# From Kubernetes' perspective
kubectl get po
kubectl describe po <a pod>
Verify mesh deployment
# exec into 'istio-proxy'
kubectl exec -it <a pod> -c istio-proxy /bin/bash
Connect to proxy sidecar
# Generate load for Mixer telemetry adapter
docker run --rm istio/fortio load -c 1 -t 10m \
`echo "http://$(kubectl get nodes -o template --template='{{range.items}}{{range.status.addresses}}{{if eq .type "InternalIP"}}{{.address}}{{end}}{{end}}{{end}}'):$(kubectl get svc istio-ingress -n istio-system -o jsonpath='{.spec.ports[0].nodePort}')/productpage"`
Verify mesh configuration
running Istio
# Deploy new configuration to Nginx
istioctl create -f ./samples/bookinfo/kube/route-rule-all-v1.yaml
istioctl delete -f ./samples/bookinfo/kube/route-rule-all-v1.yaml
# A/B testing for user "lee"
kubectl apply -f ./samples/bookinfo/kube/route-rule-reviews-test-v2.yaml
Apply traffic routing policy
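The rule being applied looks roughly like this (a sketch modeled on route-rule-reviews-test-v2.yaml, with the cookie match adapted to user "lee"):

# Illustrative: route logged-in user "lee" to reviews v2;
# everyone else keeps whatever lower-precedence rule is in place
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-test-v2
spec:
  destination:
    name: reviews
  precedence: 2
  match:
    request:
      headers:
        cookie:
          regex: "^(.*?;)?(user=lee)(;.*)?$"
  route:
  - labels:
      version: v2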
See Mixer telemetry
Try it out - github.com/solarwinds/istio-adapter
clouds, containers, functions, applications, and their management