The 4 C's of Cloud Native security represent defense in depth, where each "C" stands for a layer of isolation, from the outside in.
Layer | Description |
---|---|
Cloud | Security of the entire infrastructure hosting the servers (public, private, hybrid, etc.) |
Cluster | Security of the Kubernetes cluster itself |
Container | Container security, e.g. not running containers in privileged mode |
Code | Binaries, source code, code configuration, missing TLS, secrets/variables hard-coded in the application, etc. |
The ImagePolicyWebhook admission plugin is configured via an AdmissionConfiguration file:

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: ImagePolicyWebhook
    configuration:
      imagePolicy:
        kubeConfigFile: <path-to-kubeconfig-file>
        allowTTL: 50
        denyTTL: 50
        retryBackoff: 500
        defaultAllow: true
```
[!NOTE] With `defaultAllow: true`, all requests will be allowed if the admission webhook server is not reachable.
If Kubernetes components are deployed as system daemons, edit the service configuration file with `systemctl edit service_name`. If Kubernetes has been deployed using kubeadm, simply edit the static pod manifest (`vim /etc/kubernetes/manifests/kube-apiserver.yaml`), add `ImagePolicyWebhook` to the `--enable-admission-plugins=` flag, and pass the admission control config file via `--admission-control-config-file=`.
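For example, with kubeadm the relevant part of the kube-apiserver manifest might look like this (the config file path below is illustrative):

```yaml
# excerpt from /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        - --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
        - --admission-control-config-file=/etc/kubernetes/admission/admission-config.yaml
```

Note that the config file (and any kubeconfig it references) must also be mounted into the kube-apiserver pod, e.g. via a hostPath volume.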
```bash
# 1. exec/ssh into the container
# 2. list processes to find the PID of interest
ps -ef
# 3. grep for the seccomp status of that process
grep -i seccomp /proc/{PID}/status
```

If the result is `Seccomp: 2`, seccomp filtering (mode 2) is enabled for the container.
Mode | Description |
---|---|
Mode 0 | Disabled |
Mode 1 | Strict - blocks all syscalls except read, write, _exit and sigreturn |
Mode 2 | Filtered - syscalls are filtered selectively, based on a profile |
There are 2 profile types: whitelist profiles (the default action denies syscalls, and only explicitly listed ones are allowed) and blacklist profiles (the default action allows syscalls, and explicitly listed ones are denied), as sketched below.
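A minimal whitelist-style profile might look like this (the syscall selection here is purely illustrative):

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "architectures": ["SCMP_ARCH_X86_64"],
  "syscalls": [
    {
      "names": ["read", "write", "exit_group", "rt_sigreturn"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

A blacklist profile inverts this: `defaultAction` is `SCMP_ACT_ALLOW` and the listed syscalls get `SCMP_ACT_ERRNO`.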
By default, Docker enables its seccomp filter (mode 2). The default profile blocks around 60 of the roughly 300 available syscalls.
[!TIP] How to check which syscalls are blocked? Run the amicontained tool as a container to see the syscalls blocked by the default Docker profile:

```bash
docker run r.j3ss.co/amicontained amicontained
```
Run the amicontained tool as a pod to see the syscalls blocked by the Kubernetes default profile:

```bash
k run amicontained --image=r.j3ss.co/amicontained -- amicontained
```

Then check the pod logs:

```bash
k logs amicontained
```
To enable the container runtime's default seccomp profile, create a pod whose spec sets `seccompProfile.type: RuntimeDefault` under the pod-level `securityContext`, as sketched below. For a custom profile, use type `Localhost` instead, as in the `audit-pod` example further down.
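A minimal sketch (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: default-seccomp-pod
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: nginx
```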
[!ATTENTION] The kubelet's seccomp root directory is `/var/lib/kubelet/seccomp`. A custom seccomp profile path (`localhostProfile`) must be relative to this directory.
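With that layout, the `profiles/audit.json` referenced in the pod below lives at `/var/lib/kubelet/seccomp/profiles/audit.json`. An audit profile that only logs syscalls can be as simple as:

```json
{
  "defaultAction": "SCMP_ACT_LOG"
}
```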
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: audit-pod
  labels:
    app: audit-pod
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
    - name: test-container
      image: hashicorp/http-echo:0.2.3
      args:
        - "-text=just made some syscalls!"
      securityContext:
        allowPrivilegeEscalation: false
```
[!NOTE] To apply a new seccomp profile, the pod must be deleted and re-created, e.g. with the `k replace --force -f` command.
By default, seccomp logs are written to `/var/log/syslog`. You can easily tail the logs for a specific pod with `tail -f /var/log/syslog | grep {pod_name}`.
AppArmor profiles are stored in `/etc/apparmor.d/`. An example profile that denies all file writes:
```
#include <tunables/global>

profile k8s-apparmor-example-deny-write flags=(attach_disconnected) {
  #include <abstractions/base>

  file,

  # Deny all file writes.
  deny /** w,
}
```
To check AppArmor status on the node:

```bash
# is the AppArmor service running?
systemctl status apparmor
# is the AppArmor kernel module enabled?
cat /sys/module/apparmor/parameters/enabled
# which profiles are loaded into the kernel?
cat /sys/kernel/security/apparmor/profiles
# summary of loaded profiles and the mode they run in
aa-status
```

Profiles run in one of the following modes:

Mode | Description |
---|---|
enforce | enforce and monitor any app that fits the profile |
complain | do not enforce; only log violations as events |
unconfined | any task allowed, no logging |
To load a profile into the kernel:

```bash
apparmor_parser -q /etc/apparmor.d/{profile_name}
```
[!TIP] To secure a pod, add an annotation in this format: `container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/<profile_name>` (or `runtime/default`, or `unconfined`)
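For example, applying the deny-write profile above to a container named `app` (pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-pod
  annotations:
    container.apparmor.security.beta.kubernetes.io/app: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
    - name: app
      image: nginx
```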
AppArmor can be used, for example, to restrict access to a folder inside a pod/container.
Linux capabilities are added and removed per container, under the container's `securityContext` (see the sketch below).
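A minimal sketch (container name, image and the chosen capabilities are illustrative):

```yaml
# container-level securityContext: drop everything, add back only what is needed
containers:
  - name: app
    image: nginx
    securityContext:
      capabilities:
        drop: ["ALL"]
        add: ["NET_BIND_SERVICE"]
```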
[!TIP] To check what capabilities are needed for any given command, run `getcap /<path>/<command>`; to check the capabilities used by a running process, run `getpcaps PID`
[!NOTE] gVisor (the runsc runtime) must be installed on the node before it can be used.
[!NOTE] Kata Containers requires nested virtualization (when workloads run on VMs) and can degrade performance. Some cloud providers do not support nested virtualization.
```bash
# run a container with the Kata Containers runtime
docker run --runtime kata -d nginx
# run a container with the gVisor (runsc) runtime
docker run --runtime runsc -d nginx
```
In Kubernetes, set `runtimeClassName` at the pod spec level to use the runtime, as sketched below.
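A minimal sketch of wiring this up (the RuntimeClass name is arbitrary; `handler` must match the runtime installed on the node, here `runsc` for gVisor):

```yaml
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-pod
spec:
  runtimeClassName: gvisor
  containers:
    - name: app
      image: nginx
```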