Cloud Native Security and Kubernetes
Kubernetes is based on a cloud-native architecture, and draws on advice from the CNCF about good practice for cloud native information security.
Read on for an overview of how Kubernetes is designed to help you deploy a secure cloud native platform.
Cloud native information security
The CNCF white paper on cloud native security defines security controls and practices that are appropriate to different lifecycle phases.
Develop lifecycle phase
- Ensure the integrity of development environments.
- Design applications following good practice for information security, appropriate for your context.
- Consider end user security as part of solution design.
To achieve this, you can:
- Adopt an architecture, such as zero trust, that minimizes attack surfaces, even for internal threats.
- Define a code review process that considers security concerns.
- Build a threat model of your system or application that identifies trust boundaries. Use that model to identify risks and to help find ways to treat those risks.
- Incorporate advanced security automation, such as fuzzing and security chaos engineering, where it's justified.
Distribute lifecycle phase
- Ensure the security of the supply chain for container images you execute.
- Ensure the security of the supply chain for the cluster and other components that execute your application. An example of another component might be an external database that your cloud-native application uses for persistence.
To achieve this, you can:
- Scan container images and other artifacts for known vulnerabilities.
- Ensure that software distribution uses encryption in transit, with a chain of trust for the software source.
- Adopt and follow processes to update dependencies when updates are available, especially in response to security announcements.
- Use validation mechanisms such as digital certificates for supply chain assurance.
- Subscribe to feeds and other mechanisms to alert you to security risks.
- Restrict access to artifacts. Place container images in a private registry that only allows authorized clients to pull images (see the example after this list).
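For example, you can require the kubelet to present credentials when pulling from a private registry by referencing an image pull Secret from the Pod. The sketch below is minimal and uses hypothetical names: registry.example.com for the registry, private-registry-credentials for the Secret, and my-namespace for the namespace.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: private-registry-credentials   # hypothetical name
  namespace: my-namespace              # hypothetical namespace
type: kubernetes.io/dockerconfigjson
data:
  # Base64-encoded Docker config JSON with credentials for your registry (placeholder shown).
  .dockerconfigjson: <BASE64_ENCODED_DOCKER_CONFIG_JSON>
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-namespace
spec:
  imagePullSecrets:
    - name: private-registry-credentials   # the kubelet uses this Secret when pulling the image
  containers:
    - name: app
      image: registry.example.com/example-app:1.0   # image hosted in the private registry
```

You can also list the Secret in a ServiceAccount's imagePullSecrets field so that every Pod using that ServiceAccount pulls with those credentials.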
Deploy lifecycle phase
Ensure appropriate restrictions on what can be deployed, who can deploy it, and where it can be deployed. You can enforce measures from the distribute phase, such as verifying the cryptographic identity of container image artifacts.
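Verifying image signatures usually relies on additional tooling, such as an admission controller that understands your signing scheme. As a simpler, complementary deploy-time restriction, a ValidatingAdmissionPolicy (admissionregistration.k8s.io/v1) can limit Pods to images from a registry you trust. This is a minimal sketch; registry.example.com is a hypothetical trusted registry, and a production policy would also cover init and ephemeral containers.

```yaml
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-trusted-registry
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["pods"]
  validations:
    # Reject Pods whose containers reference images outside the trusted registry.
    - expression: "object.spec.containers.all(c, c.image.startsWith('registry.example.com/'))"
      message: "container images must come from registry.example.com"
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-trusted-registry-binding
spec:
  policyName: require-trusted-registry
  validationActions: ["Deny"]
```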
When you deploy Kubernetes, you also set the foundation for your applications' runtime environment: a Kubernetes cluster (or multiple clusters). That IT infrastructure must provide the security guarantees that higher layers expect.
Runtime lifecycle phase
The Runtime phase comprises three critical areas: compute, access, and storage.
Runtime protection: access
The Kubernetes API is what makes your cluster work. Protecting this API is key to providing effective cluster security.
Other pages in the Kubernetes documentation have more detail about how to set up specific aspects of access control. The security checklist has a set of suggested basic checks for your cluster.
Beyond that, securing your cluster means implementing effective authentication and authorization for API access. Use ServiceAccounts to provide and manage security identities for workloads and cluster components.
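For example, you can give a workload its own security identity by creating a dedicated ServiceAccount and referencing it from the Pod. The names below (app-identity, my-namespace, registry.example.com) are hypothetical; a minimal sketch:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-identity        # hypothetical name
  namespace: my-namespace   # hypothetical namespace
# Avoid mounting an API token unless the workload actually calls the Kubernetes API.
automountServiceAccountToken: false
---
apiVersion: v1
kind: Pod
metadata:
  name: example-app
  namespace: my-namespace
spec:
  serviceAccountName: app-identity   # the Pod runs with this security identity
  containers:
    - name: app
      image: registry.example.com/example-app:1.0
```

If the workload does need to call the Kubernetes API, grant it only the permissions it requires through RBAC bound to this ServiceAccount, and allow token mounting for that Pod only.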
Kubernetes uses TLS to protect API traffic; make sure to deploy the cluster using TLS (including for traffic between nodes and the control plane), and protect the encryption keys. If you use Kubernetes' own API for CertificateSigningRequests, pay special attention to restricting misuse there.
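One way to restrict misuse is to grant CSR approval rights narrowly. The ClusterRole below is a minimal sketch with a hypothetical signer name, example.com/my-signer; it only permits approving requests addressed to that signer.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csr-approver-example
rules:
  # Allow reading CertificateSigningRequests.
  - apiGroups: ["certificates.k8s.io"]
    resources: ["certificatesigningrequests"]
    verbs: ["get", "list", "watch"]
  # Allow recording approval or denial decisions.
  - apiGroups: ["certificates.k8s.io"]
    resources: ["certificatesigningrequests/approval"]
    verbs: ["update"]
  # Only for requests that target this specific signer.
  - apiGroups: ["certificates.k8s.io"]
    resources: ["signers"]
    resourceNames: ["example.com/my-signer"]   # hypothetical signer name
    verbs: ["approve"]
```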
Runtime protection: compute
Containers provide two things: isolation between different applications, and a mechanism to combine those isolated applications to run on the same host computer. Those two aspects, isolation and aggregation, mean that runtime security involves trade-offs and finding an appropriate balance.
Kubernetes relies on a container runtime to actually set up and run containers. The Kubernetes project does not recommend a specific container runtime and you should make sure that the runtime(s) that you choose meet your information security needs.
To protect your compute at runtime, you can:
- Enforce Pod security standards for applications, to help ensure they run with only the necessary privileges (sketched after this list).
- Run a specialized operating system on your nodes that is designed specifically for running containerized workloads. This is typically based on a read-only operating system (immutable image) that provides only the services essential for running containers. Container-specific operating systems help to isolate system components and present a reduced attack surface in case of a container escape.
- Define ResourceQuotas to fairly allocate shared resources, and use mechanisms such as LimitRanges to ensure that Pods specify their resource requirements (sketched after this list).
- Partition workloads across different nodes. Use node isolation mechanisms, either from Kubernetes itself or from the ecosystem, to ensure that Pods with different trust contexts run on separate sets of nodes (a sketch after this list combines this with a seccomp profile).
- Use a container runtime that provides security restrictions.
- On Linux nodes, use a Linux security module such as AppArmor (beta) or seccomp.
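For the Pod security standards item above, the built-in Pod Security Admission controller can enforce a standard per namespace through labels. A minimal sketch, assuming a hypothetical namespace named team-apps:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-apps   # hypothetical namespace
  labels:
    # Reject Pods that do not meet the "restricted" Pod Security Standard.
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    # Also surface warnings and audit annotations for the same standard.
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```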
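For the ResourceQuota and LimitRange item, the sketch below caps the aggregate resources a namespace may request and supplies defaults for containers that omit their own requests and limits. The namespace name and the quantities are illustrative only.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-apps
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults
  namespace: team-apps
spec:
  limits:
    - type: Container
      defaultRequest:    # applied when a container omits resource requests
        cpu: 100m
        memory: 128Mi
      default:           # applied when a container omits resource limits
        cpu: 500m
        memory: 512Mi
```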
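For the node partitioning and Linux security module items, a Pod can be steered onto a dedicated set of nodes and run under the container runtime's default seccomp profile. The workload-class node label is a hypothetical convention; in practice you would pair it with taints and tolerations or another node isolation mechanism.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
  namespace: team-apps
spec:
  nodeSelector:
    workload-class: restricted   # hypothetical label applied to a dedicated node pool
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault       # use the container runtime's default seccomp profile
  containers:
    - name: app
      image: registry.example.com/example-app:1.0
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```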
Runtime protection: storage
To protect storage for your cluster and the applications that run there, you can:
- Integrate your cluster with an external storage plugin that provides encryption at rest for volumes.
- Enable encryption at rest for API objects (see the example after this list).
- Protect data durability using backups. Verify that you can restore them whenever you need to.
- Authenticate connections between cluster nodes and any network storage they rely upon.
- Implement data encryption within your own application.
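To enable encryption at rest for API objects (as listed above), the kube-apiserver reads an EncryptionConfiguration file passed through its --encryption-provider-config flag. The sketch below encrypts Secret objects with AES-CBC; the key name is arbitrary and the key material shown is a placeholder for a randomly generated, base64-encoded 32-byte key that you must supply.

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      # Encrypt new writes with this key.
      - aescbc:
          keys:
            - name: key1
              secret: <BASE64_ENCODED_32_BYTE_KEY>   # placeholder; generate your own key
      # Allow reading objects that were stored before encryption was enabled.
      - identity: {}
```

Using a kms provider instead of aescbc lets an external key management service, potentially backed by a hardware security module, hold the key material outside the cluster.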
For encryption keys, generating them within specialized hardware provides the best protection against disclosure risks. A hardware security module lets you perform cryptographic operations without allowing the key material to be copied elsewhere.
Networking and security
You should also consider network security measures, such as NetworkPolicy or a service mesh. Some network plugins for Kubernetes provide encryption for your cluster network, using technologies such as a virtual private network (VPN) overlay. By design, Kubernetes lets you use your own networking plugin for your cluster (if you use managed Kubernetes, the person or organization managing your cluster may have chosen a network plugin for you).
The network plugin you choose and the way you integrate it can have a strong impact on the security of information in transit.
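As a starting point for NetworkPolicy, a common pattern is to deny all ingress traffic in a namespace by default and then allow specific flows with additional policies. A minimal sketch, assuming a hypothetical namespace team-apps and a network plugin that enforces NetworkPolicy:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-apps
spec:
  podSelector: {}    # selects every Pod in the namespace
  policyTypes:
    - Ingress        # no ingress rules are listed, so all inbound traffic is denied
```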
Observability and runtime security
Kubernetes lets you extend your cluster with extra tooling. You can set up third-party solutions to help you monitor or troubleshoot your applications and the clusters they are running in. You also get some basic observability features built into Kubernetes itself. Your code running in containers can generate logs, publish metrics, or provide other observability data; at deploy time, make sure your cluster provides an appropriate level of protection for that data.
If you set up a metrics dashboard or something similar, review the chain of components that populate data into that dashboard, as well as the dashboard itself. Make sure that the whole chain is designed with enough resilience and enough integrity protection that you can rely on it even during an incident where your cluster might be degraded.
Where appropriate, deploy security measures below the level of Kubernetes itself, such as cryptographically measured boot, or authenticated distribution of time (which helps ensure the fidelity of logs and audit records).
For a high assurance environment, deploy cryptographic protections to ensure that logs are both tamper-proof and confidential.
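If Kubernetes API audit records form part of that evidence, the kube-apiserver's audit policy (passed via --audit-policy-file) controls what is recorded. The rules below are illustrative only and would need tuning for your environment.

```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Record metadata only for Secrets, so sensitive payloads never reach the audit log.
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets"]
  # Record full request and response bodies for RBAC changes.
  - level: RequestResponse
    resources:
      - group: "rbac.authorization.k8s.io"
  # Record metadata for everything else.
  - level: Metadata
```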
What's next
Cloud native security
- CNCF white paper on cloud native security
- CNCF white paper on good practices for securing a software supply chain
- Fixing the Kubernetes clusterf**k: Understanding security from the kernel up (FOSDEM 2020)
- Kubernetes Security Best Practices (Kubernetes Forum Seoul 2019)
- Towards Measured Boot Out of the Box (Linux Security Summit 2016)