Kubernetes Introduction

For applications to take advantage of the potentially limitless scalability of cloud computing, they need to be designed in a special, cloud-native way. The open-source Kubernetes software is the de facto standard operating environment for anyone looking for a portable platform to deploy their workloads.

Kubernetes has several nested layers, each of which provides some level of isolation and security. Building on the container, Kubernetes layers provide progressively stronger isolation: you can start small and upgrade as needed. Starting from the smallest unit and moving outwards, here are the layers of a Kubernetes environment:

Container

A container provides basic management of resources, but does not isolate identity or the network. Container performance can suffer from a noisy neighbor on the same node for resources that are not isolated by cgroups. A container provides some security isolation, but only a single layer, rather than the two layers of isolation we would ideally like.
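As a concrete illustration, here is a minimal sketch of how per-container resource requests and limits might be declared; Kubernetes enforces these through cgroups on the node. The pod name, container name, and image below are placeholders.

```yaml
# Minimal sketch: per-container CPU and memory requests/limits,
# which Kubernetes enforces via cgroups on the node.
apiVersion: v1
kind: Pod
metadata:
  name: cpu-bound-app                      # placeholder name
spec:
  containers:
  - name: app                              # placeholder container name
    image: registry.example.com/app:1.0    # placeholder image
    resources:
      requests:
        cpu: "250m"
        memory: "256Mi"
      limits:
        cpu: "500m"
        memory: "512Mi"
```

Resources that are not covered by such limits are exactly where noisy-neighbor effects can still appear.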

Pod

A pod is a collection of one or more containers. A pod isolates a few more resources than a container, including the network. It does so with micro-segmentation using Kubernetes Network Policy, which dictates which pods can speak to one another. At the moment, a pod does not have a unique identity, but the Kubernetes community has made proposals to provide this. A pod still suffers from noisy neighbors on the same host.
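A minimal sketch of a two-container pod follows; the names and images are placeholders. Both containers share the pod's network namespace, so they can reach each other over localhost.

```yaml
# Sketch of a pod with two containers sharing one network namespace.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar                      # placeholder name
spec:
  containers:
  - name: web
    image: nginx:1.25                         # assumed image tag
    ports:
    - containerPort: 80
  - name: log-shipper                         # hypothetical sidecar
    image: registry.example.com/shipper:1.0   # placeholder image
```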

Node

A node is a machine, either physical or virtual. A node serves as a host to a collection of pods and has a superset of the privileges of those pods. A node leverages a hypervisor or hardware for isolation, including isolation of its resources. Modern Kubernetes nodes run with distinct identities and are authorized to access only the resources required by the pods scheduled to them. Attacks are still possible at this level, such as convincing the scheduler to assign sensitive workloads to a particular node. You can use firewall rules to restrict network traffic to the node.
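As one hedged example of controlling which workloads land on which machines, a node can be tainted so that only pods which explicitly tolerate the taint are scheduled there. The node name, label, and taint key below are hypothetical.

```yaml
# First, taint and label the node (hypothetical node name):
#   kubectl taint nodes sensitive-node-1 workload=sensitive:NoSchedule
#   kubectl label nodes sensitive-node-1 workload=sensitive
# Then only pods that tolerate the taint, like this one, land on it.
apiVersion: v1
kind: Pod
metadata:
  name: sensitive-workload                       # placeholder name
spec:
  tolerations:
  - key: "workload"
    operator: "Equal"
    value: "sensitive"
    effect: "NoSchedule"
  nodeSelector:
    workload: sensitive                          # assumes the node label above
  containers:
  - name: app
    image: registry.example.com/secure-app:1.0   # placeholder image
```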

Cluster

A cluster is a collection of nodes and a control plane; the control plane is the management layer for the containers running on those nodes. Clusters offer stronger network isolation, with per-cluster DNS.

Networking

Network policies specify how groups of pods are allowed to communicate with each other and with other network endpoints. Ingress policy governs access to the services running inside the Kubernetes cluster, while egress policy describes how services running inside the cluster may access external networks.
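Here is a sketch of a NetworkPolicy combining ingress and egress rules; the labels, names, and port are assumptions, and enforcement requires a network plugin that supports NetworkPolicy (for example Calico or Cilium).

```yaml
# Sketch: pods labelled app=api accept ingress only from pods labelled
# role=frontend, and may send egress traffic only on TCP/443.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: api-allow-frontend                 # placeholder name
spec:
  podSelector:
    matchLabels:
      app: api                             # assumed label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend                   # assumed label
  egress:
  - ports:
    - protocol: TCP
      port: 443                            # assumed allowed egress port
```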

Namespace

Namespaces provide a way of isolating and partitioning resources for convenience. Namespace partitioning does not enforce quality of service; rather, it provides an isolated view of a part of the cluster.
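A small sketch of a namespace and a pod created inside it; both names are placeholders. Resource names only need to be unique within a namespace, which is what gives each team an isolated view of its slice of the cluster.

```yaml
# Sketch: a namespace and a pod scoped to it.
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments                      # placeholder namespace
---
apiVersion: v1
kind: Pod
metadata:
  name: api                                # unique only within the namespace
  namespace: team-payments
spec:
  containers:
  - name: api
    image: registry.example.com/payments-api:1.0   # placeholder image
```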

Miscellaneous

Kubernetes supports quality-of-service (QoS) classes such as Guaranteed and BestEffort and allows reserving resources (a pod whose containers set resource requests equal to their limits is placed in the Guaranteed class). Kubernetes is also extensible: adding functionality does not require forking it, because it provides built-in support for API extensions and custom resource types.
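As an illustration of that extensibility, here is a sketch of a CustomResourceDefinition that teaches the API server about a hypothetical Backup resource; the group, kind, and schema are all assumptions for the example.

```yaml
# Sketch: extending the Kubernetes API with a custom "Backup" type.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com                # must be <plural>.<group>
spec:
  group: example.com                       # hypothetical API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string               # e.g. a cron expression
```

Once such a definition is applied, the new resource type can be created, listed, and watched like any built-in Kubernetes object.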

While Kubernetes is an extremely powerful platform, successfully running a large microservices fleet in production requires significant domain expertise.

Conclusion

While our 10,000-foot view is in no way complete, we hope readers curious about Kubernetes will be inspired to investigate further.
Our practical experience building cloud-native applications and running them in production tells us to tread carefully. Serverless/FaaS platforms offer similar scalability benefits with far less management overhead. While serverless platforms from public cloud vendors will probably require slight code adaptations, correctly configuring a Kubernetes cluster with all of its resources takes time as well.

If you are looking to try running Kubernetes for the first time, start experimenting with development and test environments.
