Why you may need Cloud Native Computing and Kubernetes
To understand the Cloud-Native Computing architecture and why it is necessary to take full advantage of running your software in the cloud, let us start by examining the earlier enterprise computing architectures: client-server and multi-tier solutions.
Client-server architecture (client/server) is a software architecture in which a solution is split into two logical components – a client and a server – communicating over a network. This approach lets general-purpose computers (clients) extend their capabilities by using the resources of other hosts (servers). Client-server solutions enable centralized computing: offloading processing from client hosts to central servers allows clients to run simple, uniform software such as a web browser.
Multitier architecture extends the client-server model by further separating logical components – such as presentation, processing, business logic, and persistence – into individual layers based on their functionality.
The problem with the multitier architecture is that each logical layer must stay synchronized with all the other layers; a change in one layer typically forces coordinated changes, and a coordinated deployment, across the whole stack, otherwise the stack breaks.
Microservices are the answer to the shortcomings of the n-tier model. Microservice architecture structures an application as a collection of loosely coupled services, each of which implements a business capability. Each service implements a set of narrowly related functions and can be deployed and updated independently. These services may be written in different programming languages.
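A microservice in this sense can be as small as a single process exposing one business capability over HTTP. The sketch below is a minimal, hypothetical "price quote" service written as a WSGI app; the endpoint, item names, and prices are made up for illustration:

```python
import json
from urllib.parse import parse_qs


def quote_app(environ, start_response):
    """Minimal WSGI microservice: GET /quote?item=<name> returns a JSON price."""
    params = parse_qs(environ.get("QUERY_STRING", ""))
    item = params.get("item", ["unknown"])[0]
    # Hypothetical pricing data; a real service would own its own data store.
    prices = {"widget": 9.99, "gadget": 19.99}
    body = json.dumps({"item": item, "price": prices.get(item)})
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body.encode("utf-8")]

# To run it standalone:
#   from wsgiref.simple_server import make_server
#   make_server("", 8000, quote_app).serve_forever()
```

Because the service owns a single narrow capability, it could be rewritten in another language or redeployed on its own schedule without touching the rest of the system.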
According to the CNCF: cloud native computing "uses an open source software stack to deploy applications as microservices, packaging each part into its own container, and dynamically orchestrating those containers to optimize resource utilization."
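"Packaging each part into its own container" can be sketched with a minimal Dockerfile; the base image, file names, and entry point here are illustrative, not a prescribed layout:

```dockerfile
# Build a small image for a single Python microservice (hypothetical layout).
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Each container runs exactly one service process.
CMD ["python", "service.py"]
```

Building one image per service is what lets each part of the application be versioned, shipped, and scaled independently.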
Designing apps to be container-native helps with scalability and enables combining microservices developed using different software stacks. Containers, however, are not enough on their own. Deploying multiple microservices at scale creates the need for a consistent way to discover, recover, update, autoscale, and secure applications. A container orchestration solution is what makes that possible.
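As a sketch of what orchestration looks like in practice, a minimal Kubernetes Deployment manifest declares a desired number of replicas and leaves it to the cluster to keep them running; the service name and image below are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quote-service              # hypothetical service name
spec:
  replicas: 3                      # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: quote-service
  template:
    metadata:
      labels:
        app: quote-service
    spec:
      containers:
      - name: quote-service
        image: example.com/quote-service:1.0   # hypothetical image
        ports:
        - containerPort: 8000
```

Applying the manifest with `kubectl apply -f deployment.yaml` hands scheduling, recovery, and scaling decisions over to the cluster: if a container crashes or a node fails, Kubernetes restores the declared state.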
In simple terms, if you want to take full advantage of cloud-scale computing, Cloud-Native Computing is the way to go. It provides you with a platform-independent operating system – Kubernetes – and an entire ecosystem of free and paid tools and resources to help you harness the ability to run your applications at cloud scale without vendor lock-in.
An open source system for deploying, scaling, and managing containerized applications, Kubernetes handles the work of scheduling containers onto compute clusters and manages the workloads to ensure they run as intended.
Kubernetes is highly configurable and extensible: it exposes fine-grained configuration parameters and provides built-in customization mechanisms, including extensions to the built-in Kubernetes API.
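One of those built-in extension mechanisms is the CustomResourceDefinition (CRD), which registers a new resource type with the Kubernetes API. The sketch below adds a hypothetical `Backup` resource; the group name `example.com` and the kind are made up for illustration:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  # The name must be <plural>.<group>.
  name: backups.example.com
spec:
  group: example.com               # hypothetical API group
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
```

Once applied, the new resource can be managed with the standard tooling (`kubectl get backups`), and custom controllers can act on it just as the built-in controllers act on Deployments.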
Many companies that provide services at scale run Kubernetes in production, including Comcast, eBay, Goldman Sachs, The New York Times, Philips, SAP, and others.
There are several alternatives to Kubernetes, including Docker Swarm and Apache Mesos, and there are situations in which an experienced cloud architect would decide to build infrastructure on another container orchestration platform.
Further, some workloads can run on serverless infrastructure, also known as Function-as-a-Service (FaaS). FaaS solutions such as AWS Lambda and Azure Functions are usually relatively inexpensive and provide autoscaling out of the box, while managing Kubernetes requires significant technical expertise.
We recommend that anyone relatively new to Cloud Native Computing start with Kubernetes as the platform of choice, given the breadth and depth of its ecosystem.