Summary: Kubernetes is a platform for managing many containers running across multiple hosts. Like containers themselves, it is designed to run anywhere: on-premises, in private and public clouds, and even in hybrid cloud setups. In this article, I explain the Kubernetes architecture in detail.

Kubernetes was originally created by the Google Borg/Omega team. It is one of the most popular open-source projects in history and has become the leader in the field of container orchestration.

The infographic below (a bubble chart) shows the 30 highest-velocity open source projects as of June 2019; Kubernetes holds third position on the list.

[Infographic: project state and growing contributors for the 30 highest velocity open source projects, June 2019]

Before getting into the Kubernetes architecture, let us look at some of the operational complexities of managing traditional deployments in the early days.

Traditional deployment era


Before the Kubernetes era, software applications were typically designed as monoliths and deployed on physical servers, and there was no way to define boundaries around the resources each application used.

A physical server setup could serve only a single business, as the resources of a physical server could not be distributed among different tenants. So naturally, there were agreed downtime windows, and high availability wasn't a requirement in those early days.

Virtualized deployment era


In the virtualized deployment era, one or more virtual machines were used to deploy applications. This went a long way toward isolating applications from each other within defined resource boundaries (CPU/memory limits).

Though a VM provides complete isolation from the host OS and other VMs, the virtualization layer has a striking negative effect on performance: virtualized workloads run about 30% slower than the equivalent containers. The trade-off is still worthwhile when a strong security boundary is critical.

Container deployment era


Containers are considered lightweight. Each container has its own file system, share of CPU and memory, and process space, and its processes run directly on the host CPU with no virtualization overhead, just as ordinary binary executables do.

Containers are decoupled from the underlying infrastructure and can be ported across clouds and OS distributions. The container runtime also uses disk space and network bandwidth efficiently: images are assembled from layers, and a layer is downloaded only if it is not already cached locally.

Advantages of Containers

  1. Agile app creation and deployment: Creating a container image is easier and more efficient than creating a VM image.
  2. Continuous integration and deployment: Deployments are quick, and rollbacks are easy.
  3. Dev and Ops separation of concerns: Application container images are created at build or release time rather than at deployment time, decoupling applications from infrastructure.
  4. Observability: Application health and other metrics can be observed.

Microservices


Microservices are lightweight: small, isolated units of functionality that can be tested, deployed, and managed completely independently. The approach lets developers write each service in a different language, and each service ships with its own libraries, dependencies, and environment requirements in addition to the code. Microservice architecture helps developers take ownership of their part of the system, from design through delivery and ongoing operations. Major companies like Amazon and Netflix have had significant success building their systems around microservices.

No discussion of microservices is complete without containers, though the two are not the same thing: a microservice may run in a container or in a fully provisioned VM, and a container does not have to host a microservice. In the real world, however, microservices and containers together enable developers to build and manage applications more easily.

Container images


A container image is the built output of a Dockerfile, composed of a series of read-only layers. It bundles the application with all of its dependencies, and a container deployed from the image offers an isolated execution environment for the application.

You can run as many containers from the same image as you like, and the image can be deployed on many platforms: virtual machines, public cloud, private cloud, or hybrid cloud.
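
As a minimal sketch, here is a Kubernetes pod manifest that deploys one container from an image (the nginx image and tag are just examples):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
    spec:
      containers:
        - name: web
          image: nginx:1.25               # example image; any registry image works
          imagePullPolicy: IfNotPresent   # skip the pull if the image is already on the node

When a pull does happen, the runtime downloads only the layers that are not already cached locally, exactly as described above.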

Container Orchestration


i) Most container orchestrators can group hosts together into clusters and schedule containers onto the cluster based on resource availability.

ii) A container orchestrator enables containers in a cluster to communicate with each other, regardless of the host on which they are deployed.

iii) It allows operators to manage and optimize resource usage.

iv) It simplifies access to containerized applications by creating a level of abstraction between the container and the user, and it allows policies to be implemented to secure access to the applications running inside containers.

v) With all these features, container orchestrators are the best choice for managing containerized applications. Most container orchestrators (a few are listed below) can be deployed on bare-metal servers, public cloud, private cloud, and so on; in short, on the infrastructure of your choice. (Example: we can spin up Kubernetes on cloud offerings like AKS, EKS, or GKE, in a company data center, on a workstation, and so on.)

Some more benefits of container orchestration include:

  1. Efficient resource management.
  2. Seamless scaling of services.
  3. High availability.
  4. Low operational overhead at scale.

A few container orchestration tools on the market today:

  1. Amazon ECS
  2. Docker Platform
  3. Google GKE
  4. Azure Kubernetes Service
  5. Openshift Container Platform
  6. Oracle Container Engine for Kubernetes

Why is Kubernetes useful?


Kubernetes can automate traditional system administration tasks such as installing security patches, upgrading servers, and much more. Its main goal is to take care of cluster management and orchestration.

Kubernetes offers zero-downtime deployments (deploy the newer version, wait until it becomes healthy, and then shut down the old version) and reduces the developer effort needed to roll out deployments and patches.
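
As a sketch, this zero-downtime behavior maps to a Deployment's rolling update strategy; the name and image tag below are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 0   # never remove a healthy replica before its replacement is ready
          maxSurge: 1         # add one extra replica at a time during the rollout
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: example/web:2.0   # hypothetical new version being rolled out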

Kubernetes supports continuous deployment practices like canary deployments, which reduce risk by gradually rolling out code to a small set of users first. If everything looks good, the rollout proceeds to the entire infrastructure and all users.
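
One simplified way to approximate a canary on plain Kubernetes is to run a small canary Deployment alongside the stable one, with both sets of pods sharing a common label; all names and replica counts below are illustrative:

    # Stable version: nine replicas
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-stable
    spec:
      replicas: 9
      selector:
        matchLabels: {app: web, track: stable}
      template:
        metadata:
          labels: {app: web, track: stable}
        spec:
          containers:
            - name: web
              image: example/web:1.0   # hypothetical stable image
    ---
    # Canary version: one replica, so roughly 10% of traffic
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-canary
    spec:
      replicas: 1
      selector:
        matchLabels: {app: web, track: canary}
      template:
        metadata:
          labels: {app: web, track: canary}
        spec:
          containers:
            - name: web
              image: example/web:2.0   # hypothetical canary image

A Service that selects only app: web (ignoring the track label) then spreads traffic across both versions roughly in proportion to their replica counts.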

Another common practice is blue-green deployment, which lets you deploy a new version of the application code in a parallel environment (a predictable release with zero-downtime deployment) and switch traffic over to it once sanity and other tests pass. If the newer version has an issue, you can switch back to the previous version.
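
A minimal blue-green sketch: run the blue and green environments as two separate Deployments (labeled version: blue and version: green) and point a single Service at one of them; flipping the selector cuts all traffic over at once, and flipping it back is the rollback. The names here are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: web
    spec:
      selector:
        app: web
        version: blue    # change to "green" to switch traffic to the new environment
      ports:
        - port: 80
          targetPort: 8080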

Some of the notable Kubernetes features:

  1. Automatic bin packing: You tell Kubernetes how much CPU and memory each container needs, and Kubernetes fits containers onto nodes accordingly, using resources optimally (a sketch of this and the next item follows this list).
  2. Self-healing: Kubernetes automatically restarts containers when they go down. If an entire node fails, it replaces and reschedules the affected containers onto another node.
  3. Horizontal Pod Autoscaling (HPA): Applications are scaled horizontally, automatically based on CPU or custom metrics utilization, or manually.
  4. Service discovery and load balancing: Every pod receives its own IP address from Kubernetes, and Kubernetes assigns a single DNS name to a set of pods so that requests can be load-balanced across them.
  5. Automated rollouts and rollbacks: Kubernetes gradually rolls out updates and configuration changes to an application while constantly monitoring its health to prevent downtime; if something goes wrong, Kubernetes rolls the change back for you.
  6. Secret and configuration management: Kubernetes manages secrets and configuration for an application separately from the container images, so containers do not need to be rebuilt every time either changes.
  7. Storage orchestration: Kubernetes lets you automatically mount the storage system of your choice, such as local storage, public cloud storage, and many more.
  8. Batch execution: Kubernetes can manage your batch and long-running jobs and replace failed containers if desired.
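
To make the first two items concrete, here is a minimal pod sketch with resource requests (which the scheduler uses for bin packing) and a liveness probe (which drives self-healing restarts). The image name and health endpoint are hypothetical:

    apiVersion: v1
    kind: Pod
    metadata:
      name: api
    spec:
      containers:
        - name: api
          image: example/api:1.0        # hypothetical image
          resources:
            requests:                   # what the scheduler uses to bin-pack pods onto nodes
              cpu: 250m
              memory: 256Mi
            limits:                     # hard caps enforced at runtime
              cpu: 500m
              memory: 512Mi
          livenessProbe:                # the kubelet restarts the container if this check fails
            httpGet:
              path: /healthz            # hypothetical health endpoint
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15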

Kubernetes architecture explained


The Kubernetes architecture is composed of a master node and a set of worker nodes. Every cluster has at least one worker node, and nodes can be virtual machines or physical servers. Below are the control plane and node components that are tied together in a Kubernetes cluster (refer to the Kubernetes architecture diagram above).

Control plane components


The master node provides the running environment for the control plane, which manages the state of the cluster. Each control plane component plays a distinct role in cluster management.

To communicate with the Kubernetes cluster, users send requests to the master node via a command-line interface, a web user interface, or the API. It is important to keep the control plane running at all costs: losing it can introduce downtime and cause service disruption to clients, with possible loss of business.

To make the control plane fault-tolerant, master nodes should be configured for high availability. Only one master node actively manages the cluster at a time, while the control plane components stay in sync across all master node replicas. This configuration adds resiliency to the control plane: if the active master replica fails, another replica takes over and continues operating the Kubernetes cluster without downtime. Generally, these concerns are taken care of for you in managed Kubernetes offerings.

The primary components that exist on the master node are:

  1. API server
  2. Controller manager
  3. etcd
  4. Cloud controller manager
  5. Scheduler

API server


All administrative tasks on the master node are coordinated by the kube-apiserver, the central control plane component. The API server intercepts calls from users, operators, and external agents, then validates and processes them. During processing, the API server reads the cluster's current state from etcd, and after the call is executed, the resulting state of the cluster is saved back into the distributed key-value data store for persistence.

The API server is the only control plane component that talks to etcd, both to read and to write cluster state information, acting as the middleman for every other control plane agent.

etcd


etcd is a distributed key-value data store used to persist cluster state data only, not client workload data. New data is written to the store only by appending; data is never deleted in place, and obsolete data is compacted periodically to minimize the size of the store. etcd comes built into every managed Kubernetes offering.

Scheduler


The role of the scheduler is to assign new objects, such as pods, to nodes. During scheduling, decisions are made based on the current cluster state and the new object's requirements. The scheduler obtains resource usage data for each worker node in the cluster, as well as the new object's requirements (which are part of its configuration data), from etcd via the API server.

The scheduler also takes into account quality of service, data locality, affinity and anti-affinity rules, taints and tolerations, and so on.
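
As a small sketch of such constraints (the label keys, values, and image are hypothetical), a pod spec can express node affinity and tolerations like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: analytics
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: disktype            # hypothetical node label
                    operator: In
                    values: ["ssd"]
      tolerations:
        - key: "dedicated"                   # hypothetical taint on specialized nodes
          operator: "Equal"
          value: "analytics"
          effect: "NoSchedule"
      containers:
        - name: analytics
          image: example/analytics:1.0       # hypothetical image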

Controller manager


The controller manager runs controllers that regulate the state of the cluster. Controllers are watch loops that run continuously, comparing the cluster's desired state with its current state; in case of a mismatch, corrective action is taken in the cluster until the current state matches the desired state.

Kube controller manager


The kube-controller-manager runs controllers that watch the shared state of the cluster through the API server and reconcile the current state with the desired state. Examples include the replication controller, endpoints controller, namespace controller, and service accounts controller. All of these controllers are bundled into a single process to reduce complexity.

Cloud controller manager


The cloud controller manager runs controllers responsible for interacting with the cloud provider's underlying infrastructure: supporting availability zones, managing storage volumes, load balancing, and routing.

Worker nodes


Worker nodes provide the running environment for client applications, packaged as containerized microservices. These applications are encapsulated in pods, which are controlled by the control plane agents running on the master node.

Pods are scheduled onto worker nodes, where they find the compute, memory, and storage resources they need, as well as networking to talk to the outside world. The pod is the smallest scheduling unit in Kubernetes: a logical collection of one or more containers that are co-scheduled together (a minimal example follows).
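
As a minimal sketch (the image names are hypothetical), here is a pod bundling an application container with a logging sidecar; both containers are always placed on the same node and share the pod's IP address and network namespace:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar
    spec:
      containers:
        - name: web
          image: example/web:1.0           # hypothetical application image
          ports:
            - containerPort: 8080
        - name: log-shipper
          image: example/log-shipper:1.0   # hypothetical sidecar image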

To access applications from the external world, we connect to worker nodes, not to the master node. A worker node has the following components:

  1. Container runtime
  2. Kubelet
  3. Kube-proxy

Container Runtime


The container runtime is responsible for the actual running of pods and containers, as well as image management.

kubelet


The kubelet runs on each node in the cluster and communicates with the control plane components on the master node. It receives pod definitions, primarily from the API server, and interacts with the container runtime to run the containers associated with each pod. It also maintains the lifecycle of those containers.

Kube-proxy


kube-proxy is a network agent that runs on each node and is responsible for dynamically updating and maintaining all of the networking rules on the node.
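
kube-proxy is, for instance, what implements the virtual IPs of Kubernetes Services. As a minimal sketch (names and ports are illustrative), a Service gives a set of pods a single stable name and load-balances across them:

    apiVersion: v1
    kind: Service
    metadata:
      name: web             # resolvable in-cluster via the cluster DNS
    spec:
      selector:
        app: web            # matches pods labeled app: web
      ports:
        - port: 80          # port the Service exposes
          targetPort: 8080  # port the pods listen on

kube-proxy programs each node's networking rules so that traffic sent to the Service's cluster IP is forwarded to one of the matching pods.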

What problem does Kubernetes solve?


Problem statement

  1. Code deployments and patches need to be rolled out, and rolled back, multiple times in a known, controlled way.
  2. Software needs to be tested more frequently, with quick feedback from that testing.
  3. The business needs applications and services to be available 24/7.
  4. Traffic spikes during the holiday season (Black Friday, Cyber Monday, etc.) must be met on demand.
  5. Cloud infrastructure costs should stay low in the off-peak season while still covering peak holiday demand.

Resolution


  1. All of these problems can be solved using Kubernetes.
  2. We can build CI/CD on top of Kubernetes, so that we can distribute the load, run as many builds as needed in parallel, and scale in or out based on demand.
  3. A/B, canary, blue-green, and other deployment mechanisms allow you to deploy code quickly and get feedback from users. If everything is good, we can promote the artifacts to the next stage (full-blown deployment); otherwise, we roll back to the older version.
  4. For availability, use a managed Kubernetes platform from a major cloud provider such as AWS, Google Cloud, or Azure. EKS guarantees 99.95% uptime overall, AKS offers 99.95% with availability zones enabled and 99.9% without, and GKE provides 99.5% uptime for zonal deployments and 99.95% for regional deployments.
  5. Kubernetes can scale applications based on metrics (CPU utilization, or custom metrics such as requests per second) using the Horizontal Pod Autoscaler. In short, the HPA adds and deletes replicas, so with autoscaling enabled it can absorb sudden bursts of traffic during events like Black Friday and Cyber Monday (a sketch follows this list).
  6. Kubernetes is designed to run anywhere, so the business can run on public, private, or hybrid cloud.
  7. Kubernetes keeps your operations cost low and your developers productive. It supports the new types of applications being built today and is a powerful platform not only for today's applications but for future ones as well.
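
To make item 5 concrete, here is a minimal Horizontal Pod Autoscaler sketch; the target Deployment name and thresholds are illustrative:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web            # hypothetical Deployment to scale
      minReplicas: 3
      maxReplicas: 50
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70   # add replicas when average CPU exceeds 70%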

What type of applications can run on Kubernetes?


Kubernetes is a great platform for building platforms: it helps you manage and scale the underlying cloud infrastructure. We can build Platform as a Service, serverless, Function as a Service, and Software as a Service offerings on top of Kubernetes.

Kubernetes does not tie itself down with dependencies on, or limitations of, particular languages or application types. If an application can run successfully in a container, it should run on Kubernetes as well. Kubernetes supports a wide variety of workloads:

  1. Stateless applications
  2. Stateful applications
  3. Big data and machine learning workloads
  4. Microservice workloads

My Two Cents


  1. Docker image security: Scan container images for known security vulnerabilities continuously, often, and automatically.
  2. Use lightweight Docker images. Refer to distroless images.
  3. Enable Docker Content Trust.
  4. Run containers with non-root user privileges (a sketch follows this list). Refer to Dockerfile tips for production.
  5. Follow microservice design patterns: for example, make sure you are running one process per container.
  6. Last but not least, don't adopt a new technology just because it is cool; if you don't have an actual use case or scenario, don't use it for the sake of using it :). You are likely to fail big time.
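
As a sketch of item 4 at the Kubernetes level (the image name is hypothetical), a pod can enforce non-root execution through its security context:

    apiVersion: v1
    kind: Pod
    metadata:
      name: hardened-app
    spec:
      securityContext:
        runAsNonRoot: true          # refuse to start containers that would run as root
        runAsUser: 10001            # an arbitrary unprivileged UID
      containers:
        - name: app
          image: example/app:1.0    # hypothetical image
          securityContext:
            allowPrivilegeEscalation: false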

Conclusion


Kubernetes is a tool for managing applications that run across multiple containers. Years back, Google was running all of its services, such as Gmail, Google Maps, and Google Search, in containers. Since no suitable orchestrator was available at the time, Google was compelled to invent one, named Borg. Based on those learnings and the challenges faced with its internal container orchestration, Google finally founded an open-source project in 2014 named Kubernetes.