Kubernetes Architecture Diagram Explained
Summary: Kubernetes is a platform for managing multiple containers running across multiple hosts. Like containers, it is designed to run anywhere: on-prem, in a private or public cloud, or even in a hybrid cloud. In this article, I explain the Kubernetes architecture diagram in detail.
Kubernetes was originally created by Google's Borg/Omega team. It is one of the most popular open-source projects in history and has become a leader in the field of container orchestration.
The infographic (bubble chart) below shows the top 30 highest-velocity open source projects as of June 2019; Kubernetes holds 3rd position in the list.
30 highest velocity open source projects as of June 2019
Before getting into the Kubernetes architecture, let's look at some of the operational complexities of managing traditional deployments in the early days.
Traditional Deployment era
Before the Kubernetes era, software applications were typically designed as monoliths and deployed on physical servers, and there was no way to define resource boundaries for the applications running on them.
A physical server setup could serve only a single tenant, as the server's resources could not be divided among different tenants. So naturally, downtime was agreed upon in advance, and high availability wasn't a requirement in the early days.
Virtualized deployment era
In the virtualized deployment era, one or more virtual machines are used to deploy applications. This helps isolate applications from each other within defined resource boundaries (CPU/memory limits).
Although virtualization provides complete isolation from the host OS and other VMs, the virtualization layer has a noticeable cost: virtualized workloads run roughly 30% slower than equivalent containers. It remains useful, however, when a strong security boundary is critical.
Container deployment era
Containers are considered lightweight. They have their own file system, CPU share, memory, and process space, and they run directly on the host CPU with no virtualization overhead, just like ordinary binary executables.
Containers are decoupled from the underlying infrastructure and can be ported across clouds and OS distributions. The container runtime also uses disk space and network bandwidth efficiently, because it assembles an image from layers and downloads a layer only if it is not already cached locally.
Advantages of Containers
- Agile app creation and deployment: creating a container image is easier and more efficient than creating a VM image.
- Continuous integration and deployment: deployments are quick, and rollbacks are easy.
- Dev and Ops separation of concerns: application container images are created at build or release time rather than at deployment time, decoupling applications from infrastructure.
- Observability: application health and other metrics can be observed.
Microservices are small, lightweight, isolated functions that can be tested, deployed, and managed completely independently. They let developers write each service in a different language, and in addition to the code, each service bundles its libraries, dependencies, and environment requirements. Microservice architecture helps developers take ownership of their part of the system, from design to delivery and ongoing operations. Major companies like Amazon and Netflix have had significant success building their systems around microservices.
No discussion of microservices is complete without containers, though the two are not the same thing: a microservice may run in a container or in a fully provisioned VM, and a container doesn't have to host a microservice. In practice, microservices and containers together enable developers to build and manage applications more easily.
A container image is the compiled version of a Dockerfile, built up from a series of read-only layers. It bundles an application with all of its dependencies, and a container deployed from the image provides an isolated execution environment for the application.
You can run as many containers of the same image as you like, and the image can be deployed on many platforms, such as virtual machines, public cloud, private cloud, and hybrid cloud.
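As a minimal sketch of how an image is built from read-only layers, consider the hypothetical Dockerfile below (the Python app and file names are assumptions for illustration):

```dockerfile
# Each instruction below produces a layer of the final image; a layer
# already cached locally is not downloaded again when the image is pulled.

# Base image layers (assumed base for this hypothetical app)
FROM python:3.11-slim

# Metadata layer: working directory for subsequent instructions
WORKDIR /app

# Dependency layers: copying the manifest first lets the expensive
# install layer be reused from cache when only application code changes
COPY requirements.txt .
RUN pip install -r requirements.txt

# Application code layer
COPY . .

# Default process to run when a container starts from this image
CMD ["python", "app.py"]
```

Ordering instructions from least- to most-frequently changed is what makes layer caching effective: a code-only change rebuilds just the final layers.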
- Most container orchestrators can group hosts together into clusters and schedule containers on the cluster based on resource availability.
- A container orchestrator enables containers in a cluster to communicate with each other, regardless of the host where they are deployed.
- It allows you to manage and optimize resource usage.
- It simplifies access to containerized applications by creating a level of abstraction between the container and the user, and it allows policies to be implemented to secure access to applications running inside containers.
- With all these features, container orchestrators are the best choice for managing containerized applications. Most of the orchestrators listed below can be deployed on bare-metal servers, public cloud, private cloud, and so on; in short, the infrastructure of your choice (for example, we can spin up Kubernetes via AKS, EKS, or GKE, in a company data center, or on a workstation).
Some more benefits of container orchestration include,
- Efficient resource management.
- Seamless scaling of services.
- High availability.
- Low operational overhead at scale.
A few container orchestration tools in the market today
- Amazon ECS
- Docker Platform
- Google GKE
- Azure Kubernetes Service
- Openshift Container Platform
- Oracle Container Engine for Kubernetes
What is Kubernetes?
As per the official documentation:
Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications.
Kubernetes' main goal is to take care of cluster management and orchestration. It offers a rich set of effective software primitives that automate compute, storage, networking, and other infrastructure services.
The key features of Kubernetes include:
- Automatic bin packing: you tell Kubernetes how much CPU and memory each container needs, and Kubernetes fits containers onto nodes accordingly to use resources optimally.
- Self-healing: Kubernetes ensures containers are automatically restarted when they go down. If an entire node fails, it reschedules the affected containers onto another node.
- Horizontal scaling: applications can be scaled out manually or automatically via the Horizontal Pod Autoscaler (HPA), based on CPU utilization or custom metrics.
- Service discovery and load balancing: containers receive their own IP addresses from Kubernetes, and Kubernetes assigns a single DNS name to a set of containers so that requests can be load-balanced across them.
- Automated rollouts and rollbacks: Kubernetes gradually rolls out updates and configuration changes to an application, constantly monitoring its health to prevent downtime; if something goes wrong, Kubernetes rolls the change back for you.
- Secret and configuration management: Kubernetes manages secrets and configuration for an application separately from its container images, so you don't have to rebuild the image for every change.
- Storage orchestration: Kubernetes lets you automatically mount the storage of your choice, whether local storage, public cloud storage, or more.
- Batch execution: Kubernetes can manage your batch and long-running jobs and replace failed containers if desired.
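To make the bin-packing feature above concrete, here is a minimal, hypothetical Pod manifest declaring how much CPU and memory its container needs (the names and image are assumptions):

```yaml
# The scheduler uses "requests" to bin-pack the Pod onto a node with
# enough free capacity; "limits" cap what the container may consume.
apiVersion: v1
kind: Pod
metadata:
  name: web            # hypothetical Pod name
spec:
  containers:
  - name: app
    image: nginx:1.25  # example image
    resources:
      requests:
        cpu: "250m"    # a quarter of a CPU core
        memory: "128Mi"
      limits:
        cpu: "500m"
        memory: "256Mi"
```

Requests drive scheduling decisions; limits are enforced at runtime, so a container exceeding its memory limit is killed and restarted (self-healing in action).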
Kubernetes architecture explained
Kubernetes follows a client-server architecture, composed of a master node (a.k.a. the Control Plane) and a set of worker nodes.
The master node is the control node in the Kubernetes cluster. Each cluster needs at least one master, responsible for the management and control of the entire cluster; its major role is to schedule work across the worker nodes. The primary components on the master node are the API server, Scheduler, Controller Manager, etcd, Cloud Controller Manager, etc. (explained briefly below).
Note: In Kubernetes, the unit of work that is scheduled is called a Pod, and a Pod can hold one or more containers.
The nodes can be either physical servers or virtual machines (VMs). Users of the Kubernetes environment interact with the master node using either a command-line interface (kubectl), an application programming interface (API), or a graphical user interface (GUI).
Kubernetes has two goals: to be a cluster manager and a resource manager. It uses a master-to-worker-node model, which means worker nodes can be added and scaled as needed. The architecture can provide different worker node sizes for different workloads, and the resource manager finds a suitable location in your cluster to run each piece of work.
Note: Each Kubernetes cluster includes a master node and at least one worker node. (A cluster usually contains multiple worker nodes).
Below are the control plane and node components that are tied together in a Kubernetes cluster. (Refer to Kubernetes architecture diagram above)
Control Plane components
The master node provides the running environment for the control plane, which manages the state of the cluster. Each control plane component plays a distinct role in cluster management.
Note: It is important to keep the control plane running at all costs. Losing the control plane may introduce downtime, causing service disruption to clients and possible loss of business.
To make the control plane fault-tolerant, master nodes should be configured in high-availability mode. Only one master node actively manages the cluster, while the control plane components stay in sync across all master replicas. This configuration adds resiliency: if the active master replica fails, another replica takes over and continues operating the Kubernetes cluster without downtime. Managed Kubernetes offerings generally take care of this for you.
The primary components that exist on the master node are
- API server
- Scheduler
- Controller Manager
- Cloud Controller Manager
- etcd
All administrative tasks are coordinated by the kube-apiserver, the central control plane component on the master node. The API server intercepts calls from users, operators, and external agents, then validates and processes them. During processing, the API server reads the current cluster state from etcd, and after executing a call, it saves the resulting cluster state back into the distributed key-value data store for persistence.
The API server is the only control plane component that talks to etcd, both to read and to write cluster state information, acting as the middleman for every other control plane agent.
etcd is a distributed key-value data store used to persist cluster state data. Data is compacted periodically to minimize the size of the store, but is not otherwise deleted. In managed Kubernetes offerings, etcd comes built in and is maintained for you.
The role of the Scheduler is to assign new objects, such as Pods, to nodes. During scheduling, decisions are made based on the current cluster state and the new object's requirements. The Scheduler obtains resource usage data for each worker node in the cluster, and the new object's requirements from its configuration data, via the API server and etcd.
The Scheduler also takes into account quality of service, data locality, affinity, taints and tolerations, and so on.
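A hypothetical Pod spec can sketch how these extra scheduling inputs are expressed (the labels, taint key, and image below are assumptions, not values from any real cluster):

```yaml
# Illustrates scheduler inputs beyond free resources: a nodeSelector
# (placement constraint) and a toleration matching a node taint.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job          # hypothetical name
spec:
  nodeSelector:
    disktype: ssd        # only schedule onto nodes labeled disktype=ssd
  tolerations:
  - key: "dedicated"     # lets the Pod land on nodes tainted
    operator: "Equal"    # dedicated=gpu:NoSchedule, which repel
    value: "gpu"         # all Pods without this toleration
    effect: "NoSchedule"
  containers:
  - name: trainer
    image: busybox:1.36  # placeholder image
    command: ["sleep", "3600"]
```

Taints repel Pods from nodes, while selectors and affinity attract Pods to nodes; the Scheduler combines both with resource availability when choosing a node.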
The controller managers run controllers to regulate the state of the cluster. Controllers are watch loops that run continuously, comparing the cluster's desired state with its current state; in case of a mismatch, corrective action is taken until the current state matches the desired state.
Kube controller manager
Runs controllers that watch the shared state of the cluster through the API server and reconcile the current state with the desired state. Examples include the replication controller, endpoints controller, namespace controller, and service accounts controller. All of these controllers are bundled into a single process to reduce complexity.
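The desired-state reconciliation described above is easiest to see in a minimal Deployment manifest (names and image are hypothetical):

```yaml
# You declare the desired state (3 replicas); the controller's watch
# loop keeps the current state matching it, recreating Pods if any
# are deleted or their node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello            # hypothetical name
spec:
  replicas: 3            # desired state: three identical Pods
  selector:
    matchLabels:
      app: hello
  template:              # Pod template stamped out per replica
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25   # example image
```

Deleting one of the three Pods by hand does not reduce the count for long: the controller observes the mismatch and creates a replacement.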
Cloud controller manager
Runs controllers that interact with the underlying infrastructure of the cloud provider to manage availability zones, storage volumes, load balancing, and routing.
Worker node components

Worker nodes provide the running environment for client applications, which are packaged as containerized microservices. The applications are encapsulated in Pods, which are controlled by the control plane agents running on the master node.
Pods are scheduled on worker nodes, where they find the required compute, memory, and storage resources, plus the networking to talk to the outside world. The Pod is the smallest scheduling unit in Kubernetes: a logical collection of one or more containers scheduled together.
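A hypothetical two-container Pod sketches what "scheduled together" means in practice; the sidecar pattern and names below are illustrative assumptions:

```yaml
# Containers in a Pod share a network namespace and volumes and are
# always placed on the same node; here a sidecar tails the web
# server's logs from a shared emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-agent
spec:
  containers:
  - name: web
    image: nginx:1.25               # serves traffic
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx     # nginx writes its logs here
  - name: log-agent
    image: busybox:1.36             # sidecar sharing the same volume
    command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  volumes:
  - name: logs
    emptyDir: {}                    # scratch volume shared by both containers
```

Because both containers live in one Pod, they can also reach each other over localhost, which is what distinguishes a Pod from two separately scheduled containers.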
To manage applications from the outside world, we communicate with the master node (via the API server), not with the worker nodes directly. A worker node has the following components:
- Container Runtime
- Kubelet
- Kube-proxy
The container runtime is responsible for actually running Pods and their containers, and for image management.
The kubelet runs on each node in the cluster and communicates with the control plane components on the master node. It receives Pod definitions, primarily from the API server, and interacts with the container runtime to run the containers associated with each Pod, maintaining their lifecycle.
Kube-proxy is a network agent that runs on each node and is responsible for dynamically updating and maintaining all networking rules on the node.
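The networking rules kube-proxy maintains typically implement Services. A minimal, hypothetical Service manifest (names and ports are assumptions) shows the idea:

```yaml
# A Service gives a stable virtual IP and DNS name to a set of Pods;
# kube-proxy programs the per-node rules that load-balance traffic
# across the Pods matching the selector.
apiVersion: v1
kind: Service
metadata:
  name: hello            # hypothetical name; also becomes the DNS name
spec:
  selector:
    app: hello           # routes to Pods labeled app=hello
  ports:
  - port: 80             # port clients connect to on the Service
    targetPort: 8080     # port the Pods actually listen on (assumed)
```

Pods come and go with changing IPs, but the Service name and IP stay stable, which is how Kubernetes delivers the service discovery and load balancing described earlier.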
What problem does Kubernetes solve?
- Code deployments and patches need to be rolled out, and rolled back, multiple times in a known, controlled way.
- Software needs to be tested more frequently, with quick feedback from that testing.
- The business needs applications and services to be available 24/7.
- The business must meet traffic spikes during holiday seasons (Black Friday, Cyber Monday, etc.).
- Cloud infrastructure costs should be reduced outside the peak holiday season.
- All of those problems can be solved using Kubernetes.
- We can build CI/CD on top of Kubernetes, distributing the load, running many builds in parallel, and scaling in/out based on the load.
- A/B testing, canary, blue-green, and other deployment mechanisms let you ship code quickly and get feedback from users. If everything looks good, we promote the artifacts to the next stage (a full rollout); otherwise we roll back to the older version.
- For availability, get a managed Kubernetes platform from a top cloud provider like AWS, Google Cloud, or Azure. EKS guarantees 99.95% uptime overall; GKE provides 99.5% uptime for zonal deployments and 99.95% for regional deployments.
- Kubernetes can scale applications based on metrics (CPU utilization, or custom metrics such as requests per second) using the Horizontal Pod Autoscaler. In short, with autoscaling enabled, the HPA adds and deletes replicas and can absorb sudden bursts of traffic during events like Black Friday and Cyber Monday.
- Kubernetes is designed to run anywhere, so the business can be on public, private, or hybrid cloud.
- Kubernetes keeps your operations cost low and your developers productive. It supports all the new types of applications being built today and is a really powerful platform, not only for today's applications but for future ones as well.
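The HPA behavior mentioned above can be sketched with a minimal manifest; the Deployment name and thresholds are hypothetical:

```yaml
# Scales the "hello" Deployment between 2 and 10 replicas, adding
# Pods when average CPU utilization across them exceeds 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello            # assumed Deployment name
  minReplicas: 2           # floor for quiet periods
  maxReplicas: 10          # ceiling for traffic spikes
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

The HPA only adjusts the replica count; pairing it with cluster autoscaling on the node pool is what actually reduces infrastructure cost off-peak.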
What type of applications can run in Kubernetes?
Kubernetes is a great platform for building platforms: it helps you manage and scale the underlying infrastructure. We can build Platform as a Service, serverless, Function as a Service, and Software as a Service offerings on top of Kubernetes.
Kubernetes does not tie itself down with dependencies or limitations on which languages and applications it supports. If an application can run successfully in a container, it should run in Kubernetes as well. Kubernetes supports a wide variety of workloads, including:
- Stateful apps
- Big Data and Machine Learning workloads.
- Microservice workloads.
My Two Cents
- Enable Docker image security: scan container images for known security vulnerabilities continuously, often, and automatically.
- Use lightweight Docker images (see distroless images) and enable Docker Content Trust.
- Run containers with non-root user privileges (see Dockerfile tips for production).
- Follow microservices design patterns: for example, make sure you are running one process per container.
- Last but not least, don't adopt a new technology just because it is the cool thing; if you don't have an exact use case or scenario, don't use it for the sake of using it :). You are likely to fail big time.
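The distroless and non-root tips above can be combined in one hypothetical multi-stage Dockerfile (the Go app and paths are assumptions for illustration):

```dockerfile
# Stage 1: build a static binary in a full toolchain image
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO disabled so the binary has no libc dependency
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: distroless runtime image; it contains only the app and its
# runtime dependencies, with no shell or package manager, shrinking
# the attack surface
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=build /app /app
# Run as the image's built-in non-root user
USER nonroot
ENTRYPOINT ["/app"]
```

The multi-stage split also keeps the final image small: the Go toolchain never ships to production, only the compiled binary does.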
Kubernetes is a tool for managing applications that run in many containers. Years back, Google was already running all of its services, like Gmail, Google Maps, and Google Search, in containers. Since no suitable orchestrator was available at the time, Google was forced to invent one, named Borg. Based on the lessons learned and the challenges faced with that internal container orchestrator, Google finally open-sourced a project named Kubernetes in 2014.