Kubernetes and Its Architecture

September 8, 2022
/
VenuGopal Reddy Pagidi
/
The Cloud
/

Kubernetes (also referred to as K8s) is an open-source container orchestration tool, written in the Go language, for applications that run in containers. Google released it in mid-2014, building on more than a decade of experience with its internal container systems, Borg and its successor Omega. In 2015, Google partnered with the Linux Foundation to create the CNCF (Cloud Native Computing Foundation) as a launchpad for the project.

One of the things that makes Kubernetes so popular is its flexibility. Kubernetes is designed from the ground up to fully leverage the power of cloud computing. It comes with an architecture that allows for even the most complex systems to be scalable and highly available, all without adding complexity to the environment setup.

The architecture of Kubernetes has a number of moving parts, each with a specific set of functions. Kubernetes uses containers and other native components as its building blocks, and it can group any number of containers into one logical unit for management and deployment. And although Kubernetes itself runs on Linux, that Linux host can be bare metal, a virtual machine, a cloud instance, or an OpenStack node. This article breaks down the components that make Kubernetes so flexible. The following are the main components of the K8s architecture.

  • The Master node
  • The Worker nodes

The Master Node

The Master node, the first component in a Kubernetes deployment, handles incoming requests, whether they arrive through the UI, the CLI, or the API. A Master node contains four main components, each handling a specific function. These include:

  • API server
  • Scheduler
  • Controller-manager
  • etcd

The Master node manages the rest of the cluster through the control plane. Everything that goes on inside the cluster is governed by configuration details and control points defined through the Master node. A Master node can manage multiple child nodes, known as Worker nodes, which run the actual microservices and workloads.

As mentioned before, the first component in a Master node is the API server. It processes the REST commands that control the cluster, exposes the Kubernetes API to the UI and to external users, and acts as the bridge that ties the other Master node components together.

The Scheduler takes care of resource management by tracking the resource usage of each Worker node, and it assigns work to nodes in the form of Pods. Since Kubernetes is highly scalable, pods on the Worker nodes need to scale up (or down) alongside the cluster. The Scheduler is aware of the cluster topology and handles two main tasks: filtering and scoring. Filtering removes nodes that cannot run a given pod, using predicates such as CheckNodeMemoryPressure, while scoring ranks the remaining candidates to pick the best fit.
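As a sketch of the information the Scheduler filters and scores on, consider the hypothetical pod spec below (the name, image, and label are placeholders): nodes that cannot satisfy the resource requests or the node selector are filtered out, and the remaining nodes are scored.

```yaml
# Hypothetical pod spec illustrating Scheduler inputs.
apiVersion: v1
kind: Pod
metadata:
  name: scheduled-webapp      # placeholder name
spec:
  containers:
  - name: webapp
    image: nginx:1.23         # placeholder image
    resources:
      requests:
        cpu: "250m"           # filter: node must have 0.25 CPU allocatable
        memory: "128Mi"       # filter: node must have this much memory free
  nodeSelector:
    disktype: ssd             # filter: only nodes labeled disktype=ssd qualify
```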

Next, we have the Controller manager, which runs the controllers. Each controller is a control loop that watches the cluster state through the API server and works to move the current state toward the desired state. Built-in examples include the Job controller for run-to-completion tasks and the Deployment controller, which rolls out new versions with resilience built in. You also have the option to write your own controller for the control plane.
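To make the desired-state idea concrete, here is a minimal, hypothetical Deployment (names and image are placeholders): the Deployment controller continuously reconciles the observed number of pods against spec.replicas, recreating pods if any die.

```yaml
# Hypothetical Deployment: the controller keeps three replicas running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment     # placeholder name
spec:
  replicas: 3                 # desired state the control loop reconciles toward
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: nginx:1.23     # placeholder image
```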

The fourth component that completes the Master node is etcd, a distributed key-value store. It holds the cluster's configuration values and enables service discovery, and it exposes an API for standard Create, Read, Update, and Delete (CRUD) operations. etcd is the heart of the K8s cluster, so keep it safe (and backed up).

The Worker Nodes

Worker nodes are where the magic happens. A Worker node used to be called a minion (which is, admittedly, cooler than ‘worker’), and it handles pods in a fluid way. Worker nodes make up the Kubernetes runtime environment and allow containers and services to utilize cluster resources. Similar to the Master node, several components make Worker nodes functional. These include:

  1. Kubelet
  2. Kube-proxy
  3. Pods
  4. Container runtime

The Kubelet is the main component, handling most of the workload on an individual Worker node by acting as the agent between Master and Worker. It makes sure the node is healthy, performs functions such as pod creation and volume mounting, and ensures that the containers backing specific services are running. In a Master-Worker setup, the Kubelet pulls pod configurations from the API server on the Master and reports a health status back to the Master every few seconds.
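One way the Kubelet keeps containers healthy is by running probes defined in the pod spec. The hypothetical example below (name and image are placeholders) asks the Kubelet on the node to poll an HTTP endpoint and restart the container if the check fails.

```yaml
# Hypothetical pod with a liveness probe executed by the Kubelet.
apiVersion: v1
kind: Pod
metadata:
  name: probed-webapp         # placeholder name
spec:
  containers:
  - name: webapp
    image: nginx:1.23         # placeholder image
    livenessProbe:
      httpGet:
        path: /               # the Kubelet polls this endpoint
        port: 80
      initialDelaySeconds: 5  # grace period after container start
      periodSeconds: 10       # check interval
```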

Kube-proxy, on the other hand, takes care of service abstraction. Its primary function is connection forwarding: it maintains network rules on each node so that traffic addressed to a Service reaches the right backing pods, allowing for seamless communication between pods. Many modern Kubernetes environments also layer a service mesh on top to further simplify pod-to-pod communications.
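The abstraction kube-proxy implements is the Service. In the hypothetical manifest below (names are placeholders), kube-proxy programs forwarding rules so that traffic sent to the Service's stable cluster IP on port 80 is routed to whichever pods currently carry the matching label.

```yaml
# Hypothetical Service: kube-proxy forwards its traffic to matching pods.
apiVersion: v1
kind: Service
metadata:
  name: webapp-svc      # placeholder name
spec:
  selector:
    app: webapp         # traffic goes to pods carrying this label
  ports:
  - port: 80            # port exposed on the Service's cluster IP
    targetPort: 80      # container port on the selected pods
```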

Kubernetes doesn't actually run containers directly. Rather, it wraps one or more containers into a higher-level structure called a Pod, which is the basic unit of deployment in K8s. Containers in the same pod share the same resources and local network. Check out this sample pod.yml file:

apiVersion: v1
kind: Pod
metadata:
  name: webapp
  labels:
    app: webapp
    release: "0"
spec:
  containers:
  - name: webapp
    image: venu-docker-registery-webapp-angular:release0

And finally, container runtime is the component responsible for running the containers. Kubernetes supports a variety of container runtimes including Docker, rkt, containerd, cri-o, and any implementation of the Kubernetes CRI (Container Runtime Interface).

[Figure: Kubernetes architecture (Stark & Griffin, 2018)]

Key Principles of the Kubernetes Architecture

The elegance of the Kubernetes architecture has its roots in deliberate design. Kubernetes incorporates four design principles: scalability, high availability, security, and portability. Apps are deployed as microservices, so the entire system becomes incredibly scalable. Containers can be stateless, duplicated, and scaled automatically.

Since replication and load balancing are native functions of Kubernetes, and can be fully automated, high availability becomes an easy objective to achieve. Even when certain pods drain a node's resources, there is no single point of failure. Managed Kubernetes services like AWS EKS make highly available Kubernetes clusters even more accessible.

Security may not be a headline feature of Kubernetes, but that doesn't mean the architecture cannot be secured with simple measures. Limiting communications between pods, making sure that only assigned ports are open, and adding extra layers of security are all straightforward. In the case of AWS EKS, you also have the entire security arsenal of Amazon Web Services at your disposal.
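Limiting communications between pods, for instance, can be done declaratively with a NetworkPolicy. The hypothetical policy below (all names and labels are placeholders) allows only pods labeled role=frontend to reach the webapp pods, and only on TCP port 80; note that it takes effect only when the cluster's network plugin enforces NetworkPolicy.

```yaml
# Hypothetical NetworkPolicy restricting ingress to webapp pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: webapp-allow-frontend   # placeholder name
spec:
  podSelector:
    matchLabels:
      app: webapp               # the pods being protected
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend        # only these pods may connect
    ports:
    - protocol: TCP
      port: 80                  # and only on this port
```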

Portability represents the universal nature of Kubernetes. The entire architecture is platform- and hardware-agnostic. It can be deployed to any cloud environment as long as the resources needed by your web services are present. Everything from the Master node to containers and container runtimes can be moved to a new cloud environment without extensive adjustments.

Leveraging Kubernetes

Kubernetes architecture is built for cloud computing. It divides large tasks into smaller runtimes and makes environment management easy. It is up to the web apps and services that utilize Kubernetes to fully leverage the power of the environment.

References

Stark, W., & Griffin, A. (2018). Kubernetes architecture – All things in moderation. Retrieved 8 November 2019, from https://hydrasky.com/linux/kubernetes-architecutre/

Ibexlabs is an experienced DevOps & Managed Services provider and an AWS consulting partner. Our AWS Certified DevOps consultancy team evaluates your infrastructure and makes recommendations based on your individual business or personal requirements. Contact us today and set up a free consultation to discuss a custom-built solution tailored just for you.

VenuGopal Reddy Pagidi

Venugopal is a Lead DevOps Engineer at Ibexlabs.
