I am a technology enthusiast with a keen focus on defining high-impact Cloud Native solutions. Expert in Scrum & DevOps practices.
Learn Kubernetes and Container Orchestration Management
As business applications are rapidly being developed with a Cloud-Native approach, or existing workloads are migrated to the cloud, everyone from business and product managers to software architects, developers, and operations engineers is finding it hard to ignore two technologies: Docker and Kubernetes.
Containers are making application deployments portable across different types of infrastructure: physical, virtual, or on-demand (cloud). However, the number of containers running in production can grow very quickly. Thus, we need technology components that can manage them effectively.
This tutorial is intended for people who are looking for a conceptual understanding of Kubernetes. In this tutorial, I will cover container orchestration management and Kubernetes concepts, Kubernetes architecture, and Kubernetes features. I will conclude the tutorial with the benefits and complexities associated with Kubernetes and some key points to keep in mind before introducing Kubernetes into your architecture.
Key Terms and Definitions
- Cloud Computing: On-demand availability of computing resources (CPU, memory, storage)/platforms/software.
- Containers, Docker: Covered in the previous tutorial in this series.
- Kubernetes: A container orchestration technology (explored in detail throughout the tutorial).
- Cluster: A group of interconnected computers that work together to perform a computational task.
- Node: A single computer in a cluster.
- Network Load Balancing: Distribution of incoming network traffic across multiple servers.
- RESTful APIs: Application Programming Interfaces (APIs) that follow the REST architectural style, typically over HTTP.
- Infrastructure as Code (IaC): The ability to provision and manage infrastructure (networks, virtual machines, load balancers, etc.) using a descriptive model that can be versioned, very similar to application code.
- YAML: A human-readable structured data format, commonly used for configuration files.
What Is Container Orchestration and Management?
Containerization enables packing, distributing, and easy porting of applications. As the number of applications and containers grows over time, the following key issues emerge:
- How do you manage application or container upgrades/patches without downtime?
- What do you do if a container stops running? Can your system self-heal?
- How do you monitor all your containers?
- Where and how do you place a newly spun container in your cluster?
- How do you automatically scale (up or down) your application by varying the number of running containers?
To overcome these challenges, a software-based strategy is required that can automatically create, update, remove, orchestrate, deploy, and scale containers. Several container orchestration and management tools are available in the market, such as Kubernetes, Apache Mesos, and Nomad.
In addition to providing container orchestration and management capabilities to overcome the above-mentioned challenges, these tools provide additional capabilities such as networking, load balancing, storage, monitoring, etc. In this tutorial, we will focus on Kubernetes.
What Is Kubernetes?
Kubernetes is the most widely used container orchestration and management platform. It is open source and vendor-agnostic, i.e., it can orchestrate containers from different vendors, including Docker. Kubernetes is flexible and can orchestrate and manage containers running on physical or virtual infrastructure (including the cloud).
As per the Kubernetes.io website:
“Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation.”
Let us see how Kubernetes works "under the hood" and learn the different components of a Kubernetes cluster.
Kubernetes Component Architecture
Every Kubernetes cluster comprises two main components:
- Control Plane, also called Kubernetes Master or simply Master,
- and one or more Kubernetes Nodes, or simply Nodes.
A node is a virtual or physical machine in the cluster, and it contains all the services required to run a pod. It is managed by the control plane. A node comprises the following components:
- Pods: A pod is a logical construct. To put it simply, a pod is a group of one or more containers with shared storage and network resources. Each pod has a pod specification that defines how the pod, including its containers, should run. The kubernetes.io website defines a pod as:
“Pods are the smallest deployable units of computing that you can create and manage in Kubernetes.”
- Container Runtime: It is the container engine that is responsible for running containers. Kubernetes supports several container engine providers: Docker, CRI-O, etc.
- Kubelet: An agent that runs on each node in the cluster. The kubelet's primary function is to ensure that the pods and their containers are running in the desired state, as per the pod specification. The kubelet also reports the health of its host node back to the control plane (the "Master").
- Kube-proxy: A network proxy running on each node. It forwards requests to the correct pods/containers in the cluster.
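The pod specification mentioned above is typically written in YAML. As a minimal sketch (the pod name, container name, and image below are illustrative, not from the original tutorial):

```yaml
# Minimal pod specification (illustrative names and image)
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
    - name: web
      image: nginx:1.21      # container image the kubelet will keep running
      ports:
        - containerPort: 80  # port the container listens on
```

The kubelet on the node where this pod is placed continuously compares the running containers against this specification and restarts them if they drift from it.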
Kubectl is a command-line interface (CLI) tool that you will use to send commands to the control plane. It lets you control Kubernetes clusters via the API server (explained later).
The Kubernetes API server exposes a set of RESTful APIs that enable you to control the cluster. Kubectl commands are internally converted into these REST API requests and sent to the API server.
The control plane is responsible for managing all the nodes in the cluster. It comprises the following components:
API Server
The API server acts as the gateway to the other components of the control plane. It exposes RESTful APIs, which help manage the cluster. Requests to the API server can be broadly categorized into two types.
- Retrieve Cluster Resources: These types of requests retrieve information about cluster components. Below is an example of a REST API call to retrieve information about a particular pod named pod1.
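The original code example did not survive; the sketch below shows what such a call looks like. It assumes `kubectl proxy` is serving the API locally on port 8001 and that `pod1` lives in the `default` namespace:

```shell
# Assumes `kubectl proxy` is exposing the API server on localhost:8001
# and that pod1 exists in the default namespace
curl http://localhost:8001/api/v1/namespaces/default/pods/pod1
```
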
We can also retrieve the same information using the kubectl command-line tool. Internally the kubectl command is converted into a REST API call. Below is an example of the kubectl command.
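A sketch of the equivalent kubectl command, which internally issues the same GET request to the API server:

```shell
# Retrieve information about the pod named pod1 (JSON output)
kubectl get pod pod1 -o json
```
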
- Change Cluster Resources: These types of requests make changes to cluster resources. Using these requests, you can create, update, or delete resources. Below is an example of a REST API call to create a pod.
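The original code example did not survive; a sketch of such a call, again assuming `kubectl proxy` on port 8001, is below. The file `pod1.json` is a hypothetical file containing a pod specification in JSON form:

```shell
# Assumes `kubectl proxy` on localhost:8001; pod1.json is a hypothetical
# file holding a pod specification in JSON form
curl -X POST http://localhost:8001/api/v1/namespaces/default/pods \
  -H "Content-Type: application/json" \
  -d @pod1.json
```
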
The below command will delete all pods.
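A sketch of that command; note that without further flags it acts only on pods in the current namespace:

```shell
# Delete all pods in the current namespace
kubectl delete pods --all
```
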
Etcd
Etcd is a simple distributed key-value store used to store the configuration of the Kubernetes cluster. It is accessible only through the API server. Etcd also stores the actual and desired state of the cluster objects.
Anything we read using a kubectl command or the APIs is fetched from etcd. Similarly, any update we make to the cluster results in an update to an entry in etcd.
Controller Manager
This component is responsible for running controller processes. Controller processes regulate the state of the cluster. They continuously monitor the cluster through the API server and work towards matching the current state to the desired state of the cluster. Let us understand this in more detail with an example. The Replication Controller (one type of controller) is responsible for ensuring that a specified number of pod replicas are running at any point in time.
Now let us assume that our desired cluster state is to have 3 web server pods running at any point in time. If one of these pods crashes, our current state changes to 2 web server pods. The Replication Controller detects this discrepancy between the current and desired state and automatically spins up a new web server pod.
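The desired state in this example could be declared with a ReplicationController specification along these lines (the name, labels, and image are illustrative):

```yaml
# Declares the desired state: 3 identical web server pods at all times
apiVersion: v1
kind: ReplicationController
metadata:
  name: webserver
spec:
  replicas: 3            # desired number of pod replicas
  selector:
    app: webserver       # pods matching this label are counted
  template:              # template used to spin up replacement pods
    metadata:
      labels:
        app: webserver
    spec:
      containers:
        - name: web
          image: nginx:1.21
```

If a pod with the label `app: webserver` disappears, the controller creates a new one from the template to bring the count back to 3.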
Similarly, there are other types of controllers regulating the state of respective objects, for example, Endpoint controller, Node controller, Job controller, etc.
Scheduler
The scheduler watches for pods that have not been assigned to a node and is responsible for scheduling each such pod onto the best-fit node. While making scheduling decisions, it takes various constraints into account, such as the pod's hardware or software requirements, data locality, affinity and anti-affinity specifications, etc.
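As one illustration of such a constraint, a pod specification can include a nodeSelector, which restricts the scheduler to nodes carrying a matching label (the names and label below are hypothetical):

```yaml
# Pod that may only be scheduled onto nodes labeled disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd        # scheduler considers only nodes with this label
  containers:
    - name: app
      image: nginx:1.21
```
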
Benefits and Challenges
As we can see, Kubernetes automates a considerable amount of the work required to deploy, orchestrate, and manage containerized applications, and hides the associated complexity. This reduces the resources required for deployment, monitoring, and incident management, and facilitates an increased velocity of continuous deployments.
Below are the key benefits of leveraging Kubernetes:
- Upgrades and patches can be rolled out without downtime.
- Self-healing: failed containers are automatically detected and replaced.
- Applications scale up or down automatically by varying the number of running containers.
- New containers are automatically placed on suitable nodes in the cluster.
- Networking, load balancing, storage, and monitoring capabilities are built in.
- Workloads remain portable across physical, virtual, and cloud infrastructure.
On the flip side, these benefits come at a cost. The key challenge in implementing Kubernetes is the operational complexity it introduces: it is a heavy-weight component, and running it well requires a solid understanding of its building blocks and ongoing investment in operating the platform itself.
Final Thoughts on Kubernetes
In this tutorial, we learned about the problems container orchestration and management platforms can solve, and how Kubernetes lives up to this task. Keeping the above benefits and challenges in mind, weigh them carefully before introducing a heavy-weight component like Kubernetes into your architecture.
To conclude, Kubernetes' benefits in the long run outweigh the associated complexity. With a good understanding of how its basic building blocks work together, you can begin designing systems that leverage the platform's capabilities to deploy, orchestrate, and manage your workloads. As you move along the journey, you can start experimenting with other capabilities and extensions.
This brings us to the end of this tutorial. Happy learning.
A big thanks to Narotam Puri and Sapan Khandwala for taking the time to review the draft version of this tutorial. Their valuable feedback has been instrumental in improving its quality. Keep up the good Karma!
Sources and Further Reading
- What is Kubernetes? | Kubernetes (kubernetes.io)
- Kubernetes Concepts and Architecture | Platform9
- Introduction to Kubernetes: Architecture, Benefits, and Problems | Instana
- Kubernetes Architecture | Aqua
© 2021 Manhar Puri