Kubernetes (K8s)

Kubernetes, also known as K8s, is an open-source platform for managing containerized workloads and services. It provides a way to deploy, scale, and manage containerized applications across a cluster of nodes. Kubernetes was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).


Kubernetes provides a set of powerful abstractions and APIs for managing containerized applications and their dependencies in a standardized and consistent way. It allows you to declaratively define your application's desired state in the form of a set of Kubernetes objects (such as pods, services, deployments, config maps, and many others), and then Kubernetes takes care of actually running and managing those objects on a cluster of machines.
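As a minimal sketch of this declarative model, the manifest below describes a Deployment that asks Kubernetes to keep three replicas of a containerized web server running. The names (web, nginx:1.25) are illustrative placeholders, not taken from any specific application.

```yaml
# deployment.yaml -- illustrative example; names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # desired state: three identical pods
  selector:
    matchLabels:
      app: web
  template:                  # pod template used to create each replica
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        ports:
        - containerPort: 80
```

Applying this manifest (for example with kubectl apply -f deployment.yaml) hands the desired state to Kubernetes, which then creates, replaces, or reschedules pods as needed to keep the actual state matching it.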

Why do you need Kubernetes, and what can it do?

Simplify container management: Kubernetes provides a unified API for managing
containers, making it easier to deploy and manage containerized applications
across multiple hosts or cloud providers.

Enhance resiliency: Kubernetes provides built-in fault tolerance and self-healing
capabilities, which can help keep applications running even in the face of
hardware or software failures.

Simplify application deployment: Kubernetes provides a consistent way to deploy
and manage containerized applications across different environments, such as
on-premises data centers or public cloud providers.

Improve scalability: Kubernetes makes it easy to scale containerized applications up or
down based on demand, ensuring that applications can handle spikes in traffic
without downtime or disruption.

Increase automation: Kubernetes automates many of the tasks involved in deploying and
managing containerized applications, such as rolling updates, scaling, and load balancing. This can help reduce the burden on operations teams and improve efficiency.

Provide flexibility: Kubernetes is highly configurable and extensible, allowing developers and operations teams to customize it to meet their specific needs. This includes support for different container runtimes, storage systems, and networking plugins.

Kubernetes allows you to choose the Container Runtime Interface (CRI), Container Network Interface (CNI), and Container Storage Interface (CSI) that you want to use with your cluster.

The CRI is a standardized interface between Kubernetes (specifically, the kubelet on each node) and the container runtime that is responsible for starting and stopping containers. The CRI abstracts away the details of the container runtime, allowing Kubernetes to work with any container runtime that implements it. This makes it possible to use different container runtimes on different nodes in the same cluster, or to switch to a different container runtime without having to modify your applications or infrastructure.
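One place this flexibility surfaces in the Kubernetes API is the RuntimeClass object, which lets individual pods opt into a specific runtime handler. The handler name runsc (the gVisor runtime) below is only an example of a runtime that might be installed on some nodes; this is a sketch, not a recommendation of a particular runtime.

```yaml
# Assumes a CRI runtime with a handler named "runsc" (gVisor) is installed on some nodes.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc               # maps to a handler configured in the node's CRI runtime
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # this pod is run by the "runsc" handler instead of the default
  containers:
  - name: app
    image: nginx:1.25
```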

The CNI is a standard for configuring network interfaces for Linux containers. Kubernetes uses a CNI plugin to configure the network interfaces for the containers running on your cluster. The CNI plugin is responsible for setting up the network namespace for the container, configuring the IP address and routing, and setting up any necessary network policies or security rules. By using a CNI plugin, Kubernetes makes it easy to switch between different networking solutions or to use multiple networking solutions in the same cluster.
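The network policies mentioned above are expressed as Kubernetes NetworkPolicy objects, which the installed CNI plugin is responsible for enforcing (assuming it supports them, as plugins such as Calico and Cilium do). A minimal sketch:

```yaml
# Allow inbound traffic to pods labeled app=web only on port 80 and only from pods labeled role=frontend.
# Enforcement depends on the CNI plugin installed in the cluster.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 80
```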

The CSI is a standard for exposing storage systems to container orchestrators like Kubernetes. Kubernetes uses a CSI driver to interact with the underlying storage system. The CSI driver is responsible for managing the lifecycle of the storage volumes used by your applications, including creating, deleting, and resizing volumes. By using a CSI driver, Kubernetes makes it easy to use a wide range of storage systems with your applications, including cloud-based storage solutions, on-premises storage systems, and specialized storage solutions for specific use cases.
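For example, a StorageClass can point at a CSI driver, and a PersistentVolumeClaim against that class asks the driver to provision a volume on demand. The provisioner name ebs.csi.aws.com (the AWS EBS CSI driver) and the gp3 parameter are illustrative choices; any installed CSI driver works the same way.

```yaml
# StorageClass backed by a CSI driver (AWS EBS shown as an example provisioner).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
allowVolumeExpansion: true    # lets the CSI driver resize volumes later
---
# Claim that asks the CSI driver to provision a 20Gi volume from that class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```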

Container orchestration is the process of managing, deploying, and scaling containers in a distributed environment. It involves automating the deployment and management of containerized applications across a cluster of hosts, and ensuring that the containers are running as expected. Container orchestration systems typically provide features such as container scheduling, load balancing, service discovery, health monitoring, and automated scaling based on demand. Today, Kubernetes is the most popular container orchestration platform used globally.
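As one concrete example of automated scaling based on demand, a HorizontalPodAutoscaler tells Kubernetes to add or remove replicas of a workload based on observed CPU usage. The target name web matches the illustrative Deployment shown earlier, and this sketch assumes a metrics source such as metrics-server is installed in the cluster.

```yaml
# Scale the "web" Deployment between 2 and 10 replicas, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```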

A K8s cluster is a set of nodes that work together to run containerized applications. The nodes can be virtual or physical machines, and they typically run Linux as the operating system. The cluster consists of two main types of nodes: control plane nodes, which make scheduling decisions and maintain the cluster's desired state, and worker nodes, which run the application containers themselves.
