What is a Kubernetes cluster?
A Kubernetes (K8s) cluster is a group of computing nodes, or worker machines, that run containerized applications. Containerization is a software deployment and runtime process that bundles an application’s code with all the files and libraries it needs to run on any infrastructure. Kubernetes is open source container orchestration software that you can use to manage, coordinate, and schedule containers at scale. Kubernetes places containers into pods and runs them on nodes. A Kubernetes cluster has, at a minimum, a control plane that manages the cluster and a worker node that runs container pods. When you deploy Kubernetes, you are essentially running a Kubernetes cluster.
What are Kubernetes fundamentals?
To understand a Kubernetes cluster, you first need to understand the fundamentals of containerization with Kubernetes.
A container is a single application or microservice packaged together with its dependencies, so that it runs as a self-contained environment. Modern applications have adopted a distributed microservices architecture, in which an application comprises hundreds or even thousands of discrete software components that run independently. Each component (or microservice) performs a single independent function, which enhances code modularity. By creating independent containers for each service, applications can be deployed and distributed across any number of machines. You can then scale individual microservice workloads and computation capabilities up or down to maximize application efficiency.
Kubernetes is open source container orchestration software that simplifies the management of containers at scale. It can schedule, run, start up and shut down containers, and automate management functions. Developers get the benefits of containerization at scale without the administration overheads.
Next, let’s look at some core Kubernetes concepts.
A pod is the standard deployable unit under Kubernetes. Pods contain one or more containers and, within the pod, containers share the same system resources such as storage and networking. Each pod gets a unique IP address.
Containers within a pod are not isolated. Think of a pod as similar to a virtual machine (VM), with containers similar to applications running on the VM. Pods and groups of pods can be organized by attaching attribute labels to them, such as labeling ‘dev’ or ‘prod’ for the type of environment.
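As a sketch, a minimal pod manifest might look like the following. The pod name, label, and container image are illustrative assumptions, not taken from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical pod name
  labels:
    env: dev               # attribute label, e.g. the type of environment
spec:
  containers:
    - name: web            # containers in this pod share networking and storage
      image: nginx:1.25    # illustrative container image
      ports:
        - containerPort: 80
```

You would typically apply a manifest like this with `kubectl apply -f pod.yaml`; Kubernetes then schedules the pod onto a node and assigns it a unique IP address.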
A node is a machine that runs pods. It can be a physical or virtual server, such as an Amazon EC2 instance. The components on a node include:
- Kubelet for node and container management
- Kube-proxy for a network proxy
- Container runtime
A compatible container runtime must be installed on the node to run containers. Kubernetes supports several container runtimes that implement its Container Runtime Interface (CRI), such as containerd and CRI-O.
Replica set and deployment
A pod is a standalone artifact, and when its node goes down, it does not automatically restart. In Kubernetes, you can group pods into a ReplicaSet, which ensures that a specified number of identical pods is always running across nodes. This is critical for scaling up and down and for ensuring the continuity of apps and services.
A deployment is the Kubernetes management object for deploying an application, as well as updating or rolling back the app without taking it offline.
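A minimal deployment manifest, sketched below with illustrative names and an assumed image, shows how a deployment manages a ReplicaSet that keeps a set number of pods running:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-deployment   # hypothetical name
spec:
  replicas: 3                # the managed ReplicaSet keeps 3 pod replicas running
  selector:
    matchLabels:
      app: example
  template:                  # pod template used to create each replica
    metadata:
      labels:
        app: example
    spec:
      containers:
        - name: web
          image: nginx:1.25  # illustrative image; updating this tag triggers a rolling update
```

Changing the image tag and re-applying the manifest performs a rolling update without taking the app offline; `kubectl rollout undo deployment/example-deployment` rolls it back.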
Service and ingress
Use a Kubernetes service to expose a pod or group of pods on the network, through an endpoint, for interactivity that follows standard network communication rules. For public internet traffic access, a Kubernetes ingress is attached to a service, which then links to a pod or pods.
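As a sketch, a service and an ingress that exposes it might be declared as follows; the names, host, and ports are illustrative assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  selector:
    app: example              # routes traffic to pods carrying this label
  ports:
    - port: 80                # port exposed by the service endpoint
      targetPort: 80          # port the pod's container listens on
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com       # hypothetical public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service   # the ingress forwards traffic to the service
                port:
                  number: 80
```

The ingress accepts public internet traffic for the hostname and forwards it to the service, which in turn load-balances across the matching pods.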
What are the Kubernetes cluster components?
A Kubernetes cluster is a group of one or more nodes with running pods. Within the cluster, the Kubernetes control plane manages nodes and pods.
Control plane components include:
- Kubernetes API server (kube-apiserver) that manages communications within and to the cluster
- Storage (etcd) to record the cluster’s persistent state
- Scheduler (kube-scheduler) to assign newly created pods to suitable nodes
Other components include a controller manager for node and job control (kube-controller-manager), and a cloud controller manager for integration with provider-specific public cloud infrastructure (cloud-controller-manager).
Container storage is ephemeral, so applications need a way to store data that persists. Pods may also require access to shared data. Persistent volumes can be added to a cluster as storage and are referenced within the cluster much like a node is.
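A pod typically requests persistent storage through a persistent volume claim, sketched below with illustrative names and sizes:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc          # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce          # volume mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi           # illustrative capacity request
---
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html   # where the volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: example-pvc               # binds the pod to the claim above
```

The claim is matched to an available persistent volume (or one is provisioned dynamically), so the data outlives any individual container in the pod.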
How do developers work with the Kubernetes cluster?
Developers must first download and install Kubernetes on a master node and its worker nodes. They can then deploy the cluster on physical or virtual machines, locally, in a data center, or in the cloud.
For a simple start with Linux virtual machines, on your chosen master node (virtual machine), first install:
- Docker or any other containerization software.
- Repository key and code repository of Kubernetes.
- Package kubeadm for cluster bootstrapping.
- Package kubelet for node coordination.
- Package kubectl for the cluster command line.
Repeat this installation process on each of the other designated worker nodes.
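The installation steps above can be sketched as shell commands for a Debian/Ubuntu virtual machine. This is a hedged outline, not a definitive procedure: the Kubernetes version in the repository path and the choice of containerd as the runtime are assumptions, and the exact steps vary by distribution and Kubernetes release.

```shell
# Run as root on the master node, then repeat on each worker node.

# 1. Install a container runtime (containerd shown here; Docker also works)
apt-get update && apt-get install -y containerd apt-transport-https ca-certificates curl gpg

# 2. Add the Kubernetes repository signing key and package repository
#    (the v1.30 path is illustrative; use your target minor version)
mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.30/deb/Release.key \
  | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo "deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.30/deb/ /" \
  > /etc/apt/sources.list.d/kubernetes.list

# 3. Install kubeadm (cluster bootstrapping), kubelet (node coordination),
#    and kubectl (the cluster command line)
apt-get update && apt-get install -y kubeadm kubelet kubectl
```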
To initialize a cluster, run the kubeadm init command on the master node. You must add a kube config file and deploy pod networking, typically with a YAML file, before the cluster is ready for work. The kubeadm init command outputs a join command, which can be copied and pasted into the other virtual machine worker nodes’ command lines. This allows each worker node to join the cluster.
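The initialization flow described above might look like the following sketch. The pod network CIDR and the choice of Flannel for pod networking are illustrative assumptions; any CNI plugin deployed from a YAML manifest works the same way, and the join token values come from the actual `kubeadm init` output:

```shell
# On the master node: initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16   # CIDR value is illustrative

# Add the kube config file for your user so kubectl can reach the cluster
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy pod networking from a YAML manifest (Flannel shown as one example CNI)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker node: paste the join command that kubeadm init printed, e.g.
# sudo kubeadm join <master-ip>:6443 --token <token> \
#   --discovery-token-ca-cert-hash sha256:<hash>
```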
Working with Kubernetes
With the Kubernetes UI dashboard, you can create and deploy applications on the cluster. To access the dashboard, run the kubectl proxy command on the master machine. The UI will then be available at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/.
What is Kubernetes cluster management?
Kubernetes cluster management is the term for managing multiple Kubernetes clusters at scale. As an example, consider a development environment—the team may require test, development, and production clusters that each run across multiple distributed on-site and cloud-based physical and virtual machines.
To manage multiple different types of clusters together, you need to be able to perform cluster operations such as creation and destruction, in-situ updates, maintenance, reconfiguration, security, cluster data reporting, and so on. Multi-cluster management can be achieved through a combination of Kubernetes services, specialized tools, configurations, and best practices.
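At the command-line level, kubectl already supports working across multiple clusters through contexts in your kubeconfig file. The context names below are illustrative assumptions:

```shell
# List the clusters/contexts defined in your kubeconfig
kubectl config get-contexts

# Switch the active cluster (context names are hypothetical)
kubectl config use-context prod-cluster

# Run a one-off command against a specific cluster without switching contexts
kubectl --context dev-cluster get nodes
```

Dedicated multi-cluster tools build on this same model to automate creation, updates, and reporting across fleets of clusters.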
How can AWS help with your Kubernetes cluster requirements?
AWS provides cloud services to configure, run, and manage your Kubernetes clusters:
- Amazon Elastic Compute Cloud (EC2) helps you provision and run Kubernetes on your choice of instance types.
- Amazon Elastic Kubernetes Service (EKS) helps you start, run, and scale Kubernetes—without needing to provision or manage master instances with a control plane and etcd. EKS comes with cluster management tools and useful integrations with AWS networking and security services.
Get started with Kubernetes clusters on AWS by creating a free account today.
Next Steps on AWS
Instantly get access to the AWS Free Tier.
Get started building in the AWS Management Console.