AWS Partner Network (APN) Blog
Benefits of Running Virtual Machines on Red Hat OpenShift for AWS Customers
By Mehdi Salehi, Sr. Partner Solutions Architect – AWS
By Boris Jelic, Distinguished Engineer & APAC CTO, Hybrid Cloud Services – IBM Consulting
By Suresh Eswaran, Distinguished Engineer & ASEAN CTO, Hybrid Cloud Services – IBM Consulting
Over the last couple of decades, x86 virtualization has become a fundamental technology in data centers, enabling organizations to build agile, scalable, and efficient IT infrastructures. This technology has helped organizations consolidate multiple physical servers into a single server, reducing hardware costs, power consumption, and data center footprint.
However, given the rise of containerization and cloud computing, some experts question the continued relevance of virtual machines (VMs). Nevertheless, VMs remain a crucial part of many IT environments because of the long-term, substantial investments organizations have made in them.
In this post, you will learn how running VMs on top of OpenShift Container Platform (OCP) on Amazon Web Services (AWS) offers several benefits, including integrated management, simpler migration and modernization, and improved developer productivity.
IBM Consulting is an AWS Premier Tier Services Partner and is recognized as a Global Systems Integrator (GSI) for Red Hat in APAC, which positions IBM Consulting to help customers who use AWS to harness the power of innovation and drive their business transformation.
OpenShift Virtualization
Red Hat OpenShift Virtualization is an enterprise-grade feature that makes it easy to deploy and manage VMs on OpenShift. This capability is underpinned by the open-source KubeVirt project, which leverages the Kernel-based Virtual Machine (KVM) hypervisor to run VMs, giving them access to standard Kubernetes networking and storage.
OpenShift Virtualization is available as part of the OpenShift subscription, and Red Hat provides enterprise-grade support and additional features beyond what is available in the open-source KubeVirt project.
Figure 1 depicts the architecture of OpenShift Virtualization. Components shown in blue are part of the OpenShift Virtualization operator.
Figure 1 – Architecture components of OpenShift Virtualization.
- virt-controller: Cluster-level component responsible for cluster-wide virtualization functionality and for managing the lifecycle of the pods associated with VMs. It creates the pod in which the virtual machine object runs.
- virt-handler: Host-level DaemonSet resource running on each worker node that monitors for changes to a VM object and brings it back into the required state.
- virt-launcher: The primary container in a VM-associated pod runs the virt-launcher component, which provides the control groups (cgroups) and namespaces that host the VM process. When the virt-handler component passes the VM object to virt-launcher, the virt-launcher component uses its container-local libvirtd instance to start the virtual machine and then monitors the VM process until it exits.
- libvirtd: Each virtual machine pod contains a libvirtd instance that the virt-launcher component uses to manage the lifecycle of the VM process.
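If the OpenShift Virtualization Operator has already been installed (see Step 1 below), these components can be observed directly. The commands below are a minimal sketch, assuming the default openshift-cnv namespace and the standard kubevirt.io labels:

```bash
# Control-plane components such as virt-controller and virt-handler run as pods
# in the openshift-cnv namespace
oc get pods -n openshift-cnv

# Each running VM gets its own virt-launcher pod in the VM's namespace
oc get pods -l kubevirt.io=virt-launcher --all-namespaces
```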
Comparing OpenShift and Red Hat Virtualization
Red Hat Virtualization (RHV) is an alternative to VMware vSphere that provides a full-stack virtualization platform and management console. OpenShift Virtualization, on the other hand, is an add-on feature of Red Hat OpenShift that enables running VMs on top of Kubernetes using the KubeVirt project.
According to the Red Hat Virtualization lifecycle, the RHV management feature set will be converged with OpenShift Virtualization, giving customers who have requirements for both containers and VMs a migration path and a common platform for deploying and managing both.
Main Use Cases of Running VMs Alongside Pods in Kubernetes
Support Legacy Applications
Many mission-critical applications were built for older operating systems that are not supported on cloud platforms and cannot be easily containerized. It’s impractical to rewrite and modernize all applications into containers.
By running these VMs on top of OpenShift, however, the applications they host can be integrated with containerized workloads on the same platform. This allows customers to take advantage of the benefits of cloud-native architectures without sacrificing the functionality of their VMs.
Lift-and-Shift Migration
For many customers, managing the hardware infrastructure is not a differentiator in their business. As a result, migrating the VMs to a cloud platform can be a wise decision.
However, it’s worth noting that in certain cases, older or legacy versions of operating systems may not be supported when setting up a comparable virtual machine on Amazon Elastic Compute Cloud (Amazon EC2).
With OpenShift Virtualization, customers can take advantage of a swift lift-and-shift method to move to a consolidated platform without the need for significant code refactoring.
Run Windows VMs Alongside Containers
OpenShift Virtualization leverages the Kernel-based Virtual Machine (KVM) hypervisor to run VMs and allows you to run a variety of operating systems, including Windows.
However, note that running Windows on OpenShift Virtualization may require additional licenses and compliance with Microsoft licensing policies.
Accelerate Application Delivery with a Single Platform
Developers can quickly and easily create VMs for testing and development, reducing the time and cost of the development process. As a result, all applications, including VMs and containers, can benefit from a unified DevOps pipeline.
The OpenShift platform simplifies this challenge by providing a set of unified developer capabilities across VMs and containers. This reduces the number of tools and runtimes developers need to use to successfully build and deploy enterprise-grade software.
In-Place Modernization
In-place application modernization using OpenShift Virtualization involves wrapping legacy VM-based applications in a pod, deploying them on OpenShift, and then gradually modernizing the application services: implementing CI/CD pipelines, a service mesh, monitoring and logging, scalability and resiliency, and continuous improvement.
This approach allows organizations to modernize their existing applications in a gradual and iterative manner, leveraging the power of containerization and container orchestration provided by OpenShift.
Configuring OpenShift Virtualization on AWS
This section illustrates how to configure the virtualization feature on OpenShift Container Platform. As a prerequisite for the following steps, we have already created an OpenShift cluster on AWS; the platform is available in AWS Marketplace.
Step 1: Install the OpenShift Virtualization Operator
OperatorHub is the web console interface in OpenShift that cluster administrators use to discover and install operators. When you install the OpenShift Virtualization operator, it creates custom resources that add new objects to the cluster, which enable the execution of virtualization tasks.
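The operator can be installed through the web console as described above. As an alternative, a CLI-based install uses the standard Operator Lifecycle Manager objects; the manifest below is a minimal sketch, and the channel and operator names should be verified against the Red Hat documentation for your OpenShift version:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: stable            # channel name may differ by OpenShift version
```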
The next step is to create a “HyperConverged” object. OpenShift Hyperconverged Infrastructure (HCI) enables the creation of a highly scalable and distributed infrastructure for running containerized workloads.
Figure 2 – Installing HyperConverged.
The OpenShift Virtualization operator adds several custom resources, such as “kubevirt” and “hyperconverged”, in the openshift-cnv namespace. Below are a couple of examples:
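For instance, the following commands (assuming the default openshift-cnv namespace) list some of the resources created by the operator:

```bash
# Custom resource definitions added by the OpenShift Virtualization operator
oc get crd | grep kubevirt.io

# The kubevirt and hyperconverged custom resources in the openshift-cnv namespace
oc get kubevirt -n openshift-cnv
oc get hyperconverged -n openshift-cnv
```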
Once installed, OpenShift Virtualization creates a new section in the OpenShift web console, which can be viewed in the left pane of the console after refreshing the page.
Step 2: Create a Virtual Machine
To create a VM from the OpenShift web console, select Virtualization from the left pane, click on Virtual Machines, and create a VM. It’s also possible to create VMs programmatically using a YAML definition file, as described in Step 5.
The OpenShift console provides access to multiple pre-configured templates to run VMs. You can also create custom templates by following the KubeVirt documentation.
As shown in Figure 3, we have used a predefined template to install Red Hat Enterprise Linux 9 (rhel9). The console allows you to customize the configuration, such as the number of vCPUs, memory, and disks of the VM.
Figure 3 – Create a virtual machine on OpenShift.
Step 3: Verify the Virtual Machine
In the previous step, we sent an API request to OpenShift, which should have created a “VirtualMachine” object based on the “virtualmachines.kubevirt.io” custom resource definition.
Next, let’s verify if the VM state is running successfully.
Now, let’s get additional details to determine why the VM cannot be scheduled.
As highlighted in the command result above, none of the existing nodes in the cluster can host a VM because the Amazon EC2 instances behind the cluster worker nodes do not support virtualization.
The following command lists the cluster nodes, filters out the master and infra nodes, and prints the underlying EC2 instance type(s) of the worker nodes.
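A sketch of such a command, assuming the standard node-role and instance-type labels, could be:

```bash
# Worker nodes (excluding master and infra roles) and their EC2 instance types
oc get nodes \
  -l 'node-role.kubernetes.io/worker,!node-role.kubernetes.io/master,!node-role.kubernetes.io/infra' \
  -L node.kubernetes.io/instance-type
```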
m6i.2xlarge is not a bare metal instance, which is why the VM failed to be scheduled.
Step 4: Add Bare Metal Instances to the Cluster
At the time of writing, OpenShift requires bare metal worker nodes to be able to host VMs. We need to create a machine pool with an instance type that supports virtualization. There are several supported bare metal instance types from which to choose.
For this demo, c5n.metal is a suitable option due to its relatively low hourly cost. The details of the EC2 instance types and their hourly cost can be found in the Instance Types section of the EC2 console.
There are several ways to create a MachineSet in OpenShift. In this case, we’ll copy the specification of an existing MachineSet and modify its instance type to c5n.metal as follows.
We need to change the name, instanceType, and the number of replicas, and then apply the YAML file, as shown below.
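A minimal sketch of this workflow, with a placeholder for the existing MachineSet name, looks like the following:

```bash
# List the existing MachineSets and export one as a starting point
oc get machineset -n openshift-machine-api
oc get machineset <existing-machineset-name> -n openshift-machine-api -o yaml > metal-machineset.yaml

# In metal-machineset.yaml: set a new .metadata.name (and matching selector/template labels),
# change .spec.template.spec.providerSpec.value.instanceType to c5n.metal,
# and set .spec.replicas to the desired number of bare metal nodes

oc apply -f metal-machineset.yaml
```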
Depending on the instance family, it may take a few minutes until a bare metal node becomes available in the cluster.
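Progress can be followed from the CLI, for example:

```bash
# Watch the new machine being provisioned by the machine API
oc get machines -n openshift-machine-api -w

# Confirm the bare metal node has joined the cluster and is Ready
oc get nodes -l node.kubernetes.io/instance-type=c5n.metal
```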
The bare metal node is ready, and we are now able to recreate the VM.
Depending on the customer’s use case, there are different ways (explained here) to access the VM and get a terminal. Figure 4 shows a virtual terminal which is accessible from the OpenShift console.
Figure 4 – Remote console to the VM.
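One of those options is the virtctl command-line tool, which is installed separately from the oc CLI. The commands below are a sketch; the guest user name is illustrative and assumes an SSH-enabled guest:

```bash
# Open the serial console of the VM (exit with Ctrl+])
virtctl console rhel9

# Or open an SSH session to the guest
virtctl ssh cloud-user@rhel9
```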
In this example, we have installed a Linux guest operating system. Like any standard virtualization platform, OpenShift Virtualization supports non-Linux operating systems, such as Windows or others supported by the hardware architecture.
Step 5: Create VMs Programmatically
Customers rarely use the graphical console to create OpenShift resources. Similar to pods, deployments, and other resources, VMs can be created from configuration files and scripts as well.
This allows customers to manage VMs consistently through infrastructure as code (IaC). For more information, visit the OpenShift documentation or the upstream KubeVirt user guide.
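As a minimal sketch, a VirtualMachine manifest that boots from a container disk might look like the following; the name, sizing, and image reference are illustrative rather than taken from this walkthrough:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    metadata:
      labels:
        kubevirt.io/domain: demo-vm
    spec:
      domain:
        cpu:
          cores: 2
        resources:
          requests:
            memory: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest   # illustrative container disk image
```

Applying the file with oc apply -f creates and starts the VM like any other Kubernetes object.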
Step 6: Interaction of VMs with Pods and AWS
Here, we’ll demonstrate a few scenarios to show how VMs can communicate with the other pods, as well as AWS resources.
Virtual Machine Networking
Each VM is assigned an IP address from the cluster’s pod network, the same range used by ordinary pods.
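This can be verified with the oc CLI, for example:

```bash
# The VirtualMachineInstance status reports the IP assigned from the pod network
oc get vmi -o wide

# Compare with the IP addresses of ordinary pods in the same namespace
oc get pods -o wide
```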
Access an OpenShift Service from the VM
The IP range of the service network in this cluster is 172.30.0.0/16.
As such, the following NGINX service has been assigned an IP address from the same range.
As seen below, the VM can reach the service network successfully.
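A sketch of this check, assuming an NGINX deployment exposed through a ClusterIP service named nginx in the default namespace:

```bash
# From a workstation: confirm the service has a ClusterIP in the 172.30.0.0/16 range
oc get svc nginx

# From a shell inside the VM: call the service by ClusterIP or, if the guest
# uses the cluster DNS servers provided via DHCP, by its DNS name
curl http://<nginx-cluster-ip>
curl http://nginx.default.svc.cluster.local
```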
Add an Amazon EBS Volume to the VM
Amazon Elastic Block Store (Amazon EBS) volumes can be easily provisioned and attached to VMs. In the next step, we’ll assign a 400 GB gp3 volume to the VM using the OpenShift web console.
Figure 5 – Adding block storage to the VM.
As shown below, a new Amazon EBS-based block device has become available in the VM after a few seconds.
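From a shell inside the guest, the new disk can be listed with standard tools; device names vary by guest configuration:

```bash
# The 400 GB EBS-backed volume appears as an additional virtio block device
lsblk
```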
Expose a Virtual Machine as a Kubernetes Service
Another interesting scenario is exposing a VM as a service. Currently, three types of services are supported: ClusterIP, NodePort, and LoadBalancer. The default type is ClusterIP.
Here, “centos” is a VM that has been configured as a web server serving a simple message, and “pod1” is an arbitrary pod used to connect to and test the web server running on the VM.
A Kubernetes service needs a label to select its target. Let’s use the default labels of the VM.
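For example (the VM name centos comes from this scenario):

```bash
# Show the labels carried by the running VM instance
oc get vmi centos --show-labels
```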
Now, create a service to forward the traffic to the VM on port 80:
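The manifest below is a minimal sketch, assuming the VM instance carries the label kubevirt.io/domain: centos, which is propagated to its virt-launcher pod. With type LoadBalancer, OpenShift provisions an AWS Elastic Load Balancer for the service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: centos-web
spec:
  type: LoadBalancer
  selector:
    kubevirt.io/domain: centos   # label on the virt-launcher pod backing the VM
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
```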
That’s it! We have successfully created an Elastic Load Balancer in front of a VM that is running on OpenShift.
Conclusion
In this post, we have shown that OpenShift Virtualization offers multiple advantages to customers. It allows you to operate virtual machines and containers on a single platform, simplifying the process of migration to the cloud and facilitating a seamless transition to a cloud-native architecture.
If you wish to explore this topic further, please contact your representatives from IBM Consulting or AWS.
IBM – AWS Partner Spotlight
IBM Consulting is an AWS Premier Tier Services Partner and is recognized as a Global Systems Integrator (GSI) for Red Hat in APAC, which positions IBM Consulting to help customers who use AWS to harness the power of innovation and drive their business transformation.