AWS Partner Network (APN) Blog

Application Modernization Using Microservices Architecture with VMware Cloud on AWS

By Sheng Chen, Sr. Specialist Solutions Architect at AWS

VMware Cloud on AWS provides an elastic and scalable solution for customers to deploy VMware’s Software-Defined Data Center (SDDC) and consume vSphere workloads on the Amazon Web Services (AWS) global infrastructure.

With integrated access to 200+ AWS native services, VMware Cloud on AWS helps customers accelerate the application modernization journey with minimal disruption to their business.

Specifically, customers can start transforming their applications and moving towards a microservices architecture by utilizing VMware Cloud on AWS and its unique capabilities for integrating with AWS services.

Microservices architecture provides the following benefits to modern application development:

  • Large projects are broken into smaller services, resulting in enhanced development agility and fast and frequent deployment cycles.
  • Flexible choice of technology for each microservice eliminates vendor or technology lock-in.
  • Service independence increases an application’s resilience to failure through improved fault isolation.
  • Microservices allow each service to be independently scaled for improved resource efficiency and cost optimization.
  • Flexibility to set up a continuous integration and continuous delivery (CI/CD) pipeline for each microservice, making it easier to try out new ideas and accelerate time-to-market for new features.

In this post, I will discuss architectural considerations and best practices for integrating VMware Cloud on AWS with Amazon Elastic Kubernetes Service (Amazon EKS), in order for customers to manage container workloads and deploy microservices-based applications.

I will also cover use cases for leveraging AWS DevOps tools and other cloud-native services to accelerate a microservices-oriented Software Development Life Cycle (SDLC) process and further optimize applications deployment.

Integrating Amazon EKS with VMware Cloud on AWS

Kubernetes is an open-source container orchestration platform for automating microservices deployment and management. Amazon EKS is a managed service for customers to run Kubernetes clusters on AWS without the complexity of installing and operating their own Kubernetes control plane or nodes.

Amazon EKS is a scalable and reliable cloud service that runs upstream Kubernetes and is certified Kubernetes-conformant by the Cloud Native Computing Foundation (CNCF).

VMware Cloud on AWS offers access to native AWS services through the customer’s connected virtual private cloud (VPC) over a high-bandwidth, low-latency Elastic Network Interface (ENI). After provisioning a VMware Cloud on AWS environment, customers can easily integrate EKS with their SDDC clusters and begin their microservices development by following these steps:

  1. Perform a lift-and-shift migration of the customer’s applications, including existing database workloads, from the on-premises vSphere environment to VMware Cloud on AWS by leveraging tools such as VMware Hybrid Cloud Extension (HCX), with minimal or no downtime.
  2. Deploy one or more fully managed EKS clusters in the connected VPC, matched to the customer’s specific environments (dev/test/production); a sample cluster configuration follows Figure 1.
  3. Refactor and containerize legacy systems in the dev/test EKS clusters with minimal disruption, while keeping the existing database tier running on VMware Cloud on AWS to avoid the complexity and delay of a database migration.
  4. The dev team can then begin rearchitecting and transforming their applications by leveraging EKS to manage and automate the testing and deployment of container workloads, while still connecting to the existing database workloads on VMware Cloud on AWS via the ENI (see Figure 1).

Figure 1 – Integrating Amazon EKS with VMware Cloud on AWS.
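
As a hedged illustration of step 2, the eksctl configuration below sketches how a dev/test EKS cluster could be provisioned into private subnets of the connected VPC, so worker nodes sit close to the SDDC-facing ENI. The cluster name, region, VPC and subnet IDs, and node group sizing are placeholders, not values from this solution.

```yaml
# Hypothetical eksctl ClusterConfig: deploy a dev/test EKS cluster into the
# connected VPC. All IDs and names below are placeholders.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: dev-microservices        # placeholder cluster name
  region: us-west-2              # use the region of your SDDC's connected VPC

vpc:
  id: vpc-0123456789abcdef0      # the connected VPC (placeholder ID)
  subnets:
    private:
      us-west-2a: { id: subnet-0aaaaaaaaaaaaaaaa }   # placeholder subnet IDs
      us-west-2b: { id: subnet-0bbbbbbbbbbbbbbbb }

managedNodeGroups:
  - name: dev-workers
    instanceType: m5.large
    desiredCapacity: 3
    privateNetworking: true      # keep worker nodes in the private subnets
```

Running `eksctl create cluster -f cluster.yaml` against a file like this would provision the control plane and a managed node group; production clusters would typically add separate node groups, logging, and IAM settings.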

Connecting EKS Pods to Database VMs Running on VMware Cloud on AWS

After a lift-and-shift migration of existing database virtual machines (VMs) into VMware Cloud on AWS, customers have the option to replicate production database workloads to dev/test environments by leveraging the vSphere clone feature.

Next, customers can easily connect their microservices deployed on Amazon EKS clusters (within the connected VPC) to the database workloads running on the SDDC via the high-bandwidth, low-latency ENI. Specifically, you can leverage standard Kubernetes environment variables to pass the database workload properties (such as IP addresses) to the Kubernetes Pods running on the EKS clusters (see Figure 2 and the sample manifest that follows it).

Furthermore, the NSX firewall capabilities built into the Compute Gateway can also be utilized to secure and limit access to the database VMs.

Figure 2 – Connecting EKS Pods to database VMs running on VMware Cloud on AWS.
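
As a minimal sketch of this pattern, the Deployment below passes the database VM’s address and port to the application containers through standard Kubernetes environment variables. The service name, container image, database IP, and credentials Secret are hypothetical examples, not values from this solution.

```yaml
# Hypothetical Deployment: the app Pods reach a database VM on the SDDC
# through environment variables; the image, IP, and Secret are placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders
          image: 123456789012.dkr.ecr.us-west-2.amazonaws.com/orders:latest  # placeholder ECR image
          env:
            - name: DB_HOST
              value: "10.10.20.30"   # IP of the database VM on VMware Cloud on AWS (placeholder)
            - name: DB_PORT
              value: "5432"
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:        # keep credentials in a Kubernetes Secret
                  name: orders-db-credentials
                  key: password
```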

AWS App2Container (A2C)

For enterprises running existing Java and .NET applications, the AWS App2Container (A2C) command-line tool can rapidly containerize legacy systems into modern applications with minimal disruption.

The A2C tool automatically discovers existing applications and identifies their dependencies, then packages the application artifacts and dependencies into container images. The tool also generates the relevant artifacts for seamless deployment to EKS clusters, which significantly simplifies container workload deployment and increases operational efficiency.

Publishing Microservices Running on Amazon EKS

Once customers have transformed their applications into containers, they’ll need mechanisms to publish microservices-based applications externally for end-user testing or production deployment. Amazon EKS provides a few different options to securely and efficiently expose microservices to external networks.

First, EKS supports the standard Kubernetes Service type LoadBalancer, a logical abstraction used to expose a microservice running on a set of Pods, which are the basic deployable objects containing one or more containers. Customers can use the Network Load Balancer (instance or IP targets) or the Classic Load Balancer (instance targets only) to expose a TCP/UDP-based microservice and route incoming traffic to the backend Pods.
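
For example, a Service manifest along the following lines provisions a Network Load Balancer in front of the Pods. The Service name, selector, and ports are illustrative assumptions, and the exact annotations depend on which load balancer controller manages the Service.

```yaml
# Hypothetical Service: exposes the orders-service Pods through an AWS
# Network Load Balancer; names and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: orders-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"   # request an NLB instead of a Classic Load Balancer
spec:
  type: LoadBalancer
  selector:
    app: orders-service
  ports:
    - port: 80          # port exposed by the load balancer
      targetPort: 8080  # container port (placeholder)
      protocol: TCP
```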

Second, EKS supports the Kubernetes Ingress object for exposing HTTP/HTTPS-based microservices with advanced features such as SSL termination, virtual hosts, URL rewriting, and authentication.

When an Ingress controller is deployed in conjunction with a Network Load Balancer, you have the flexibility to enable host-based or path-based routing, which significantly reduces the number of additional Network Load Balancers and Elastic IP addresses (EIPs) consumed and increases operational efficiency.

Figure 3 – Publishing microservices through Kubernetes Ingress via Network Load Balancer.
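
The sketch below shows host-based and path-based routing through a single Ingress resource. The ingress class, host names, and backend Service names are assumptions that depend on which ingress controller you deploy behind the Network Load Balancer.

```yaml
# Hypothetical Ingress: routes two host names (and a path) to different
# microservices through one ingress controller fronted by a single NLB.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-ingress
spec:
  ingressClassName: nginx          # assumes an NGINX ingress controller behind the NLB
  rules:
    - host: orders.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
    - host: catalog.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: catalog-service
                port:
                  number: 80
```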

Third, Amazon EKS works well with the AWS Load Balancer Controller (formerly the Application Load Balancer Ingress Controller, or ALB Ingress Controller), which adds features such as WebSockets, HTTP/2, and native integration with AWS WAF.
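
With the AWS Load Balancer Controller installed, an Ingress can instead provision an Application Load Balancer and associate an AWS WAF web ACL, as sketched below. The web ACL ARN, host name, and backend Service are placeholders for illustration only.

```yaml
# Hypothetical Ingress managed by the AWS Load Balancer Controller:
# provisions an internet-facing ALB and attaches a WAF web ACL (placeholder ARN).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: storefront-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip        # send traffic directly to Pod IPs
    alb.ingress.kubernetes.io/wafv2-acl-arn: arn:aws:wafv2:us-west-2:123456789012:regional/webacl/storefront/EXAMPLE
spec:
  ingressClassName: alb
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: storefront-service
                port:
                  number: 80
```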

Last but not least, customers can leverage Amazon Route 53 to route incoming DNS requests and distribute traffic to the Network Load Balancer or Application Load Balancer hosting the microservices. Amazon Route 53 offers advanced DNS load balancing features such as latency-based or geolocation-based routing, as well as failover to a secondary region with its built-in health check capabilities.
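
As a hedged example, the CloudFormation snippet below sketches a primary/secondary failover record pair pointing at load balancers in two regions, relying on alias target health evaluation. The hosted zone ID, domain, load balancer DNS names, and their hosted zone IDs are all placeholders.

```yaml
# Hypothetical CloudFormation resources: Route 53 failover routing between
# a primary and a secondary load balancer; all identifiers are placeholders.
Resources:
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0123456789EXAMPLE
      Name: www.example.com
      Type: A
      SetIdentifier: primary
      Failover: PRIMARY
      AliasTarget:
        DNSName: primary-alb-1234567890.us-west-2.elb.amazonaws.com   # placeholder
        HostedZoneId: Z1H1FL5HABSF5                                   # load balancer zone ID (placeholder)
        EvaluateTargetHealth: true

  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: Z0123456789EXAMPLE
      Name: www.example.com
      Type: A
      SetIdentifier: secondary
      Failover: SECONDARY
      AliasTarget:
        DNSName: standby-alb-0987654321.us-east-1.elb.amazonaws.com   # placeholder
        HostedZoneId: Z35SXDOTRQ7X7K                                  # placeholder
        EvaluateTargetHealth: true
```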

Leveraging AWS DevOps tools with Amazon EKS

To further accelerate the application modernization journey and help customers improve their SDLC best practices, you can leverage AWS DevOps tools to set up a CI/CD pipeline for each individual microservice. These fully managed DevOps tools and their native integration capabilities with Amazon EKS enable customers to easily try out new ideas and quickly roll back if something doesn’t work as planned.

Because the legacy applications are broken into microservices, a failure degrades only specific functionality rather than crashing the entire application. The risk of a total application outage while testing or upgrading a microservice is significantly reduced, which makes it easier to update code and results in fast, frequent software tests and releases.

The following diagram illustrates a sample CI/CD pipeline for microservices leveraging AWS DevOps tools such as AWS CodePipeline, AWS CodeCommit, and AWS CodeBuild, with native integrations to EKS and Amazon Elastic Container Registry (Amazon ECR).

Figure 4 – Leverage AWS DevOps tools with EKS to improve SDLC best practices.

  1. Dev team commits code to an AWS CodeCommit repository, which triggers AWS CodePipeline to start processing the code changes through the CI/CD pipeline.
  2. AWS CodeBuild packages the code changes and dependencies and builds a new Docker image.
  3. The new Docker image is pushed to Amazon ECR.
  4. CodeBuild uses kubectl to invoke the Kubernetes API and update the image tag for the microservice deployment.
  5. Kubernetes performs a rolling update of the Pods in the application deployment using the new Docker image from Amazon ECR.

For more details, please refer to the full reference architecture.
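
To make steps 2 through 4 concrete, here is a minimal buildspec sketch, assuming the build image includes Docker, the AWS CLI, and kubectl, and that the CodeBuild service role has ECR permissions and is mapped into the cluster’s access configuration. The registry, repository, cluster, and deployment names are placeholders.

```yaml
# Hypothetical buildspec.yml: builds the image, pushes it to ECR, and rolls
# the new tag out to the EKS deployment. All names and IDs are placeholders.
version: 0.2

env:
  variables:
    AWS_REGION: us-west-2
    REGISTRY: 123456789012.dkr.ecr.us-west-2.amazonaws.com   # placeholder account/registry
    REPO_NAME: orders
    CLUSTER_NAME: dev-microservices

phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $REGISTRY
      - aws eks update-kubeconfig --name $CLUSTER_NAME --region $AWS_REGION
      - IMAGE_TAG=$(echo "$CODEBUILD_RESOLVED_SOURCE_VERSION" | cut -c 1-7)
  build:
    commands:
      - docker build -t $REGISTRY/$REPO_NAME:$IMAGE_TAG .
  post_build:
    commands:
      - docker push $REGISTRY/$REPO_NAME:$IMAGE_TAG
      # Update the image tag; Kubernetes then performs the rolling update (step 5).
      - kubectl set image deployment/orders-service orders=$REGISTRY/$REPO_NAME:$IMAGE_TAG
```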

Optimizing Microservices-Based Application Deployments

The diagram below represents an overall architecture for leveraging additional AWS native services to further optimize microservices-based application deployments.

Figure 5 – Optimize microservices-based application deployments with AWS native services.

  • Amazon Elastic File System (Amazon EFS): Provides Kubernetes persistent storage for microservices running on EKS via the Container Storage Interface (CSI) driver; a sample manifest follows this list.
  • Amazon Simple Storage Service (Amazon S3): Provides flexible object storage for the microservices and offers cost-effective application backup options.
  • Amazon Route 53: Provides DNS routing and directs incoming requests to Network Load Balancer or Application Load Balancer for hosted microservices, and also offers advanced DNS load balancing and automatic failover capabilities.
  • AWS WAF: Protects web applications or APIs against common attack patterns and sophisticated web exploits.
  • Amazon CloudFront: Offers content caching and acceleration for the microservices, SSL offloading, and a single integration point for AWS WAF.
  • AWS Shield: Managed Distributed Denial of Service (DDoS) protection service that provides always-on detection and automatic inline mitigations at the internet edge.
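
For the Amazon EFS option above, a minimal sketch of dynamic provisioning looks like the following, assuming the Amazon EFS CSI driver is installed on the cluster; the file system ID and claim name are placeholders.

```yaml
# Hypothetical StorageClass and claim using the Amazon EFS CSI driver;
# the EFS file system ID is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap                # create an EFS access point per volume
  fileSystemId: fs-0123456789abcdef0      # placeholder EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany                       # EFS supports shared read/write across Pods
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                        # required field; EFS storage itself is elastic
```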

Customer Benefits

In this post, I have explained how customers can easily integrate VMware Cloud on AWS with Amazon EKS to accelerate their application transformation and modernization. Amazon EKS provides a simplified and fully managed Kubernetes platform to effortlessly manage container workloads and deploy microservices-based applications, along with multiple flexible options to publish microservices externally.

I have discussed use cases of utilizing various AWS DevOps tools to build CI/CD pipelines for the microservices, in order to improve and expedite the SDLC process. This enables customers to quickly try out new ideas and frequently roll out new services and features, delivering fast innovations to their business.

Furthermore, I have explored additional AWS native services for deploying and optimizing microservices-based applications. These additional services help customers further increase their microservices performance and scalability, improve security and reliability, as well as optimize resource efficiency and cost.

Summary

VMware Cloud on AWS provides native access to a large portfolio of AWS services that can help customers accelerate their application modernization journey.

By utilizing the native AWS services, customers can rapidly transform their applications and build containerized microservices architectures without the management overhead and operational complexity.

To learn more, we recommend you review these additional resources: