AWS Partner Network (APN) Blog

Re-Architecting the Application Journey to Cloud-Native Using an AWS Services-Based API Factory Model and Jump-Start Kit

By Ramprasad Nagaraja, CTO, Cloud Native Engineering – Tech Mahindra
By Harish Sidenur, Enterprise Architect, Cloud Native Engineering – Tech Mahindra
By Kannan Ganesan, Solution Architect, Cloud Native Engineering – Tech Mahindra
By Nitin Chahar, Senior Partner Solution Architect – AWS


A Tech Mahindra customer from the Energy and Utility vertical in the U.S. had a vision for cloud adoption and application programming interface (API) development in a multi-cloud environment.

They wanted to transition all appropriate old and new applications to an event-driven architecture while moving away from trigger-based synchronization of data between the different systems.

To realize this vision, the customer partnered with Tech Mahindra to build a cloud-native API Factory Model. Using this model, an existing application called Master Premise System (MPS) was re-architected to move it from on-premises to hybrid cloud.

The Master Premise System is the system of record for Premises and Service Points. A Premise is a service location or real-estate location represented through a physical address, and a Service Point is the point of connection between the facilities of the serving utility and the premises.

In this post, we highlight the key aspects of the API Factory Model using AWS services. Additionally, we describe how this model was used for re-architecting the on-premises application (MPS) to a cloud-native application.

Tech Mahindra is an AWS Advanced Consulting Partner and Managed Service Provider (MSP) that specializes in digital transformation, consulting, and business re-engineering solutions. Their expertise in automation and Amazon Web Services (AWS) helped them re-architect the customer’s applications using AWS native services.

Solution Overview

For this project, Tech Mahindra envisaged different styles of cloud migration depending on application workloads. The initial pilot was proposed for applications that warranted re-architecting. As part of this process, Tech Mahindra recommended an event-driven microservices-based API development.

Considering the API as a product, two distinct lifecycle phases, “Produce” and “Consume,” were defined. This involved full lifecycle API delivery and management, backed by a development powerhouse using established processes and tools for development and testing.

With API as the recommended approach to drive the re-architecture of application workloads, the requirements were gathered for the candidate application both from a functional and non-functional perspective.

The new system has all the functionalities of the old application, along with the following non-functional requirements:

  • The new system would continue to interact with three other on-premises systems that consume its data regularly.
  • The customer’s preference was to have a third-party continuous integration/continuous delivery (CI/CD) pipeline with application deployment on AWS.
  • The design must comply with the internal semantic model (JIM) for the API specifications and use PostgreSQL as the foundation database.
  • AWS services (or other open source tools) should align with the client’s Enterprise Information Management (EIM) reference architecture.

Considering these requirements, a solution that is hybrid, multi-cloud, and complies with Common Information Model (CIM) standards for the Energy and Utility domain was required.

Some of the workloads considered were:

  1. Linux Build Agent: To help pull the code from the other cloud platform repos, build Docker container images, and deploy them to the target.
  2. Microservices: To implement the business logic and expose REST API Endpoints for consumption from the customer’s applications and other services.
  3. Anti-Corruption Layer Apps: To use the “Publish” and “Consume” apps for integrating with on-premises legacy systems through messaging services.
  4. Blazor WebAssembly SPA (Single Page Application): To demonstrate the UI/UX and develop CRUD screens, by using Microsoft Blazor, a web framework.

Tech Mahindra also developed an AWS jump-start kit in the form of an AWS guidance document to help quick-start the solution design. The document described the relevant AWS services and best practices for a given use case, highlighting process tools and the development and testing guidelines.

Because the implementation targeted a hybrid environment, AWS services for the application’s business and presentation layers were chosen accordingly.

For the presentation layer, per the customer’s EIM guideline, Tech Mahindra created single-page applications (SPAs) using C# and .NET Core, utilizing a component-based architecture.

The choice of business layer components was done with design considerations for scalability, high availability, and performance.

MPS Application Solution Details

AWS services were selected in line with the client’s EIM reference architecture and the application landscape. This included capabilities such as scalability, performance considerations (including average response time, request rate, traffic composition, and application availability), and the multi-cloud hybrid environment.

Business Drivers

The customer wanted to transform their IT landscape into a truly cloud-native suite of applications leveraging the capabilities of a hybrid model.

Cost

Considering this was an enterprise-level application with predominantly structured data, Amazon Relational Database Service (Amazon RDS) was used to provide cost-efficient, resizable capacity (charges accrue per instance rather than per read/write/update operation).

Speed

Amazon Elastic Compute Cloud (Amazon EC2) was used to host the application to avoid delays in start time. The microservices at the core of the solution were built on .NET 5 and run in Docker containers on an Amazon Linux EC2 instance, orchestrated by Docker Compose.
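
For illustration, a minimal Docker Compose sketch shows how two such containerized microservices might run side by side on the EC2 instance; the service and image names are hypothetical, not the project’s actual configuration:

```yaml
version: "3.8"

services:
  # Hypothetical premise microservice; image names are illustrative only.
  premise-service:
    image: mps/premise-service:latest
    ports:
      - "5001:80"
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    restart: unless-stopped

  servicepoint-service:
    image: mps/servicepoint-service:latest
    ports:
      - "5002:80"
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
    restart: unless-stopped
```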

Security

AWS Secrets Manager was used to store the RDS connection string securely rather than placing it in the application settings. The EC2 instance was configured with an AWS Identity and Access Management (IAM) role that enables the application running on the instance to read the secrets through the AWS SDK.

Microservices read the connection string directly from AWS Secrets Manager and use it to connect to RDS. Amazon Cognito was used for authentication and authorization.
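
As a minimal sketch of that flow (the secret name is hypothetical), a microservice can fetch the connection string at startup through the AWS SDK for .NET and open a PostgreSQL connection:

```csharp
// Minimal sketch, assuming a secret named "mps/rds/connection" holds the
// PostgreSQL connection string; the EC2 instance role supplies credentials,
// so no keys appear in application settings.
using System;
using System.Threading.Tasks;
using Amazon.SecretsManager;
using Amazon.SecretsManager.Model;
using Npgsql;

public static class SecretsExample
{
    public static async Task<string> GetConnectionStringAsync()
    {
        using var client = new AmazonSecretsManagerClient();
        var response = await client.GetSecretValueAsync(new GetSecretValueRequest
        {
            SecretId = "mps/rds/connection" // hypothetical secret name
        });
        return response.SecretString;
    }

    public static async Task Main()
    {
        var connectionString = await GetConnectionStringAsync();
        await using var conn = new NpgsqlConnection(connectionString);
        await conn.OpenAsync();
        Console.WriteLine("Connected to RDS PostgreSQL.");
    }
}
```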

Figure 1 – UI/UX integration approach.

Amazon Cognito was integrated with the corporate Microsoft Active Directory to enable single sign-on (SSO) for users. The Blazor App uses the Amazon Cognito User Pool to authenticate users through OIDC protocol. The tokens obtained from the Amazon Cognito User Pool would be passed along with all the requests to Amazon API Gateway.

The API Gateway’s Cognito Authorizer validates the ID token before passing on the request to the microservices. The microservices also receive and validate the ID token, and authorize the user based on the Cognito User Pool Groups that the user belongs to.
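
As an illustration of the microservice-side validation, a minimal ASP.NET Core (.NET 5) configuration might look like the following; the region, user pool ID, app client ID, and group name are placeholders, not the project’s actual values:

```csharp
// Hedged sketch: JWT bearer validation of Cognito-issued tokens in a
// microservice. Cognito publishes its signing keys at
// {Authority}/.well-known/jwks.json, so signatures are verified automatically.
using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.Extensions.DependencyInjection;

public static class AuthConfig
{
    public static void ConfigureCognitoAuth(IServiceCollection services)
    {
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority =
                    "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX";
                options.TokenValidationParameters =
                    new Microsoft.IdentityModel.Tokens.TokenValidationParameters
                    {
                        // The ID token's "aud" claim is the app client ID.
                        ValidateAudience = true,
                        ValidAudience = "your-app-client-id" // hypothetical
                    };
            });

        // Authorize users by the Cognito User Pool Group carried in the token.
        services.AddAuthorization(options =>
            options.AddPolicy("PremiseEditors",
                policy => policy.RequireClaim("cognito:groups", "premise-editors")));
    }
}
```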

Scalability

Amazon API Gateway REST API was used to expose APIs to the customer’s applications. REST API was selected to leverage API management features like API keys and usage plans, which would help to monitor and control the usage of APIs from external users. The APIs were configured to be authenticated through Cognito Authorizer.

The microservices were connected to Amazon API Gateway through a Network Load Balancer and virtual private cloud (VPC) link.
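
For illustration, a hedged sketch with the AWS SDK for .NET shows how a usage plan and API key of the kind described above might be configured; the API ID, stage, and limits are placeholders:

```csharp
// Hedged sketch: creating a usage plan and API key to meter external callers.
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.APIGateway;
using Amazon.APIGateway.Model;

public static class UsagePlanExample
{
    public static async Task CreatePlanAsync()
    {
        using var apigw = new AmazonAPIGatewayClient();

        var plan = await apigw.CreateUsagePlanAsync(new CreateUsagePlanRequest
        {
            Name = "mps-partner-plan", // hypothetical
            ApiStages = new List<ApiStage>
            {
                new ApiStage { ApiId = "a1b2c3d4e5", Stage = "prod" } // hypothetical
            },
            Throttle = new ThrottleSettings { RateLimit = 50, BurstLimit = 100 },
            Quota = new QuotaSettings { Limit = 100000, Period = QuotaPeriodType.MONTH }
        });

        var key = await apigw.CreateApiKeyAsync(new CreateApiKeyRequest
        {
            Name = "mps-partner-key", // hypothetical
            Enabled = true
        });

        // Attach the key to the plan so its requests count against the quota.
        await apigw.CreateUsagePlanKeyAsync(new CreateUsagePlanKeyRequest
        {
            UsagePlanId = plan.Id,
            KeyId = key.Id,
            KeyType = "API_KEY"
        });
    }
}
```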

Others

The entity structure exposed through the APIs conforms to the CIM structure, follows standard naming of properties, and is nested by nature. The flat data in the database was transformed to a CIM structure before sending the response.

Similarly, input requests in a CIM structure were transformed to the flat database structure before processing. This enabled a first-of-its-kind CIM-based API data exchange model in the Energy and Utility industry.
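
To make the transformation concrete, here is an illustrative sketch; the class and property names are hypothetical stand-ins, not the project’s actual JIM/CIM model:

```csharp
// Hedged sketch: mapping a flat database row to a nested, CIM-style entity.
public class PremiseRow // flat shape, as stored in PostgreSQL
{
    public string PremiseId { get; set; }
    public string StreetAddress { get; set; }
    public string City { get; set; }
    public string ServicePointId { get; set; }
}

public class CimAddress
{
    public string StreetDetail { get; set; }
    public string TownDetail { get; set; }
}

public class CimServicePoint
{
    public string MRID { get; set; } // CIM-style master resource identifier
}

public class CimPremise // nested shape, as exposed through the API
{
    public string MRID { get; set; }
    public CimAddress MainAddress { get; set; }
    public CimServicePoint ServicePoint { get; set; }
}

public static class CimMapper
{
    public static CimPremise ToCim(PremiseRow row) => new CimPremise
    {
        MRID = row.PremiseId,
        MainAddress = new CimAddress
        {
            StreetDetail = row.StreetAddress,
            TownDetail = row.City
        },
        ServicePoint = new CimServicePoint { MRID = row.ServicePointId }
    };
}
```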

Resilience

The solution leveraged Amazon Simple Notification Service (SNS) FIFO topics and Amazon Simple Queue Service (SQS) FIFO queues for data communication between systems. Both were used to ensure deduplication and ordered delivery of messages.

The solution used separate SNS-SQS pairs for handling incoming data from on-premises systems to microservices and outgoing data from microservices to on-premises systems. When publishing notifications to SNS FIFO, the SNS notification “message” held the serialized entity, while the message attributes were used to specify the operation (create/edit) and original source of data.
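
A hedged sketch of such a publish call with the AWS SDK for .NET follows; the topic ARN, attribute names, and deduplication scheme are illustrative:

```csharp
// Hedged sketch: publishing an entity change to an SNS FIFO topic.
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SimpleNotificationService;
using Amazon.SimpleNotificationService.Model;

public static class PublishExample
{
    public static async Task PublishPremiseChangeAsync(
        string serializedEntity, string entityId, long version)
    {
        using var sns = new AmazonSimpleNotificationServiceClient();
        await sns.PublishAsync(new PublishRequest
        {
            TopicArn = "arn:aws:sns:us-east-1:123456789012:premise-events.fifo", // hypothetical
            Message = serializedEntity, // serialized entity travels in the body
            MessageGroupId = entityId,  // preserves per-entity ordering
            MessageDeduplicationId = $"{entityId}:{version}", // deterministic, so retries deduplicate
            MessageAttributes = new Dictionary<string, MessageAttributeValue>
            {
                // Attributes carry the operation and the original data source.
                ["operation"] = new MessageAttributeValue
                {
                    DataType = "String",
                    StringValue = "edit"
                },
                ["source"] = new MessageAttributeValue
                {
                    DataType = "String",
                    StringValue = "MPS"
                }
            }
        });
    }
}
```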

There were dependencies on legacy systems that the application accessed from its on-premises data center and that were not moving to the cloud immediately. After considering several factors, an Anti-Corruption Layer was adopted as the integration mechanism to provide isolation, with SNS and SQS powering the communication.

Data created or updated by on-premises systems was pushed to the Publish app. This data was forwarded to the respective microservices through SNS and SQS pairs, in accordance with the event-driven model.

Similarly, data created or updated by microservices was pushed to the Consume app through SNS FIFO and SQS FIFO pairs. The Consume app further distributed the data to on-premises systems.
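
For illustration, a minimal polling loop for the Consume side might look like this; the queue URL is a placeholder, and raw message delivery is assumed on the SNS subscription so the attributes arrive unwrapped:

```csharp
// Hedged sketch: long-polling an SQS FIFO queue subscribed to the SNS FIFO topic.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

public static class ConsumeExample
{
    public static async Task PollAsync()
    {
        using var sqs = new AmazonSQSClient();
        const string queueUrl =
            "https://sqs.us-east-1.amazonaws.com/123456789012/premise-events.fifo"; // hypothetical

        var response = await sqs.ReceiveMessageAsync(new ReceiveMessageRequest
        {
            QueueUrl = queueUrl,
            MaxNumberOfMessages = 10,
            WaitTimeSeconds = 20, // long polling
            MessageAttributeNames = new List<string> { "All" }
        });

        foreach (var message in response.Messages)
        {
            // Forward message.Body to the downstream on-premises system here,
            // then delete the message so it is not redelivered.
            Console.WriteLine($"Received: {message.MessageId}");
            await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
        }
    }
}
```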

Figure 2 – External integration: anti-corruption layer.

The presentation layer was built with Microsoft Blazor WebAssembly on .NET 5. As a typical SPA, it gets downloaded to the browser and runs locally. The app used Amazon Cognito for authentication and authorization. It was hosted in an Amazon Simple Storage Service (Amazon S3) bucket and distributed through Amazon CloudFront.

On first load, the app redirected to the Cognito login, allowing Cognito to present the login user interface (UI). After the user successfully logged in, Cognito redirected the user back to the application hosted on CloudFront. The application further used Cognito User Pool Groups to authorize the user for various functionalities.
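
A minimal sketch of this wiring in the Blazor WebAssembly app’s Program.cs follows; the authority URL and client ID are placeholders, and root component registration from the standard template is omitted for brevity:

```csharp
// Hedged sketch: pointing the Blazor WebAssembly OIDC client at the Cognito
// User Pool so the first load redirects to the Cognito-hosted login.
using Microsoft.AspNetCore.Components.WebAssembly.Hosting;
using Microsoft.Extensions.DependencyInjection;

public static class Program
{
    public static async System.Threading.Tasks.Task Main(string[] args)
    {
        var builder = WebAssemblyHostBuilder.CreateDefault(args);

        builder.Services.AddOidcAuthentication(options =>
        {
            options.ProviderOptions.Authority =
                "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXXXXXXX";
            options.ProviderOptions.ClientId = "your-app-client-id"; // hypothetical
            options.ProviderOptions.ResponseType = "code"; // authorization code flow
            options.ProviderOptions.DefaultScopes.Add("email");
        });

        await builder.Build().RunAsync();
    }
}
```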

A DevOps pipeline was built on a third-party cloud per the customer’s preference, but the application was deployed on AWS with tool-based automation implemented at different stages.

Lowering Metering Cost

Running the different services on AWS provided a scalable, pay-as-you-go pricing model based on the volume of data and the compute time consumed as data moves through the event-driven model.

With different types of users, especially in the development phase, it was necessary to control the cost for some of the services, such as EC2, Amazon API Gateway, and Amazon RDS. The optimized costing was based on:

  • Right-sizing of EC2.
  • Stopping an Amazon RDS DB instance.
  • Stopping unused instances.

However, restarting these services was time consuming, and a budgetary guardrail was also required. Hence, AWS Budgets was configured for cost allocation, monitoring, and threshold alerts. API Gateway usage plans and API keys were configured to allow access to selected APIs at agreed-upon request rates and quotas.
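
As an illustration of the AWS Budgets configuration, a hedged sketch with the AWS SDK for .NET follows; the account ID, limit, and subscriber address are placeholders:

```csharp
// Hedged sketch: a monthly cost budget with an 80 percent threshold alert.
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.Budgets;
using Amazon.Budgets.Model;

public static class BudgetExample
{
    public static async Task CreateDevBudgetAsync()
    {
        using var budgets = new AmazonBudgetsClient();

        await budgets.CreateBudgetAsync(new CreateBudgetRequest
        {
            AccountId = "123456789012", // hypothetical account
            Budget = new Budget
            {
                BudgetName = "mps-dev-monthly", // hypothetical
                BudgetType = BudgetType.COST,
                TimeUnit = TimeUnit.MONTHLY,
                BudgetLimit = new Spend { Amount = 500, Unit = "USD" }
            },
            NotificationsWithSubscribers = new List<NotificationWithSubscribers>
            {
                new NotificationWithSubscribers
                {
                    // Alert when actual spend crosses 80 percent of the limit.
                    Notification = new Notification
                    {
                        NotificationType = NotificationType.ACTUAL,
                        ComparisonOperator = ComparisonOperator.GREATER_THAN,
                        Threshold = 80
                    },
                    Subscribers = new List<Subscriber>
                    {
                        new Subscriber
                        {
                            SubscriptionType = SubscriptionType.EMAIL,
                            Address = "cloud-team@example.com" // hypothetical
                        }
                    }
                }
            }
        });
    }
}
```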

Resource Provisioning

Multiple environments like Dev, QA, and Production needed to have all of the required resources and configuration. Recreating them was a time-consuming effort and prone to errors.

AWS addresses this challenge with AWS CloudFormation, a service that manages a collection of AWS resources by automating the creation and termination of infrastructure, services, and applications.

Tech Mahindra created CloudFormation templates for the different services used in the application architecture; the AWS Cloud Development Kit (AWS CDK) was an alternative. Either approach provides quick and reliable provisioning of the services, or “stacks,” in different environments.

Separate templates were created for each resource instead of a consolidated template for ease of maintenance. This approach allows developers to easily update or replicate the stacks as needed, allowing for automatic rollbacks, automated state management, and management of resources across accounts and regions.
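
For illustration, a minimal AWS CDK sketch in C# shows how per-resource stacks can be instantiated once per environment; the stack and resource names are hypothetical:

```csharp
// Hedged sketch (CDK v1-style): one stack per resource, instantiated per
// environment, mirroring the one-template-per-resource approach above.
using Amazon.CDK;
using Amazon.CDK.AWS.S3;

public class SpaHostingStack : Stack
{
    public SpaHostingStack(Construct scope, string id, IStackProps props = null)
        : base(scope, id, props)
    {
        // Bucket that hosts the Blazor WebAssembly SPA artifacts.
        new Bucket(this, "SpaBucket", new BucketProps
        {
            Versioned = true,
            RemovalPolicy = RemovalPolicy.RETAIN
        });
    }
}

public static class CdkApp
{
    public static void Main()
    {
        var app = new App();

        // The same code provisions Dev, QA, and Production reproducibly.
        new SpaHostingStack(app, "Dev-SpaHostingStack");
        new SpaHostingStack(app, "QA-SpaHostingStack");
        new SpaHostingStack(app, "Prod-SpaHostingStack");

        app.Synth();
    }
}
```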

Customer Benefits

Hybrid Model of Application Usage

By combining an agile development cycle with controlled costs, the end-user experience remained the same even though data now moved between the on-premises and cloud applications. In fact, the experience improved as synchronization moved from trigger- and schedule-based to real time.

Scalable Governance Process

As applications and accounts get added, there’s a need to address AWS account management, cost control, security, and compliance processes. This was achieved through automation and the AWS centralized management toolset.

CIM Model Implementation

All standards for data interchange in the Energy and Utility industry were considered in creating the CIM data exchange model, which can be extended to any Energy and Utility CIM-based API exchange standard.

Agility in Application Development

An automated CI pipeline using third-party DevOps services and continuous deployments on AWS led to faster delivery.

Infrastructure as Code (IaC) for Rapid Provisioning

The creation of CloudFormation templates for different services automated the creation of environments (Dev, QA, and Production).

Integration of Third-Party Tools

Tools such as XUnit, SonarQube, Postman, and Selenium were integrated in the pipeline for seamless quality delivery.

Positioned for the Future

Tech Mahindra’s Energy and Utility customer is geared for its long-term vision of moving to an API-based, event-driven model by leveraging the API Factory Model and AWS jump-start kit.

The API Factory provides the following benefits:

  • The core team can focus on integrating the services and APIs produced, rather than on development.
  • The adoption of an event-driven model mandates robust planning, conformance to the defined standards, and a streamlined governance structure.
  • Services and APIs become the pervasive fabric of all applications, leading to highly integrated processes in less time.
  • Reusability and componentization make application development cost-effective.
  • Focused teams with increased manageability contribute to business agility and innovation.

The jump-start kit provides best practices, guidance, and a framework by leveraging AWS capabilities and services. The guidance provides an experienced point of view about usage of AWS capabilities and services. This, in turn, enables application developers to migrate or develop cloud applications rapidly.

Implementation of a CIM-compliant solution in the Energy and Utility segment provides the advantages of:

  • Speed of integration with introduction of new applications.
  • Interoperability with future applications.
  • Reduction of data modeling and schema design effort.
  • Economical maintenance in the long term.
  • Traceability of transactions across the enterprise.

Conclusion

The customer’s EIM, simplified data sharing, and controlled costs were the basis for a cloud-first IT business strategy. This was enabled by using AWS services as the core technology and framework modules.

With this new digital technology infrastructure, Tech Mahindra’s customer is poised to work from anywhere, anytime, and from any device. This project achieved their vision of moving the IT landscape into a truly cloud-native suite of applications by leveraging the capabilities of a hybrid model.

As growth occurs, the cloud computing environment will support future initiatives and expansion.



Tech Mahindra – AWS Partner Spotlight

Tech Mahindra is an AWS Advanced Consulting Partner and MSP that specializes in digital transformation, consulting, and business re-engineering solutions.

Contact Tech Mahindra | Partner Overview
