AWS for Industries

Ten steps to modernizing legacy monoliths in the AWS Cloud

Challenges in modernization

The journey to modernize legacy monolith applications is complex and often spans years, during which organizations face several obstacles before achieving success. A key challenge is verifying that the business processes impacted by the change continue to run seamlessly during and after the transition. Organizations must decide between a single big-bang cutover release and minimizing disruption by delivering releases in smaller cycles. In the latter option, figuring out how to break up the monolith into smaller parts can present a significant challenge.

An often overlooked yet critical step prior to initiating the modernization is to clearly identify your objectives, understand your motivations, and establish indicators of success. A reason often stated for modernization is the necessity to retire outdated systems, prompted by difficulties in maintaining older hardware or software, rising support costs, and support contracts nearing end of life. However, outdated technology isn’t reason enough for replacement. A successful modernization strategy prioritizes business goals such as cutting costs, increasing efficiency, and making the most of existing investments. Ultimately, it aims to transform legacy systems into agile, scalable, and flexible environments that deliver tangible business benefits.

Migration path

There are seven migration strategies for moving applications to the cloud, known as the 7 Rs: retire, retain, rehost, relocate, repurchase, replatform, and refactor or rearchitect. Refactoring involves modernizing the application during migration to the cloud and transforming its architecture to use cloud-based capabilities, enhancing agility, performance, and scalability. It is chosen to meet demands for faster development, scalability, and cost reduction. Typical scenarios include overcoming limitations of legacy systems, addressing the challenges monolithic applications pose for rapid delivery, managing unmaintainable legacy software, and improving testability and security.

This post describes a ten-step approach for refactoring legacy monoliths, based on experiences at Volkswagen AG and AWS (Figure 1). It advocates for decomposing monoliths by business processes and business capabilities followed by applying the strangler fig pattern. This pattern involves a gradual replacement of the legacy system’s functionalities with new services. The goal is a smooth transition, allowing the legacy and modernized systems to coexist until the full replacement is achieved.

Figure 1 – Ten-step process flow diagram for refactoring

Step 1 – Understand and outline goals, KPIs for success
Understand and outline both the business and technical goals of the modernization initiative so that they align with the organization’s overall strategic objectives. Identify the key drivers and primary motivations for modernization, such as cost reduction, improved flexibility, enhanced operational efficiency, or specific business objectives such as shorter cycle times and higher production output.

KPIs for success
To measure the success of modernization, set both business and technical key performance indicators (KPIs) to ensure alignment with customer needs and drive operational excellence. Use the goal, question, metric (GQM) framework to identify objectives and relevant metrics. Metrics can be categorized into two main groups.

1. Product metrics, which focus on delivering customer value, such as the impact of new features on business operations.
2. Operational metrics, which focus on enhancing the software delivery process, like lead time, cycle time, deployment frequency, time to restore service, and change failure rate.
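
To make the operational metrics concrete, the following minimal sketch (in Python, using invented deployment records and field names) shows how lead time, deployment frequency, and change failure rate could be derived from a team’s deployment history; it is an illustration, not a prescribed tool.

```python
from datetime import datetime, timedelta
from statistics import median

# Hypothetical deployment records; field names are illustrative only.
deployments = [
    {"committed_at": datetime(2024, 5, 1, 9), "deployed_at": datetime(2024, 5, 2, 15), "failed": False},
    {"committed_at": datetime(2024, 5, 3, 11), "deployed_at": datetime(2024, 5, 3, 18), "failed": True},
    {"committed_at": datetime(2024, 5, 6, 8), "deployed_at": datetime(2024, 5, 7, 10), "failed": False},
]

# Lead time for changes: time from commit to running in production (hours).
lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
median_lead_time_h = median(lt.total_seconds() / 3600 for lt in lead_times)

# Deployment frequency over the observed window (deployments per week,
# with a one-week floor to avoid division by a tiny window).
window = max(d["deployed_at"] for d in deployments) - min(d["deployed_at"] for d in deployments)
deploys_per_week = len(deployments) / max(window / timedelta(weeks=1), 1)

# Change failure rate: share of deployments that caused a failure in production.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Median lead time: {median_lead_time_h:.1f} h")
print(f"Deployment frequency: {deploys_per_week:.1f} per week")
print(f"Change failure rate: {change_failure_rate:.0%}")
```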

Step 2 – Map business processes and capabilities
Understanding a company’s value streams (a core concept of lean principles) and enterprise asset management (EAM) processes ensures that the identified business process improvements align with the established goals. Business process analysis (BPA) examines internal workflows to optimize processes. Creating process flow diagrams simplifies complex processes, clarifies interactions, and aids in the analysis and improvement of user-related workflows. These diagrams provide a clear overview for stakeholders and inspire ideas for enhancements, such as automating operations to increase efficiency and production.

Modernization aims not just to migrate from legacy systems but to reimagine them in a new environment. Identify current and potential business capabilities, integrating strategic themes from the scaled agile framework (SAFe). A thorough “as-is” analysis aligned with a target vision will highlight gaps for introducing new capabilities. Often, processes or capabilities exist in an application for historical, political, or technical reasons unrelated to the actual value stream. This step addresses the manual processes and system gaps caused by inflexible legacy systems. For example, routine spreadsheet calculations could be integrated into the new system to eliminate separate management, and interactions currently handled through calls and emails could be automated via event-triggered notifications.
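
As an illustration of that last point, the sketch below shows how a manual email or phone interaction could be replaced by an event-triggered notification. It assumes a hypothetical order-status event and an Amazon SNS topic; the topic ARN, event fields, and trigger condition are placeholders, not part of the system described in this post.

```python
import json
import boto3

sns = boto3.client("sns")

def on_order_status_changed(event: dict) -> None:
    """Publish a notification whenever an upstream process emits a status change."""
    if event.get("status") != "DELAYED":
        return  # only notify on the condition that previously triggered a phone call
    sns.publish(
        TopicArn="arn:aws:sns:eu-central-1:123456789012:order-alerts",  # placeholder ARN
        Subject=f"Order {event['order_id']} delayed",
        Message=json.dumps(event),
    )
```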

Step 3 – Map business capabilities to future services
This step maps the previously identified business capabilities to future services by creating a detailed capability-service map that links business requirements to technical solutions. Applications often start as monoliths tailored for specific use cases but typically face limitations in modern environments due to poor internal structures, high maintenance costs, and difficulties in onboarding new developers, all of which escalate support costs. High coupling and low cohesion can significantly delay adding new features due to extensive coordination across teams for major updates.

To address these challenges, adopt a microservices architecture, which facilitates faster development and easier scaling through continuous integration and continuous deployment (CI/CD) practices. This step identifies and defines new microservices and target APIs. The new microservices are designed with clear boundaries and specific functionalities, ensuring they can be developed, deployed, and scaled independently.
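
A capability-service map can be as simple as a structured document that traces each business capability to the service and API that will own it. The sketch below uses invented capability, service, and team names purely for illustration.

```python
# Illustrative capability-service map; names are hypothetical examples,
# not the actual domain model of any system described in this post.
capability_service_map = {
    "order-intake": {
        "service": "order-service",
        "api": ["POST /orders", "GET /orders/{id}"],
        "owner_team": "ordering",
    },
    "shipment-tracking": {
        "service": "logistics-service",
        "api": ["GET /shipments/{id}/status"],
        "owner_team": "logistics",
    },
    "invoice-generation": {
        "service": "billing-service",
        "api": ["POST /invoices"],
        "owner_team": "billing",
    },
}

def service_for(capability: str) -> str:
    """Trace a business capability to the service that will own it."""
    return capability_service_map[capability]["service"]
```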

Step 4 – Slicing the monolith and establishing product teams
Slicing the legacy monolith application involves logically organizing the identified services into coherent units and establishing their operational boundaries. These units might operate as independent applications, form parts of broader products, or span multiple applications. The slicing can be done by: (1) identifying and separating products by bounded contexts (a central pattern in domain-driven design), (2) identifying dependencies and data flows, and (3) creating a migration path with prioritization using the strangler fig pattern.

This step aims to align services with their respective systems to optimize efficiency and integration, creating a well-defined domain model that supports the organization’s strategic direction and responsiveness to change. Additionally, this step forms product teams around new or existing bounded contexts using domain-driven design principles. It considers business user distribution, identifies process owners, and establishes autonomous product teams. Aligning these teams with specific domains or contexts facilitates focused development, streamlined delivery, and better product management.
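
The strangler fig pattern ultimately comes down to a routing decision made in front of the monolith: already-migrated paths go to new services, everything else falls through to the legacy system. The minimal sketch below illustrates that idea; in practice the routing usually lives in an API gateway, load balancer, or reverse-proxy rule set rather than in application code, and the endpoints and path prefixes shown are placeholders.

```python
import urllib.request

LEGACY_BASE = "https://legacy.example.internal"
MIGRATED_ROUTES = {
    "/orders": "https://orders.example.internal",       # already strangled out
    "/shipments": "https://logistics.example.internal",
}

def route(path: str) -> str:
    """Return the base URL that should serve this request."""
    for prefix, new_base in MIGRATED_ROUTES.items():
        if path.startswith(prefix):
            return new_base
    return LEGACY_BASE  # everything not yet migrated stays on the monolith

def forward(path: str) -> bytes:
    """Forward the request to whichever system currently owns the path."""
    with urllib.request.urlopen(route(path) + path) as resp:
        return resp.read()
```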

Step 5 – Identify data flows and establish an integration layer
Map internal and external upstream and downstream data flows using a data flow diagram, identifying the true data owners, as legacy systems often become the primary data source for downstream systems over time. This helps route data directly from original sources to new applications, reducing reliance on the legacy system. Account for downstream data flows by assigning new data responsibilities that the newly created applications will assume once they are deployed and operational. By diverting the data flow away from the legacy system, you can minimize the data sync needed between the legacy system and the new applications until the full transition. Additionally, evaluate how reporting and analytics functions will adapt post-modernization, deciding between decentralized or centralized reporting, and determining their new data sources.

To build an effective integration layer, assess current communication protocols and determine future changes to facilitate data flows between applications. The integration layer must address three key data flows: (1) data sync between legacy and new applications, (2) upstream and downstream data responsibilities assigned to the newly created applications, and (3) internal data flows within the new systems.
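
One common way to realize the internal data flows of such an integration layer is an event bus to which the new applications publish domain events instead of writing into the legacy database. The sketch below assumes Amazon EventBridge; the bus name, event source, and payload are illustrative only.

```python
import json
import boto3

events = boto3.client("events")

def publish_part_updated(part_id: str, plant: str) -> None:
    """Publish a domain event to the shared integration bus (placeholder names)."""
    events.put_events(
        Entries=[
            {
                "EventBusName": "modernization-integration-bus",  # placeholder
                "Source": "parts-service",
                "DetailType": "PartMasterDataUpdated",
                "Detail": json.dumps({"part_id": part_id, "plant": plant}),
            }
        ]
    )
```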

Step 6 – Identify and structure shared and platform-wide services
After identifying capabilities and mapping them to services, it will become apparent that some services are common and can be shared. Classify these as either shared services or foundational platform services. Shared services cater to cross-cutting concerns that span multiple applications, such as an authentication and authorization service for secure access control, a notification system for alerts and updates, or centralized logging. Platform services are fundamental components, like an integration service that facilitates the construction of event-driven architectures across the entire platform. This strategic classification into shared or platform services streamlines operations by removing redundant capabilities and fostering a cohesive, efficient application environment. These services provide a scalable and consistent framework, supporting a range of applications through standardization and reuse.

Step 7 – Document nonfunctional requirements
Detail critical nonfunctional aspects, such as latency, throughput, and data residency. Understanding these elements is essential for determining deployment strategies and ensuring modernized applications meet performance benchmarks, adhere to regulatory and compliance standards, and align with the organization’s operational objectives. This step guides the optimization of resources, enhances security measures, and provides scalability and reliability. By clearly defining these requirements, teams can set clear expectations, facilitate effective planning and testing, and deliver solutions that provide a seamless user experience and robust operational continuity.

Step 8 – Make technology choices and create a target state design
Determine the appropriate technology stack for the application, including the database, backend services, and frontend components. Use containerization for backend services to enhance portability and scalability, aligning with the organization’s platform choices and team expertise. Incorporate principles such as modularity and adherence to industry standards in technology decisions.

Draft a detailed target state design outlining the application and data architecture, target data model, integration points, data flows, API endpoint definitions, and service interactions. The aim is to create a coherent blueprint that captures the envisioned end state of the system, aligns with best practices, supports the long-term vision, and optimizes for future development while quickly adapting to changing business needs.

Step 9 – Develop an MVP
Plan the deployment of the first set of minimum viable product (MVP) applications, focusing on those with low complexity, few dependencies, and high business value for early decoupling. This enables rapid demonstration of business value and sets up the necessary implementation processes. Prioritize MVPs based on factors like user onboarding speed and geographical impact to maximize immediate benefits. Starting with manageable, low-risk projects facilitates early successes, builds trust, and establishes a scalable modernization foundation.
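
MVP prioritization can be supported by a simple weighted score over the factors named above. The sketch below uses invented candidates and weights; the point is the trade-off between business value, complexity, and coupling, not the specific numbers.

```python
# Illustrative MVP-candidate scoring; candidate data and weights are invented.
candidates = [
    {"name": "shipment-tracking", "business_value": 8, "complexity": 3, "dependencies": 1},
    {"name": "order-intake",      "business_value": 9, "complexity": 7, "dependencies": 4},
    {"name": "reporting-extract", "business_value": 5, "complexity": 2, "dependencies": 0},
]

def mvp_score(candidate: dict) -> float:
    # Favor high business value; penalize complexity and coupling.
    return (0.5 * candidate["business_value"]
            - 0.3 * candidate["complexity"]
            - 0.2 * candidate["dependencies"])

for c in sorted(candidates, key=mvp_score, reverse=True):
    print(f"{c['name']}: {mvp_score(c):.1f}")
```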

Following this approach, product teams will launch MVP applications assessed against predefined success criteria. If successful, expand modernization using a split-and-seed strategy where team members form new groups to develop further applications, supporting a steady progression.

Step 10 – Create a rollout strategy and data migration plan
A well-planned strategy for rollout, data migration, and cutover is crucial for a seamless transition and successful application launch. Minimize risks by using canary releases for specific user groups and coordinating any necessary downtime with business teams. For mission-critical applications, use temporary solutions like queues to manage incoming requests and redirect them to the new system once it is operational. Operate the existing system alongside the new MVPs until the new system fully takes over, keeping data consistent in both systems and making actions transactionally secure, with any migration-specific code designed for later removal.
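
A canary release needs a stable way to decide which users see the new system. The minimal sketch below shows only that selection logic; in an AWS deployment the same effect is usually achieved with weighted target groups or weighted DNS records, and the percentage and system names here are arbitrary examples.

```python
import hashlib

CANARY_PERCENT = 10  # share of users routed to the modernized system (example value)

def target_system(user_id: str) -> str:
    """Deterministically assign a user to the new or legacy system."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "new-system" if bucket < CANARY_PERCENT else "legacy-system"
```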

Plan data migration and data synchronization ahead of the cutover, considering data volume and migration performance. AWS provides a comprehensive suite of data transfer services, such as AWS Database Migration Service (AWS DMS) for database migrations and AWS DataSync to automate and accelerate moving data between on-premises storage and AWS Storage services. The AWS Transfer Family securely scales your recurring business-to-business file transfers to AWS Storage services. These services can streamline the integration of on-premises systems with cloud storage, facilitating both online and offline data migrations.
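
As an illustration, the following sketch creates an AWS DMS replication task that performs a full load followed by ongoing change data capture, which supports the parallel-run period described earlier. All ARNs and the table-mapping rule are placeholders for your own source, target, and replication instance.

```python
import json
import boto3

dms = boto3.client("dms")

# Example table-mapping rule: replicate every table in a hypothetical "orders" schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-orders-schema",
            "object-locator": {"schema-name": "orders", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="legacy-to-new-orders",
    SourceEndpointArn="arn:aws:dms:eu-central-1:123456789012:endpoint:SRC",   # placeholder
    TargetEndpointArn="arn:aws:dms:eu-central-1:123456789012:endpoint:TGT",   # placeholder
    ReplicationInstanceArn="arn:aws:dms:eu-central-1:123456789012:rep:INST",  # placeholder
    MigrationType="full-load-and-cdc",  # full load plus ongoing replication
    TableMappings=json.dumps(table_mappings),
)
```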

Address operational needs by gradually transitioning from legacy operations to a robust DevOps approach, and prepare for potential rollback if issues arise post-launch. Refine change control processes to achieve faster agile delivery and accelerate time to market. While running the legacy system in parallel with the new MVPs, plan and schedule the decommissioning of legacy functionalities once the modernized application meets its objectives.

An example legacy modernization at Volkswagen AG

This example of an IT transformation at Volkswagen AG (VW) illustrates how teams utilized certain patterns and steps to upgrade a legacy monolith application, adding new features and cross-process functionalities while reducing business risk during migration to a modern AWS cloud infrastructure (Figure 2).

Figure 2 – Volkswagen migration process example

After outlining the business processes, the legacy system was sliced process by process, with each slice overseen by a dedicated business owner who made the final decisions on the capabilities of the new system. Because knowledge of the legacy system had declined and its documentation was outdated, multiple stakeholder interviews were held with the business owners to define the new product capabilities. The VW teams prioritized MVPs based on goals, business impact, dependencies, synergies, and the overall complexity of the module to be implemented. Rollout plans accounted for dependent systems needing modernization and were executed brand by brand. Because some brands continued to use the legacy system, data had to be synchronized from the modernized system back into the legacy system. Establishing a common integration layer was a key step, enabling new data exchange between source, modernized, and downstream systems, and syncing data back to the legacy system.

Lessons learned

In projects modernizing complex, monolithic systems, similar challenges have occurred again and again.

  • The systems often had a common database, usually a central relational database shared by all the modules of the system.
  • The data flows had grown over the years based on requirements and were only revised system-wide in rare cases.
  • With new requirements, new capabilities were added to the systems as add-ons without removing outdated capabilities, so the complexity continued to grow.
  • The individual modules were tightly coupled to each other, and the initially clear separation of responsibilities and boundaries between the modules became blurred in many places.

Taken together, these points meant the following.

  • Systems could only be expanded at great expense.
  • Knowledge of the implemented capabilities declined.
  • New use cases could only be implemented slowly or at great expense.

Often the renewal could not be carried out in a greenfield context or as a big-bang replacement. The challenge was always that the system to be replaced had to be operated productively in parallel with the new system over an extended period. This meant that data had to be kept synchronized on both sides and transactions had to be carried out securely.

To achieve a successful renewal in such cases, it was important to:

  • Understand the vision of the organization and its stakeholders.
  • Know the business processes, value streams, and information flows within the respective domain.
  • Derive the necessary capabilities for the target system to support the business and technical goals.
  • Develop a strategy for the migration.

Conclusion

Modernizing legacy monolith applications is a strategic journey that requires careful planning, execution, and monitoring. Following the ten steps in this blog helps organizations address the complexities of legacy modernization and meet evolving demands. Key elements include aligning with business goals, using cloud capabilities, ensuring seamless data migration, and continuously measuring performance against key metrics. The rewards of a successful transformation are vast, unlocking new opportunities for growth and innovation alongside business benefits. For guidance on modernizing your legacy applications and to discover how AWS can aid in your modernization journey, visit the AWS for Automotive page, or contact your AWS team today.

Chandana Keswarkar

Chandana Keswarkar is a Senior Solutions Architect at AWS, who specializes in guiding automotive customers through their digital transformation journeys by using cloud technology. She helps organizations develop and refine their platform and product architectures and make well-informed design decisions. In her free time, she enjoys traveling, reading, and practicing yoga.

Dr. André Moetz

Dr. André Moetz is a Senior Product Manager at Volkswagen AG, with broad experience in IT innovation and modernization product development in the areas of purchasing, production, and logistics. He values data-driven value streams and decision-making and therefore strives for continuous data harmonization and superior data quality. Outside of this, he loves to backpack and hike in the mountains as well as jog in the surrounding forests.

Jens Starke

Jens Starke is a Senior Product Manager and Business Partner Manager at Volkswagen AG. As an expert in the field of digitalization, he works on the transformation and modernization of legacy systems and product development in the domain of production and logistics with a focus on customer-specific smart manufacturing and shop floor solutions. Outside of Volkswagen, he works as a trainer and coach to share his passion for agile transformation, cultural change and conscious leadership to create a supportive and positive organizational and work culture.