AWS Marketplace

Migrate JD Edwards EnterpriseOne seamlessly to the AWS Cloud using AWS Application Migration Service

ScaleCapacity is a cloud-native services company that helps organizations architect, migrate, and optimize their workloads on AWS. ScaleCapacity recently helped one of the largest multinational commercial real estate companies migrate its data center to the AWS Cloud. The migration involved several weeks of intensive planning, testing, and execution. One of the key workloads migrated was JD Edwards EnterpriseOne.

JD Edwards EnterpriseOne is a critical financial system that provides resource planning and supply chain management solutions for enterprises in the finance, consumer goods, human resources, distribution, and manufacturing sectors. Moving JD Edwards to the cloud required an approach that ensured seamless migration of business-critical workloads with continuity, integrity, and minimal downtime.

The migration of JD Edwards EnterpriseOne posed unique challenges due to its interdependencies with several other business applications hosted internally and a few external third-party applications. This level of integration complexity and interdependence raised business continuity and application integrity concerns during and after migration.

In this blog post, Gowri Shankar, Pawan Janakiram, and ScaleCapacity Inc. demonstrate how they migrated their customer's JD Edwards application workloads from an on-premises data center to the AWS Cloud using AWS Application Migration Service.


To accomplish this lift-and-shift solution, the following prerequisites were required:

  • Have the following services in a running state both on-premises and in the AWS Cloud.
    • Oracle Database 19c and JD Edwards EnterpriseOne (JDE) 9.1
    • Oracle Linux 6 and 7, RHEL 7, and Windows Server 2016
  • Install the Linux CLI utilities fio (Flexible I/O Tester) and dd.
  • Be proficient in using AWS Application Migration Service.
  • Gather performance metrics from the on-premises workloads, specifically the Oracle server: peak, average, and burst values for CPU, memory, I/O, and throughput.
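The post does not show how these baseline metrics were collected; a minimal sketch using the sysstat package's iostat (an assumption, not ScaleCapacity's actual tooling) might look like this, with the sample interval and count shortened for illustration:

```shell
# Capture a short disk-utilization baseline on the on-premises Oracle server.
# Assumes the sysstat package is installed; in practice you would sample far
# longer (e.g. 60-second intervals over a full business day).
SAMPLES=3
INTERVAL=1    # seconds between samples (shortened for illustration)

if command -v iostat >/dev/null; then
  # -d: device report, -x: extended stats (IOPS, throughput, await, %util)
  iostat -dx "$INTERVAL" "$SAMPLES" > iostat-baseline.txt
else
  echo "iostat not installed (sysstat package)" > iostat-baseline.txt
fi
echo "baseline written to iostat-baseline.txt"
```

The resulting file feeds directly into the sizing decisions discussed later: minimum, average, and peak IOPS and throughput per device.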

Solution overview

ScaleCapacity’s migration plan needed to meet the following key criteria:

  • Minimize application downtime during migration.
  • Maintain connectivity and interdependency between applications.
  • Tightly secure applications.
  • Improve performance against service-level agreements (SLAs).

To achieve these results, ScaleCapacity developed a migration blueprint for the JD Edwards EnterpriseOne workload that included the following steps:

  • Move low-priority applications up the migration order to uncover risks and roadblocks early, then mitigate as necessary.
  • Migrate critical applications simultaneously on migration day.
  • Identify the steps necessary to reduce downtime during cutover.


The following architecture diagram shows the source servers in the on-premises corporate data center on the left. AWS Application Migration Service was used to replicate these servers to AWS for a lift-and-shift migration. After the cut-over to AWS, the on-premises servers were backed up and decommissioned.

High-level architecture diagram: migration of JD Edwards EnterpriseOne from on-premises to the AWS Cloud using AWS Application Migration Service

Enforcing volume initialization using the fio and dd processes helped with planning the production migration cutover timelines. Note that improper use of the fio and dd utilities can result in data loss.
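The post does not show the exact commands used; the sketch below follows the initialization pattern AWS documents for reading every block of a restored volume, assuming a hypothetical device name of /dev/xvdf:

```shell
# Device to initialize; /dev/xvdf is an assumed example, not from the post.
DEVICE="${DEVICE:-/dev/xvdf}"

# fio: parallel sequential read of every block (preferred; saturates volume throughput).
FIO_CMD="fio --filename=$DEVICE --rw=read --bs=1M --iodepth=32 \
  --ioengine=libaio --direct=1 --name=volume-initialize"

# dd: slower, single-threaded alternative. Reading into /dev/null is safe;
# reversing if= and of= would overwrite the volume, hence the data-loss warning.
DD_CMD="dd if=$DEVICE of=/dev/null bs=1M status=progress"

if [ -b "$DEVICE" ]; then
  sudo $FIO_CMD    # run the read pass for real when the device is attached
else
  echo "$DEVICE not attached; command that would run: $FIO_CMD"
fi
```

Both commands only read from the device; the data-loss risk comes from mistyping the device path or direction when running dd as a write.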

Solution walkthrough

Migration planning included documentation of all application dependencies and integrations with other internal applications. ScaleCapacity used tools, where applicable, to gather this information. Once all requirements were documented, migration strategy, testing, and validation phases were planned in detail. Extensive interviews and workshops were conducted with the application owners to validate the collected information and to fill any gaps.

Applications that were to be migrated simultaneously were properly identified. This required the engagement of customer architects, partner resources, and SMEs familiar with the customer's ecosystem to identify workloads that could be migrated separately with no or minimal latency consequences, preventing business disruption.

AWS Application Migration Service was the service of choice to re-host the JD Edwards application environment from the on-premises data center to AWS.

The AWS Application Migration Service front-end migration processes are fully managed and easy to use; however, JD Edwards generates many chatty writes and reads. For latency-sensitive applications such as JD Edwards, it is essential that any post-migration processes that happen in the background are identified and accounted for.

ScaleCapacity documented utilization and performance metrics from the existing stack in the data center to enable sizing of the target infrastructure. These included minimum, average, and maximum input/output operations per second (IOPS), throughput, and latency, among other metrics. Chief among the background post-migration processes is the re-hydration of the migrated and replicated Amazon Elastic Block Store (Amazon EBS) volumes from Amazon EBS snapshots. Benchmarking the application's I/O requirements before migration makes resource optimization after migration much smoother.

Migrated EBS volumes are hydrated by loading Amazon EBS snapshot data in the background. While data is being loaded, applications that access the volume encounter much higher latency than expected. Depending on the size of the volumes, this can last for hours, during which the application may be rendered unusable. It is therefore essential to account for the full re-hydration time of EBS volumes when planning the migration cutover window.
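A back-of-the-envelope way to size that window: a full sequential read of the volume bounds the hydration time. The volume size and throughput figures below are assumed for illustration, not taken from the post:

```shell
VOLUME_GIB=500    # migrated EBS volume size in GiB (assumed figure)
READ_MIBS=250     # sustained read throughput during initialization, MiB/s (assumed figure)

# Worst-case full-read time in seconds = (GiB * 1024 MiB/GiB) / (MiB/s)
HYDRATE_SECS=$(( VOLUME_GIB * 1024 / READ_MIBS ))
echo "Estimated hydration time: ${HYDRATE_SECS}s (~$((HYDRATE_SECS / 60)) minutes)"
```

With these assumed numbers the estimate comes to roughly half an hour per volume; multiplying across all volumes that hydrate in parallel or in sequence gives a first cut at the cutover window.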

ScaleCapacity performed rigorous testing with different Amazon Elastic Compute Cloud (Amazon EC2) instance types and EBS volume types. ScaleCapacity used scripts to automatically kick off volume initialization on all available drives and used API calls to monitor the progress of initialization.
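ScaleCapacity's actual scripts are not shown in the post; a hedged sketch of such a kick-off script, with device discovery via lsblk (an assumption), might look like this:

```shell
#!/usr/bin/env bash
# Sketch: start a fio read pass on every secondary disk, skipping the boot volume.
# Defaults to a dry run; set RUN=1 to actually launch fio.
RUN="${RUN:-0}"
ROOT_DISK=$(lsblk -no PKNAME "$(findmnt -no SOURCE / 2>/dev/null)" 2>/dev/null)
STARTED=0

for dev in $(lsblk -dn -o NAME,TYPE 2>/dev/null | awk '$2 == "disk" {print $1}'); do
  [ "$dev" = "$ROOT_DISK" ] && continue    # never touch the boot volume
  CMD="fio --filename=/dev/$dev --rw=read --bs=1M --iodepth=32 \
    --ioengine=libaio --direct=1 --name=init-$dev"
  if [ "$RUN" = "1" ]; then
    sudo $CMD &                            # background so all disks hydrate in parallel
  else
    echo "would run: $CMD"
  fi
  STARTED=$((STARTED + 1))
done
wait    # block until every background initialization pass finishes
echo "initialization passes prepared: $STARTED"
```

Running the read passes in parallel lets all volumes hydrate concurrently, which is what makes the overall window predictable.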

To ensure disk hydration completion within an acceptable window, the team considered using EBS-optimized EC2 instances as well as appropriate EBS volume types. The goal was to reduce the storage cost during and after migration while maintaining the capability to migrate and hydrate the EBS volumes within the approved migration window. The following factors were considered while selecting the EC2 instance and EBS volume types for the workload servers:

  • Target EC2 instance types (e.g., x2iedn) for database servers. These instance types provide high Amazon EBS baseline throughput and IOPS, which helps expedite EBS volume hydration after migration, as well as local NVMe SSD storage that can be incorporated into the application architecture in the target environment.
  • Baseline metrics from the source environments to select the appropriate EBS volume type and EBS-optimized EC2 instance types that can handle the required throughput and IOPS after migration.
  • In cases where gp3 I/O limits satisfied the application and database server requirements, the team selected an appropriate, or larger, EC2 instance size with enough throughput and IOPS to ensure disk hydration completed within the allotted window. If necessary, the instance size was changed after migration, during compute optimization activities, with minimal downtime. If the migration window required faster hydration, other options were considered, such as io2 EBS volume types or splitting the source volumes. The team also noted baseline and maximum IOPS and throughput windows for the selected EBS-optimized instances, because many EC2 instance types can sustain maximum throughput and IOPS only for a limited period per 24 hours.
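The gp3 decision in the first case reduces to a simple comparison against gp3's published per-volume ceilings (16,000 IOPS and 1,000 MiB/s); the measured peaks below are assumed figures for illustration:

```shell
PEAK_IOPS=9000        # measured on-premises peak IOPS (assumed figure)
PEAK_MIBS=600         # measured on-premises peak throughput, MiB/s (assumed figure)
GP3_MAX_IOPS=16000    # gp3 per-volume ceiling
GP3_MAX_MIBS=1000     # gp3 per-volume ceiling

if [ "$PEAK_IOPS" -le "$GP3_MAX_IOPS" ] && [ "$PEAK_MIBS" -le "$GP3_MAX_MIBS" ]; then
  VERDICT="gp3 satisfies the measured peaks"
else
  VERDICT="consider io2, or split the data across multiple volumes"
fi
echo "$VERDICT"
```

If the peaks exceed either ceiling, the alternatives named above (io2, or splitting the source volumes so each target volume carries a share of the load) come into play.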

Most importantly, the entire process was tested multiple times with appropriate team members in all environments to document and validate the procedure and the expected outcome. This ensured well-documented post-migration validation checks before the application was made available to users after cutover.

Planning and post-migration tasks in the target environment used AWS Compute Optimizer to analyze AWS resource utilization and fine-tune the target resources. AWS and third-party backup solutions and policies were then configured to satisfy application SLA requirements. In addition, replication and disaster recovery processes were put in place to replicate data to an alternate Region. At the same time, some shared storage services were migrated to AWS managed services such as Amazon FSx.


Conclusion

In this blog post, we showed how ScaleCapacity successfully migrated the JD Edwards EnterpriseOne workload to the AWS Cloud using AWS Application Migration Service. The approach ensured the seamless migration of a business-critical workload with minimal downtime while preserving the application's continuity and integrity.

To get started, visit AWS Application Migration Service and explore AWS online training. Reach out to the authors or connect with ScaleCapacity for more information.

About the authors

Gowri Shankar Dara

Gowri Shankar Dara is a Sr. Partner Solutions Architect at AWS with nearly 20 years of experience in the IT industry. He works with partners to help migrate on-premises workloads into AWS and is passionate about building resilient workloads and machine learning.

Pawan Janakiram

Pawan Janakiram is a Sr. Partner Solutions Architect and brings extensive experience in Enterprise Architecture for Telco, Media, and Entertainment industry segments. In a career of nearly 27 years, Pawan has delivered secure, high-performance, highly available solutions to cost-sensitive business enterprises in positions of ideation, innovation, architecture, and delivery. Containerizing workloads has been his passion, and Blockchain is his area of depth.

Javeed Shaik

Javeed Shaik is a Director of Cloud Platform Engineering at ScaleCapacity. Over a career of 23 years, he has led multiple data center modernization, cloud migration, and application modernization initiatives for Fortune 100 and Fortune 500 companies and large public sector customers. His unique ability to find innovative solutions to business challenges has been a key contribution to customer success.