Using Amazon DynamoDB and Amazon RDS, we can handle any level of transaction volume. When hundreds of millions of people are relying on you, nothing could be more important.
Phani Bhamidipati, Senior Manager, Software Development, Amazon.com

Across its many business entities, Amazon processes more than 20 billion financial transactions each month, including accounts receivable, accounts payable, royalties, amortizations, remittances, payments, and cash.

The Financial Ledger and Accounting Systems Hub (FLASH) is a suite of microservices that ingests these financial transactions, performs complex and business-critical functions to substantiate the general ledger, and generates financial statements such as balance sheets, cash flow statements, and income statements.

Over the years, FLASH grew to require more than 90 Oracle databases, totaling more than 120 terabytes of data and growing. “Simply maintaining and administering the database technology took hundreds of hours per month,” says Marisamy Krishnan, senior technical program manager at Amazon.

Hardware provisioning for the on-premises solution was complicated and inefficient, costing the FLASH team nearly 100 hours annually. At one point, some of the FLASH services were running on the largest available Oracle-certified hardware.

With transaction volumes growing by a double-digit percentage each year and twofold spikes in volume during peak sales periods, Amazon needed to find a solution that was more scalable and easier to manage. In 2017, the FLASH organization decided to migrate all its Oracle databases to AWS services.

While migrating, the FLASH team had to ensure compliance with Sarbanes-Oxley (SOX) guidelines and tax audits. “We have more than 20 years of financials, subledger details, and other data that is critical to reporting and audits,” says Phani Bhamidipati, senior manager of software development at Amazon. “It had to be accessible during and after the migration.”

The FLASH team reviewed AWS services with the Amazon security team and created service templates using AWS CloudFormation that met data-protection requirements. To verify that historical data transferred accurately, the team used AWS Database Migration Service (AWS DMS) to perform row-by-row validations and added random sampling of the data to the tests for additional verification. To ensure SOX compliance, the FLASH team sent recommendations to service owners on access-control mechanisms and encryption parameters. The team used Amazon Simple Storage Service (Amazon S3) for inexpensive, long-term storage of relational and nonrelational data.
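As an illustration of the kind of random-sampling check described above, the sketch below compares a sample of source rows against the migrated items in DynamoDB. The table name, key attribute, and the fetch_source_rows helper are hypothetical stand-ins, not details of the FLASH implementation.

```python
# Hypothetical sketch of a random-sampling verification step; names and
# attributes are illustrative, not Amazon's actual implementation.
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("flash-subledger")  # hypothetical table name


def fetch_source_rows(sample_size):
    """Placeholder for pulling a random sample of rows from the source
    Oracle database (e.g., via a read replica); each row is a dict with a
    transaction_id key."""
    raise NotImplementedError


def verify_sample(sample_size=1000):
    """Compare sampled source rows with the migrated DynamoDB items."""
    mismatches = []
    for row in fetch_source_rows(sample_size):
        # Look up the migrated item by its primary key.
        item = table.get_item(
            Key={"transaction_id": row["transaction_id"]}
        ).get("Item")
        # DynamoDB returns numbers as Decimal, so normalize before comparing.
        if item is None or item.get("amount") != Decimal(str(row["amount"])):
            mismatches.append(row["transaction_id"])
    return mismatches
```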

Re-architecting FLASH and adopting highly available and reliable AWS services significantly improved database performance. Most of the critical services that were moved to Amazon DynamoDB saw a 40 percent reduction in latency, despite handling twice the number of transactions, and costs for those services dropped by 70 percent. Overall, the annual database operating costs for FLASH have remained the same despite provisioning higher capacity on AWS.

Additionally, the move reduced administrative overhead by 70 percent, enabling the engineering team to spend its time improving service performance through activities such as query optimization and performance analysis.

The elastic capacity of preconfigured database hosts on AWS eliminated the administrative overhead required to manage scaling. AWS services have abstracted the hardware away from the engineers, allowing them to focus on optimizing configurations. Using Amazon DynamoDB auto scaling enabled the team to reduce cost by dynamically responding to traffic spikes.
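A minimal sketch of how DynamoDB auto scaling can be configured through the Application Auto Scaling API is shown below; the table name, capacity bounds, and target utilization are illustrative values, not FLASH's actual settings.

```python
# Configure target-tracking auto scaling for a DynamoDB table's write
# capacity. Table name and numbers are hypothetical examples.
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/flash-ledger",          # hypothetical table name
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=100,
    MaxCapacity=4000,
)

# Track a target utilization so provisioned capacity follows traffic spikes.
autoscaling.put_scaling_policy(
    PolicyName="flash-ledger-write-scaling",
    ServiceNamespace="dynamodb",
    ResourceId="table/flash-ledger",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,  # keep consumed capacity near 70% of provisioned
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```

With a policy like this, capacity scales up automatically during peak sales periods and back down afterward, which is how auto scaling can reduce cost while absorbing traffic spikes.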

Since the migration of FLASH to DynamoDB, each team in the FLASH organization manages its own compute and storage infrastructure. To assist them, AWS solutions architects developed guidelines concerning the optimal proportion of instance types based on service growth, usage cycles, and pricing. Using preconfigured templates enabled each service team to deploy AWS instances with compliant, tested configurations.
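As a rough sketch of what a template-driven deployment might standardize, the example below wraps Amazon RDS instance creation in shared, compliant defaults such as encryption at rest and backup retention. The identifiers, instance class, and parameter values are hypothetical, not the FLASH team's actual templates.

```python
# Hypothetical helper that applies shared, pre-approved defaults when a
# service team provisions a database instance. All values are illustrative.
import boto3

rds = boto3.client("rds")

# Defaults a service team inherits rather than choosing ad hoc.
COMPLIANT_DEFAULTS = {
    "Engine": "postgres",
    "DBInstanceClass": "db.r5.xlarge",   # example size from a guideline tier
    "AllocatedStorage": 500,             # GiB
    "StorageEncrypted": True,            # encryption at rest
    "BackupRetentionPeriod": 35,         # days of automated backups
    "DeletionProtection": True,
    "PubliclyAccessible": False,
}


def deploy_service_database(service_name: str) -> None:
    """Create a database instance for a service using the shared defaults."""
    rds.create_db_instance(
        DBInstanceIdentifier=f"{service_name}-db",
        MasterUsername="flash_admin",     # placeholder user name
        ManageMasterUserPassword=True,    # assumes AWS-managed credentials
        **COMPLIANT_DEFAULTS,
    )


# Example usage: deploy_service_database("receivables")
```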

The shift to managed database services has liberated engineers to work more efficiently. “Previously, anytime you changed a schema or addressed a performance issue, you had to work with a database administrator,” says Bhamidipati. “Since we adopted Amazon DynamoDB, the engineers have the freedom to make the changes they need directly.”

Few business functions are more critical than keeping finances in order—and when your business processes transactions in the billions, migrating to a new database is no easy feat. “Through careful design and auditing, we were able to leverage AWS for historical and ongoing financial reporting that is accurate and SOX-compliant,” says Phani Bhamidipati, senior manager of software development at Amazon. “Using Amazon DynamoDB and Amazon RDS, we can handle any level of transaction volume. When hundreds of millions of people are relying on you, nothing could be more important.”