AWS for Industries

Building a core banking system with Amazon Quantum Ledger Database

Background on core banking systems

Banks around the world rely on core banking systems as their system of record (SoR). A core banking system includes a ledger for all money movement transactions, organized into accounts with computed balances, along with the relevant business logic and workflows for each product.

The transactions in the ledger are based on double-entry accounting methods and are added throughout the business day. They originate from (1) actions by the bank, such as accruing interest or charging a fee, (2) actions by the customer, such as initiating a payment or a money transfer, and (3) batch processing of payments from payment networks such as Visa/Mastercard, ATM networks, or wire.

Each account is assigned a product template, with specific business logic and workflows configured to the bank's specifications. For example, a credit card account will authorize an incoming transaction if it falls below the customer's credit limit, if it was made with the customer's chip card, and if the customer's zip code was entered correctly at the point of sale.

Account balances and transaction history are used for ongoing reporting to customers, bank operators, and regulators. These systems are also accessed regularly by bank employees to support the customer.

How core banking systems were built historically

Core banking systems were introduced in the 1960s and 1970s to share account data across a bank's physical branch network, enabling customers to transact at any branch. Account balances were calculated at the end of the business day. With networking in its infancy, banks relied on monolithic architectures that combined the bank's data and business logic on the same mainframe machines, as this was the only configuration capable of handling the number of connections and simultaneous computations at the time.

These systems were built for resiliency and high performance, which necessitated a narrow design based on the product and regulatory requirements of the time while also meeting peak demand, namely end-of-month reconciliation. The resulting systems were monolithic and rather rigid, yet able to handle enormous volumes of transactions at scale as long as configuration changes were minimal.

As consumer banking habits changed in the ‘80s and ‘90s, banks chose to augment their legacy mainframe systems to support ATMs and call centers, supplementing branch tellers and support teams. In the ‘00s, the advent of web and mobile banking drove banks to augment their legacy cores again to support these new direct-to-consumer interfaces and self-service models.

Challenges of the historical design

While a few banks used the technology waves of the ‘80s, ‘90s and ‘00s to replace their legacy mainframes with distributed computing systems, the vast majority stuck with the tried-and-true mainframe, which they rely on to this day.

Today, the typical core banking system is monolithic with rigid product parameters and transaction settings, runs on expensive mainframe hardware, and relies on a declining population of COBOL developers. The vendors who built these tried-and-true solutions have been slow to adapt them to modern architectures and designs, emphasizing reliability over innovation and dependability over agility, with little consideration given to reducing cost. Banks that want to create personalized financial products or streamline back-office processes struggle to do so within these constraints.

How core banking systems are being built today

Over the last decade, as banks and Fintechs have embraced modern cloud and microservices architectures in other parts of their infrastructure, they have begun to disentangle the core banking system from its original monolithic design. Business logic is spread over multiple services in the microservices architecture, with each microservice using a custom database suited for its specific need. Modern queues like Kafka have replaced proprietary vendor-licensed queues. API interfaces have been published to enable application development for consumers, bank operators, customer support, and third parties. Proprietary vendor-licensed relational databases have been replaced by open-source relational databases.

Relational database as a ledger

Relational databases have been the choice of architects building core banking ledgers over the last couple of decades, primarily because of their reliability and ubiquity.

A core banking ledger database stores transaction data and requires that the data stored in it be secure and trustworthy. To provide that security and trust, database architects using relational databases have maintained a separate journal in the database to record modifications to data, effectively making the transaction data immutable. They have also built additional mechanisms to verify that data has not been inadvertently changed or modified, increasing the cost and complexity of the system's design.
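To make that pattern concrete, here is a minimal sketch of the kind of hand-built, hash-chained journal architects have had to layer on top of a relational database to approximate immutability and verifiability. It uses SQLite for brevity, and the table and column names are assumptions for this illustration, not a prescribed schema.

```python
import hashlib
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Application table holding the current state of each ledger entry.
    CREATE TABLE ledger_entries (id INTEGER PRIMARY KEY, account_id TEXT, amount_cents INTEGER);
    -- Hand-built journal: append-only history of every change, hash-chained for verification.
    CREATE TABLE journal (seq INTEGER PRIMARY KEY AUTOINCREMENT, entry_id INTEGER,
                          payload TEXT, prev_hash TEXT, this_hash TEXT);
""")

def record_entry(entry_id, account_id, amount_cents):
    """Write the entry and append a hash-chained journal record in one transaction."""
    payload = json.dumps({"id": entry_id, "account_id": account_id, "amount_cents": amount_cents})
    with conn:  # BEGIN ... COMMIT
        prev = conn.execute("SELECT this_hash FROM journal ORDER BY seq DESC LIMIT 1").fetchone()
        prev_hash = prev[0] if prev else ""
        this_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        conn.execute("INSERT INTO ledger_entries VALUES (?, ?, ?)", (entry_id, account_id, amount_cents))
        conn.execute("INSERT INTO journal (entry_id, payload, prev_hash, this_hash) VALUES (?, ?, ?, ?)",
                     (entry_id, payload, prev_hash, this_hash))

def verify_journal():
    """Recompute the hash chain to detect whether any journal record has been altered."""
    prev_hash = ""
    for _seq, _entry_id, payload, _prev, this_hash in conn.execute("SELECT * FROM journal ORDER BY seq"):
        if hashlib.sha256((prev_hash + payload).encode()).hexdigest() != this_hash:
            return False
        prev_hash = this_hash
    return True

record_entry(1, "acct-123", 4500)
print(verify_journal())  # True unless the journal has been tampered with
```

All of this plumbing (and its operational upkeep) is exactly the cost and complexity referred to above.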

In addition, relational databases, with their rigid table schemas, are more complicated to manage because they are often a poor fit for the microservices pattern, where frequent changes are made both to the application and to the database schema.

And finally, to maintain transaction atomicity and isolation, a traditional relational database system implements a locking mechanism over some portion of the database (e.g., rows, tables, pages), makes the necessary changes, and then releases the locks. This locking mechanism introduces additional overhead and can degrade the performance of requests that are competing for the same resource.
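As a sketch of that pattern, the following hypothetical funds transfer (shown with psycopg2 against an assumed accounts table) locks the affected rows with SELECT ... FOR UPDATE, applies the changes, and holds the locks until commit, blocking any competing transaction that touches the same rows.

```python
import psycopg2

# Hypothetical connection and schema: accounts(account_id TEXT PRIMARY KEY, balance_cents BIGINT)
conn = psycopg2.connect("dbname=corebank user=ledger")

def transfer(src, dst, amount_cents):
    with conn, conn.cursor() as cur:
        # Lock both rows for the duration of the transaction (pessimistic locking).
        # Competing transfers touching the same accounts block here until commit.
        cur.execute(
            "SELECT account_id, balance_cents FROM accounts "
            "WHERE account_id IN (%s, %s) ORDER BY account_id FOR UPDATE",
            (src, dst),
        )
        balances = dict(cur.fetchall())
        if balances[src] < amount_cents:
            raise ValueError("insufficient funds")
        cur.execute("UPDATE accounts SET balance_cents = balance_cents - %s WHERE account_id = %s",
                    (amount_cents, src))
        cur.execute("UPDATE accounts SET balance_cents = balance_cents + %s WHERE account_id = %s",
                    (amount_cents, dst))
    # Locks are released only when the `with conn` block commits (or rolls back).
```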

Adding purpose-built DBs to the design: Amazon Quantum Ledger Database — why do it?

In this post, we demonstrate the next paradigm shift for core banking systems: the adoption of Amazon Quantum Ledger Database (QLDB), a purpose-built database designed to manage transactional ledgers in an immutable, cryptographically verifiable, secure, and performant manner.

Incorporating QLDB into your design helps solve many of the problems listed in the previous sections. The database has immutability and verifiability built in, so you do not need to build a separate journal to record changes or your own mechanisms to verify data. It also stores each transaction as a document without a predefined data model, supports SQL-like query capabilities, and delivers full ACID transactions. Concurrency control in QLDB is implemented using Optimistic Concurrency Control (OCC), which operates on the principle that multiple transactions can frequently complete without interfering with each other. With OCC, transactions in QLDB don't acquire locks on data and operate with full serializable isolation. Finally, QLDB can efficiently stream data downstream to support additional use cases, such as real-time analytics or event-driven applications that react to changes in the ledger as they happen.
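As an illustration, the following sketch uses the Amazon QLDB driver for Python (pyqldb) to write a card transaction. The ledger name, table name, and document fields are assumptions for this example; the driver's execute_lambda call wraps the work in a QLDB transaction and transparently retries it if an OCC conflict is detected.

```python
from pyqldb.driver.qldb_driver import QldbDriver

# Hypothetical ledger and table names ("core-banking-ledger", "Transactions").
driver = QldbDriver(ledger_name="core-banking-ledger")

def record_card_transaction(txn_doc):
    """Insert one authorized card transaction.

    execute_lambda runs the function in a QLDB transaction with full serializable
    isolation; on an OCC conflict the driver automatically retries the whole
    lambda, so no explicit locking is needed.
    """
    def insert(txn_executor):
        txn_executor.execute_statement("INSERT INTO Transactions ?", txn_doc)

    driver.execute_lambda(insert)

record_card_transaction({
    "accountId": "acct-123",
    "merchant": "Restaurant XYZ",
    "amountCents": 4500,
    "status": "AUTHORIZED",
})
```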

Key architecture considerations

The following diagram showcases how one might leverage Amazon QLDB as the ledger for real-time transaction data. Here, we separate reads from writes so that each can scale independently as needed. Real-time transactions are processed and written to Amazon QLDB, which acts as the immutable system of record, and data is replicated in real time from Amazon QLDB into a secondary database for read-heavy workloads.

Let’s walk through a scenario to understand the components in the diagram and how traffic flows through them. Consider a customer of Bank XYZ (the issuing bank) who uses a credit card issued by the bank to make a transaction at a restaurant (the merchant).

Transaction data VPC

All the microservices that interface directly with Amazon QLDB, and the services that stream data into and out of Amazon QLDB, are logically contained in their own Amazon Virtual Private Cloud (VPC).

Amazon Managed Streaming for Apache Kafka (Amazon MSK) – In the scenario we just described, the restaurant transaction is transmitted in real time through one of the payment networks (e.g., Mastercard, Visa) and issuer processors (e.g., Global Payments) into Apache Kafka, to be processed by microservices that consume the stream data. Apache Kafka, a widely used streaming platform, is preferred over other messaging platforms because of its ability to scale to millions of transactions per second and because messages can be persisted (and replayed if required) even after they have been consumed.
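As a minimal sketch of that ingestion path (the broker addresses and topic name are assumptions for this example), a producer might publish an authorization request to Amazon MSK like this:

```python
import json
from kafka import KafkaProducer

# Hypothetical MSK bootstrap brokers and topic name.
producer = KafkaProducer(
    bootstrap_servers=["b-1.msk.example.internal:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One incoming authorization request, as it might arrive from the issuer processor.
producer.send("card-authorization-requests", {
    "cardNumberToken": "tok_9f8e7d",
    "merchant": "Restaurant XYZ",
    "amountCents": 4500,
})
producer.flush()
```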

Amazon Elastic Kubernetes Service (Amazon EKS) – Most applications developed today are built as microservices, with each microservice solving a specific business problem. In this scenario, multiple microservices are built on Amazon EKS: one to manage customers, one to manage accounts, and one to process transactions. The transaction-processing microservice consumes transactions in real time from Amazon MSK and authorizes each one based on various factors, including (1) ensuring the transaction amount does not exceed the card limit and (2) validating that the merchant and amount are not fraudulent, before responding to the merchant with an “Approved” or “Declined” decision.
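A simplified sketch of such a transaction-processing consumer is shown below; the topic, broker addresses, card-limit lookup, and fraud check are all stand-ins for what a real microservice would call.

```python
import json
from kafka import KafkaConsumer

# Stand-in for the account-management service's card-limit lookup.
CARD_LIMITS_CENTS = {"tok_9f8e7d": 500000}

consumer = KafkaConsumer(
    "card-authorization-requests",          # hypothetical topic
    bootstrap_servers=["b-1.msk.example.internal:9092"],
    group_id="transaction-processor",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

def looks_fraudulent(txn):
    return False  # placeholder: a real system calls a fraud-scoring service here

def authorize(txn):
    """Approve if the amount is within the card limit and the transaction is not flagged as fraud."""
    within_limit = txn["amountCents"] <= CARD_LIMITS_CENTS.get(txn["cardNumberToken"], 0)
    return "Approved" if (within_limit and not looks_fraudulent(txn)) else "Declined"

for message in consumer:
    decision = authorize(message.value)
    # In a real microservice, the decision is returned to the payment network and the
    # authorized transaction is written to Amazon QLDB (see the next component).
    print(decision, message.value)
```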

Amazon Quantum Ledger Database (Amazon QLDB) – Once the transaction has been processed by the microservices layer, it is written to Amazon QLDB. Amazon QLDB is built upon an append-only log called a journal. Once a transaction is committed to the ledger, it cannot be modified or overwritten, which provides an immutable record of every insert, update, delete, and select ever committed to the ledger, along with access to every revision of every document. QLDB also provides a cryptographic verification feature that enables anyone to mathematically prove the integrity of the transaction history. This feature is very useful when the bank needs to prove the integrity of the data to a third party such as a regulator or auditor, i.e., to prove that the transaction history has not been altered, tampered with, or falsified once written to Amazon QLDB.
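The sketch below hints at how that verification can be requested with the AWS SDK for Python (boto3): a ledger digest is retrieved, and a document revision plus its proof hashes are fetched so the digest can be recomputed and compared. The ledger name and identifiers are placeholders for this example.

```python
import boto3

qldb = boto3.client("qldb")

LEDGER = "core-banking-ledger"  # hypothetical ledger name

# The block address and document id are obtained by querying the committed view of the
# table (e.g., SELECT blockAddress, metadata.id FROM _ql_committed_Transactions ...).
block_address_ion = '{strandId: "...", sequenceNo: 0}'   # placeholder
document_id = "..."                                      # placeholder

digest = qldb.get_digest(Name=LEDGER)

revision = qldb.get_revision(
    Name=LEDGER,
    BlockAddress={"IonText": block_address_ion},
    DocumentId=document_id,
    DigestTipAddress=digest["DigestTipAddress"],
)
# revision["Proof"] holds the chain of hashes needed to recompute the ledger digest from
# this document revision; if the recomputed value equals digest["Digest"], the revision
# is proven to be untampered.
```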

Amazon Kinesis Data Streams – Setting up a QLDB stream captures every document revision that is committed to the journal and delivers it to Amazon Kinesis Data Streams in real time. Streaming lets you use QLDB as a single, verifiable source of truth while also integrating the journal data with other services.
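Creating the stream is a one-time API call; the following sketch uses boto3's StreamJournalToKinesis operation with placeholder names and ARNs.

```python
from datetime import datetime, timezone
import boto3

qldb = boto3.client("qldb")

# Hypothetical ledger name, stream name, IAM role, and Kinesis stream ARN.
qldb.stream_journal_to_kinesis(
    LedgerName="core-banking-ledger",
    StreamName="core-banking-ledger-stream",
    RoleArn="arn:aws:iam::123456789012:role/qldb-stream-role",
    InclusiveStartTime=datetime(2021, 1, 1, tzinfo=timezone.utc),
    KinesisConfiguration={
        "StreamArn": "arn:aws:kinesis:us-east-1:123456789012:stream/qldb-journal-stream",
        "AggregationEnabled": True,  # KPL aggregation; consumers must de-aggregate records
    },
)
```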

AWS Lambda Stream Consumer – Here, an AWS Lambda function implements a Kinesis Data Streams consumer and writes the data in real time to the secondary database (i.e., Amazon Aurora).
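A simplified version of that consumer might look like the following. It assumes KPL aggregation is enabled on the QLDB stream and uses the aws-kinesis-agg and amazon.ion libraries to unpack the records; the Aurora write is left as a placeholder.

```python
import base64

import amazon.ion.simpleion as ion
from aws_kinesis_agg.deaggregator import deaggregate_records

def handler(event, context):
    """Lambda consumer for the QLDB journal stream (sketch).

    QLDB stream records are Amazon Ion documents and, with aggregation enabled,
    are packed with the Kinesis Producer Library, so they must be de-aggregated
    and Ion-decoded before use.
    """
    for record in deaggregate_records(event["Records"]):
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = ion.loads(payload)
        if doc["recordType"] != "REVISION_DETAILS":
            continue  # skip CONTROL and BLOCK_SUMMARY records
        revision = doc["payload"]["revision"]
        if "data" in revision:  # deletions carry metadata only
            upsert_into_aurora(doc["payload"]["tableInfo"]["tableName"], revision["data"])

def upsert_into_aurora(table_name, data):
    # Placeholder: write the revision into the secondary Aurora PostgreSQL database,
    # e.g., with psycopg2 and an INSERT ... ON CONFLICT DO UPDATE statement.
    ...
```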

Apps and Data VPC

All the downstream applications that perform actions based on real-time data from Amazon QLDB are logically contained in this VPC (not shown in the diagram). A secondary database offers various advantages, including offloading certain read patterns from the ledger database.

Secondary database – Amazon QLDB addresses the needs of high-performance online transaction processing (OLTP) workloads and is optimized for specific query patterns, namely writes and equality seeks against indexes. It is critical to design applications and their data models to work with these query patterns. For online analytical processing (OLAP) queries, reporting queries, or text search, customers can stream data through Amazon Kinesis Data Streams using QLDB's streaming feature into a secondary database that is optimized for those query patterns. In this case, the secondary Amazon Aurora PostgreSQL database is used for tasks such as identifying all the transactions at a certain merchant for which rewards (or points) are to be added to the customer's account once the transactions have cleared during end-of-day processing.
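For instance, the end-of-day rewards lookup described above maps naturally onto a relational query against the replicated data. The following sketch (with assumed connection details and table layout) shows the idea.

```python
import psycopg2

# Hypothetical Aurora PostgreSQL connection and a transactions table replicated
# from the QLDB stream by the Lambda consumer above.
conn = psycopg2.connect(host="aurora-cluster.example.internal", dbname="ledger_replica",
                        user="reporting", password="...")

def cleared_transactions_for_merchant(merchant_id, business_date):
    """Reporting-style query that would be awkward to run against QLDB directly."""
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            SELECT account_id, transaction_id, amount_cents
            FROM transactions
            WHERE merchant_id = %s
              AND status = 'CLEARED'
              AND cleared_date = %s
            """,
            (merchant_id, business_date),
        )
        return cur.fetchall()
```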

Batch processing

Transactions that are not cleared in real time are cleared in overnight batch processes. In our scenario, the transaction comes from a customer dining at a restaurant: while it is authorized in real time for the specific dollar amount of the services rendered, the customer typically adds a tip after the transaction has been authorized. These updated transaction amounts are uploaded as a batch file at the end of the day to be processed by the issuing bank.

AWS Transfer for SFTP (AWS SFTP) – Issuing banks can support SFTP as a batch file upload mechanism using AWS SFTP. The service can be configured to store the files in either Amazon S3 or Amazon Elastic File System (Amazon EFS).

Amazon Simple Storage Service (Amazon S3) – The batch files received through AWS SFTP are stored in an Amazon S3 bucket. Amazon S3 supports encryption at rest for all data stored and can send a notification to a Lambda function every time a new file is uploaded.

AWS Lambda – The batch file uploaded to Amazon S3 is processed by a Lambda function that reads the file and publishes its transactions to Apache Kafka, where they are consumed and processed by the microservices.
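A simplified sketch of that Lambda function is shown below; the topic name, broker addresses, and the assumption of one JSON record per line are illustrative only.

```python
import json

import boto3
from kafka import KafkaProducer

s3 = boto3.client("s3")

# Hypothetical MSK brokers and topic for cleared (end-of-day) transactions.
producer = KafkaProducer(
    bootstrap_servers=["b-1.msk.example.internal:9092"],
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def handler(event, context):
    """Triggered by the S3 upload notification for each settlement batch file."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
        # Assumes one JSON document per line; real settlement files are usually
        # fixed-width or ISO 8583-derived formats that need a dedicated parser.
        for line in body.splitlines():
            producer.send("cleared-transactions", json.loads(line))
    producer.flush()
```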

Network connectivity

AWS Transit Gateway is a network transit hub that connects VPCs to AWS Direct Connect using an AWS Direct Connect gateway. Payment network routers, such as a Mastercard Interface Processor (MIP), are hosted on the customer's on-premises network or in a co-location facility and can be reached from the VPCs through AWS Transit Gateway and the AWS Direct Connect connection.

Conclusion

In this blog post, we have shown how Amazon QLDB can be used to build a core banking ledger system and why it is a fit-for-purpose database suited to the needs of the financial services industry. In the next blog post, we will dive deeper into how to go about building a core banking ledger.

Pradeep Dhananjaya

Pradeep Dhananjaya is a Banking Specialist Solutions Architect in the Worldwide Financial Services industry group at AWS. He spends much of his time working with fintechs and traditional banks solving for their business problems with technology. Prior to joining AWS, Pradeep spent more than a decade building technology solutions at JP Morgan Chase and Morgan Stanley.

Dan Blaner

Dan Blaner is a Senior Solutions Architect specializing in Amazon QLDB. He spends his days helping customers understand what ledgers are all about and helping them design and build system-of-record applications on Amazon QLDB.

Ben Weiss

Ben Weiss is a Banking Specialist on the Worldwide Financial Services team at AWS. Prior to AWS, Ben led financial products at Carta, where he led their efforts in banking and lending to startups and venture capital firms. Ben previously built a startup in consumer banking that became a division of Cross River Bank and helped lead Two Sigma Investments’ activities in the insurance space. He’s also been a venture capitalist at Greylock Partners, Social Leverage, and Two Sigma Ventures. Ben holds a BSc in Operations Research and Industrial Engineering from Cornell University and an MBA from INSEAD.