Harnessing the scale of AWS for financial simulations
In the dynamic world of finance, numerical simulations have become indispensable tools for tackling complex problems where analytical solutions are elusive. However, these simulations can be incredibly time-consuming and computationally intensive, especially when dealing with large-scale tasks.
This is where the benefits of cloud computing come into play. Clouds like AWS offer a plethora of advantages that have made them increasingly popular in the financial industry. Flexible pricing models, massive compute environments, and versatile instance choices allow financial institutions to scale their simulations seamlessly and make use of multiple AWS Regions and Availability Zones for both scale and resiliency.
In this blog post, we’ll show how you can use AWS to execute large-scale financial simulations more easily. We’ll focus on a specific example: calculating option prices using Monte Carlo simulations with the QuantLib open-source library. QuantLib is a powerful tool for modeling, trading, and risk management in the financial industry. Finally, there’s a link to a workshop you can follow – step-by-step – to deploy this in your own AWS account.
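To make the workload concrete, here’s a minimal sketch of the calculation each task performs, using QuantLib’s Python bindings and its Longstaff-Schwartz Monte Carlo engine for American exercise. The market data, dates, and sample counts below are illustrative assumptions, not values from the workshop.

```python
import QuantLib as ql

# Illustrative valuation date and flat market data (assumed for this sketch)
today = ql.Date(15, ql.May, 2024)
ql.Settings.instance().evaluationDate = today

spot = ql.QuoteHandle(ql.SimpleQuote(100.0))
risk_free = ql.YieldTermStructureHandle(
    ql.FlatForward(today, 0.05, ql.Actual365Fixed()))
dividends = ql.YieldTermStructureHandle(
    ql.FlatForward(today, 0.01, ql.Actual365Fixed()))
vol = ql.BlackVolTermStructureHandle(
    ql.BlackConstantVol(today, ql.TARGET(), 0.20, ql.Actual365Fixed()))
process = ql.BlackScholesMertonProcess(spot, dividends, risk_free, vol)

# An American put, exercisable any time up to expiry
payoff = ql.PlainVanillaPayoff(ql.Option.Put, 100.0)
exercise = ql.AmericanExercise(today, ql.Date(15, ql.May, 2025))
option = ql.VanillaOption(payoff, exercise)

# Least-squares (Longstaff-Schwartz) Monte Carlo pricing engine
option.setPricingEngine(ql.MCAmericanEngine(
    process, "pseudorandom",
    timeSteps=100, requiredSamples=100000, seed=42))
print(f"American put NPV: {option.NPV():.4f}")
```

Each asset in the input file becomes one such independent pricing task, which is what makes the workload embarrassingly parallel and a good fit for scale-out compute.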
Optimizing financial simulations with AWS Batch and AWS Lambda
When it comes to running large-scale financial simulations, the choice of compute service can have a significant impact on the overall efficiency and cost-effectiveness of the workflow. In this regard, AWS Batch and AWS Lambda offer complementary solutions that can be tailored to different workload requirements.
AWS Batch, a job scheduler and resource orchestrator, is well-suited for running batch-oriented workflows at scale. By automatically provisioning the appropriate compute resources and managing the execution of jobs, Batch enables financial institutions to run their simulations in a cost-effective and efficient manner, particularly for workloads that require longer durations or need to handle high concurrency.
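To give a feel for how work reaches Batch, here’s a minimal boto3 sketch that submits an array job; the job name, queue, and job definition are hypothetical placeholders rather than the workshop’s actual resources.

```python
import boto3

batch = boto3.client("batch")

# Submit an array job: Batch launches 'size' child jobs, each of which
# reads its own index from the AWS_BATCH_JOB_ARRAY_INDEX environment
# variable to pick which chunk of the input to price.
# The job name, queue, and job definition below are placeholders.
response = batch.submit_job(
    jobName="quantlib-pricing",
    jobQueue="simulation-queue",
    jobDefinition="quantlib-pricing-job:1",
    arrayProperties={"size": 100},
)
print("Submitted array job:", response["jobId"])
```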
On the other hand, for smaller jobs that need a fast turnaround, typically within a couple of minutes, AWS Lambda can be the preferred choice. Lambda’s serverless architecture and rapid response times make it an ideal solution for scenarios where sub-second latency is a critical requirement, like real-time risk analysis or portfolio optimization.
The selection of the appropriate compute service depends on the specific workload requirements. For fast response times (under 1 second) or high throughput (over 500 transactions per second), Lambda is the choice we’d recommend. However, for workloads that run longer than 5 minutes, or that need to scale quickly to over 20,000 concurrent jobs, Batch is a more suitable option. Batch is also the recommended service when you need a scheduler to manage multiple workloads.
Solution architecture
Figure 1 illustrates a solution architecture for running cloud-native financial simulations on AWS across multiple Regions. The simulation workflow to calculate American Option prices using QuantLib works like this:
- Upload an input CSV file with financial asset information to the input Amazon Simple Storage Service (Amazon S3) buckets for AWS Batch or AWS Lambda to process.
- Use Amazon EventBridge to monitor the designated Amazon S3 buckets.
- Once the input data lands in the S3 input bucket, an AWS Batch job is triggered through EventBridge, or a Lambda function is invoked through the bucket’s S3 event notification, depending on the location of the input file.
- The job splits the input file into multiple smaller files and puts them on S3 so they can be processed in parallel. For Lambda, we use the same event-driven process to run each chunk. For Batch, we run the jobs in parallel using Array Jobs, for efficiency. If the number of assets is under a threshold (configurable through an environment variable), the input file is processed directly; the sketch after this list shows the splitting step.
- The result files are written to the result S3 bucket under the same path as the input file. Users retrieve results by copying them from the result S3 bucket.
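As one way the splitting step could look on the Lambda path, here’s a minimal handler sketch; the chunk prefix and the `price_assets` helper are hypothetical, and chunks written back to the input bucket re-trigger the same handler through the S3 event notification.

```python
import csv
import io
import os

import boto3

s3 = boto3.client("s3")
# Split threshold, configurable through an environment variable as in the post
CHUNK_SIZE = int(os.environ.get("CHUNK_SIZE", "100"))


def price_assets(header, assets):
    """Placeholder for the QuantLib pricing step (see the earlier sketch)."""


def handler(event, context):
    # Triggered by an S3 event notification on the input bucket
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    rows = list(csv.reader(io.StringIO(body)))
    header, assets = rows[0], rows[1:]

    # Small files are priced directly instead of being split further
    if len(assets) <= CHUNK_SIZE:
        price_assets(header, assets)
        return

    # Fan out: each chunk lands in the input bucket and invokes this
    # same handler again via the S3 event notification
    for i in range(0, len(assets), CHUNK_SIZE):
        out = io.StringIO()
        writer = csv.writer(out)
        writer.writerow(header)
        writer.writerows(assets[i:i + CHUNK_SIZE])
        s3.put_object(
            Bucket=bucket,
            Key=f"chunks/{key}.{i // CHUNK_SIZE}.csv",
            Body=out.getvalue(),
        )
```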
The common workflow in financial simulations is to keep experimenting with – and optimizing – a compute algorithm. Adopting continuous integration and continuous deployment (CI/CD) is a natural fit. Figure 2 illustrates the high-level workflow for CI/CD development.
AWS CodePipeline is a continuous delivery service that allows you to model, visualize, and automate the steps required to release application software. With it, we model the full release process for building the code, deploying to pre-production environments, testing the application, and releasing it to production every time there is a code change.
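As a hedged illustration of what that can look like in code, here’s a minimal AWS CDK (Python) sketch of a pipeline that rebuilds and redeploys on every push; the repository name, branch, and connection ARN are placeholders, and the workshop may structure its pipeline differently.

```python
from aws_cdk import App, Stack
from aws_cdk.pipelines import CodePipeline, CodePipelineSource, ShellStep
from constructs import Construct


class SimulationPipelineStack(Stack):
    """Hypothetical stack that models the release process for the simulation code."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        CodePipeline(
            self, "Pipeline",
            synth=ShellStep(
                "Synth",
                # Repository, branch, and connection ARN are placeholders
                input=CodePipelineSource.connection(
                    "my-org/financial-simulations", "main",
                    connection_arn="arn:aws:codestar-connections:"
                                   "us-east-1:111111111111:connection/example",
                ),
                commands=[
                    "pip install -r requirements.txt",
                    "npm install -g aws-cdk",
                    "cdk synth",
                ],
            ),
        )


app = App()
SimulationPipelineStack(app, "SimulationPipeline")
app.synth()
```

Deployment and test stages for the pre-production and production environments would then be added to this pipeline as additional stages.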
You can do this
We’ve published a workshop that offers a quick and effective implementation of this, which will take you about 20 minutes to set up.
You’ll have the opportunity to see for yourself the impressive performance of AWS Lambda, which processes hundreds of equities in about a minute, and AWS Batch, which processes tens of thousands of equities in less than ten minutes by chomping through 10 to 100 equities in each individual job.
Unlock the future of financial simulations with AWS
The cloud-native Monte Carlo simulation for American Option pricing using QuantLib we’ve outlined here is just the beginning. By using AWS, financial institutions can unlock new levels of efficiency, scalability, and innovation in their simulation workflows.
Ready to get started? Check out the published workshop to dive deeper into the step-by-step implementation details and see how you can apply these cloud-native techniques to your own financial simulations.
This hands-on experience empowers financial professionals to harness the full potential of cloud technologies, revolutionizing their approach to complex financial simulations.