Imagine the problems you can solve with virtually unlimited infrastructure


From weather modeling to genome mapping to the search for extraterrestrial intelligence, HPC has always been about solving the world’s most complex problems. But HPC has primarily been an on-premises business, and the engineers and researchers working on these applications have been constrained by the infrastructure capacity available on site, often limiting how many potential solutions they can explore.

High Performance Computing on AWS enables engineers, analysts, and researchers to think beyond the limitations of on-premises HPC infrastructure. AWS HPC solutions address the infrastructure capacity, secure global collaboration, technology obsolescence, and capital expenditure constraints associated with on-premises HPC clusters to give you the freedom to tackle the most challenging HPC workloads and get to your results faster.



Instantly launch or scale up High Performance Computing clusters on AWS. By eliminating job queue times and scaling your cluster as large as needed, when needed, you can reduce the time to market or publication.


Focus on applications and research output instead of infrastructure maintenance and upgrades. When AWS upgrades its hardware, you gain access immediately: simply update your cluster configuration file and relaunch to move to the latest instance types.
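When a newer instance generation ships, moving a cluster to it can be a one-line change. Here is a minimal sketch in the style of the AWS ParallelCluster classic INI configuration (the key names follow that tool; the instance types, key name, and queue sizes are placeholders, not a tested configuration):

```ini
[cluster default]
key_name              = my-keypair     # placeholder EC2 key pair
base_os               = alinux2
scheduler             = slurm
master_instance_type  = c5.xlarge
# Point the compute fleet at newer hardware by editing this one line,
# then update and relaunch the cluster:
compute_instance_type = c5n.18xlarge
initial_queue_size    = 0              # scale from zero...
max_queue_size        = 64             # ...up to 64 instances on demand
```

With ParallelCluster the edit is applied via the `pcluster update` flow; other cluster tools have analogous mechanisms.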


Let your research dictate infrastructure, not the other way around. With the flexible configuration options AWS provides, you can start with your hypothesis and create HPC clusters that are optimized for your unique application requirements – GPU today, CPU tomorrow.


In addition to core service options for compute, storage, and databases, take advantage of the breadth of services and partners in the AWS ecosystem to enhance your workload. Options range from familiar solutions like NICE and Thinkbox to experimental builds with AWS Lambda.


Collaborate without compromising on security. Every AWS service provides encryption and options to grant granular permissions for each user while maintaining the ability to share data across approved users. Build solutions compliant with HIPAA, FISMA, FedRAMP, PCI, and more.


Let every dollar contribute meaningfully to your mission. Choose from a range of AWS services and only pay for what you use. No more paying for idle compute capacity, no long-term contracts, and no complex licensing involved. Optimize costs further with Amazon EC2 Spot Instances. 

  • Life Sciences

    Genomics

    The Algorithms, Machine, and People (AMP) Lab at UC Berkeley leveraged AWS to quickly scale the compute resources needed to analyze the algorithms that are used in genomics work. Learn more >>

    Computational Chemistry

    Novartis built a platform leveraging AWS to run approximately 87,000 compute cores to conduct 39 years of computational chemistry in 9 hours for a cost of $4,232. Learn more >>

    Biological Systems Simulation

    Penn State moved its research portal to AWS and made it easy for 6,000 researchers worldwide to design more than 50,000 synthetic DNA sequences. Learn more >>

    Protein Modeling

    The Computer Science department at San Francisco State University used Amazon EC2 to reduce costs and turnaround time to run machine learning workloads. Learn more >>

  • Financial Services

    Capital Management and Reporting

    MAPFRE saved 88% on infrastructure costs and gained the ability to spin up a supercomputer on demand and shut it down when finished. Learn more >>

    Risk Management Portfolio Optimization

    Yuanta Securities Korea benefits from increased speed and lower costs by running financial models on AWS to assess market risk. Learn more >>

    Contract Pricing and Valuation  

    Aon Benfield moved its infrastructure to AWS and built a processing system that reduced policy recalculation time from hours or days to minutes. Learn more >>

  • Design and Engineering

    Electronics Design Automation

    Cadence Design Systems used AWS to isolate workloads from one another and ensure users and applications didn’t compete for resources, resulting in reduced regression times, quicker iterations, and a renewed focus on optimization and agility. Learn more >>

    Computational Fluid Dynamics (CFD)

    TLG Aerospace used EC2 Spot Instances to access more memory and cores at a lower cost, allowing them to scale the number and size of increasingly demanding simulations. Learn more >>

    Engineering Simulation

    Ansys ran a simulation with Enhanced Networking-compatible EC2 instances and demonstrated near-ideal scalability well past 1000 cores and a reduced overall solution time even beyond 2000 cores. Learn more >>

    3D Rendering

    ZeroLight used AWS GPU instances to build an award-winning 3D car configuration tool to create a totally new, interactive user experience for customers on the showroom floor to select and configure their next automobile. Learn more >>

  • Energy & Earth Sciences

    Weather Simulation

    The Weather Company redesigned its big data platform, forecasting systems, and applications to run natively in a cloud environment and reduced its on-premises footprint from 13 to 6 data centers, freeing engineers to focus on network and application efficiency. Learn more >>

    Reservoir Simulation

    Rock Flow Dynamics used on-demand computing resources to run workloads to optimize the location of oil wells and water injection wells. What would have taken several years to complete was done over a 12-day period using AWS resources. Learn more >>

    Geographic Information Systems (GIS)

    DigitalGlobe used AWS to deliver petabytes of high-resolution Earth imagery, data, and analysis to its customers in weeks instead of months while saving on costs. Learn more >>

    Operations, Management, and Analytics

    Fugro Roames used AWS and Amazon EC2 Spot Instances to enable Ergon Energy to reduce the annual cost of vegetation management from AU$100 million to AU$60 million. Learn more >>
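As a back-of-the-envelope check on the Novartis figures quoted above (assuming the “39 years” means single-core sequential compute time, which the source does not state explicitly), the run implies roughly a 38,000x speedup at around 44% parallel efficiency, and about half a cent per core-hour:

```python
# Sanity-check arithmetic on the Novartis computational chemistry run:
# 39 years of sequential work finished in 9 hours on ~87,000 cores for $4,232.
HOURS_PER_YEAR = 365 * 24  # 8,760

sequential_hours = 39 * HOURS_PER_YEAR       # ~341,640 core-hours of work
wall_clock_hours = 9
cores = 87_000
total_cost = 4232

speedup = sequential_hours / wall_clock_hours           # ~38,000x
parallel_efficiency = speedup / cores                   # ~0.44
cost_per_core_hour = total_cost / (cores * wall_clock_hours)  # ~$0.0054

print(f"speedup ~{speedup:,.0f}x, efficiency ~{parallel_efficiency:.0%}, "
      f"~${cost_per_core_hour:.4f} per core-hour")
```

The sub-linear efficiency is unsurprising for a throughput workload of this size; the cost per core-hour is what made the experiment feasible at all.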

Compute

High Performance Computing workloads on AWS are run on virtual servers, known as instances, enabled by Amazon Elastic Compute Cloud (Amazon EC2). Amazon EC2 provides secure, resizable compute capacity in the cloud and is offered in a wide range of instance types so you can choose one optimized for your workload.

Instance Type
Recommended HPC Use
Technical Highlights


Compute Optimized

Compute-intensive workloads, such as engineering and financial simulations, materials science and genomics processing, seismic processing, digital and analog simulations, fluid dynamics, computational lithography and metrology, weather simulations, and many more
  • Based on Intel Xeon Platinum (Skylake) processors
  • Provides up to 36 cores (72 vCPUs) and up to 192 GiB of memory
  • Supports Intel Advanced Vector Extension 512 (AVX-512) vector processing instruction set
  • C5n instances provide up to 100 Gbps of network bandwidth and up to 14 Gbps of dedicated bandwidth to Amazon EBS. C5n instances also feature an over 30% larger memory footprint compared to C5 instances.


General Purpose

Applications and workloads requiring a balance of memory-to-cores, and for general purpose computing such as HPC management nodes, license servers, remote login nodes, and others
  • Based on Intel Haswell and Broadwell processors
  • Provides up to 48 cores (96 vCPUs) and up to 384 GiB of memory


Memory Optimized

Applications that require a higher ratio of memory-to-cores than C5 or M4 instances, including memory-intensive engineering and scientific simulations, semiconductor mask verification, and many others
  • Based on Intel Broadwell processors
  • Provides up to 48 cores (96 vCPUs) and up to 768 GiB of memory


Accelerated Computing

Engineering simulations, computational finance, seismic analysis, molecular modeling, genomics, rendering, and other GPU compute workloads
  • Provides up to 8 NVIDIA Tesla V100 accelerators, powered by NVIDIA Volta GV100 GPUs
  • Up to 1 PFLOPS of mixed-precision, 125 TFLOPS of single-precision, and 62 TFLOPS of double-precision floating point performance
  • Up to 300 GB/s throughput with NVIDIA NVLink GPU-to-GPU interconnect
  • Up to 64 vCPUs, 488 GiB of DRAM, and 25 Gbps of dedicated aggregate network bandwidth


Accelerated Computing

Parallel, hardware accelerated applications including video analytics, image processing, financial computing, genomics, and accelerated data analytics and search
  • Provides up to 8 Xilinx Virtex UltraScale+ VU9P FPGA devices in a single EC2 instance


Accelerated Computing

High performance graphical applications, including graphical remote desktops, 3D modeling and simulation, medical and geospatial imaging, and video content delivery
  • Provides up to 4 NVIDIA Tesla M60 accelerators, powered by NVIDIA GM204 (Maxwell) GPUs
  • Optimized for graphics processing and remote visualization
  • Available with Amazon AppStream 2.0, a fully managed application streaming service allowing pre- and post-processing of HPC workloads. Deliver HPC visualization applications to large groups of users on any desktop with an HTML5 browser.
  • Utilized by Amazon WorkSpaces Graphics bundles, which enable GPU-accelerated virtual Windows desktops in the cloud. WorkSpaces Graphics bundles are designed for engineers and 3D application developers to use as an alternative to expensive graphics-capable workstations.


Memory Optimized

Applications that require large amounts of memory per core, including in-memory analytics, graph and sparse matrix processing, semiconductor timing analysis, and others
  • Based on Intel Haswell processors
  • Provides up to 64 cores (128 vCPUs) and up to 1,952 GiB of memory


Memory Optimized

Applications that require the highest amounts of memory per core, including high-performance databases, in-memory databases and other memory intensive enterprise applications
  • Based on Intel Haswell processors
  • Provides up to 64 cores (128 vCPUs) and up to 3,840 GiB of memory
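A practical note on the table above: EC2 vCPUs are hyperthreads, which is why each row lists half as many physical cores as vCPUs. Many HPC codes are sized (and licensed) by physical core, so it helps to convert explicitly when planning a cluster. A small illustrative helper (not an AWS API):

```python
import math

def instances_needed(target_cores: int, vcpus_per_instance: int,
                     threads_per_core: int = 2) -> int:
    """Instances required to reach target_cores *physical* cores,
    given that each EC2 vCPU is one hyperthread (2 per core)."""
    cores_per_instance = vcpus_per_instance // threads_per_core
    return math.ceil(target_cores / cores_per_instance)

# Example: a 1,000-core job on 72-vCPU (36-core) compute-optimized instances.
print(instances_needed(1000, 72))   # -> 28
```

Many HPC users also disable hyperthreading (or pin one process per physical core) for latency-sensitive solvers, which makes this distinction matter for both performance and cost.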

Workload Orchestration

High Performance Computing workload management gains new levels of flexibility in the cloud, making resource and job orchestration an important consideration for your workload. AWS provides a range of solutions for workload orchestration: fully-managed services let you focus on job requests and output rather than provisioning, configuring, and optimizing the cluster and job scheduler, while self-managed solutions let you configure and maintain cloud-native clusters yourself, using traditional job schedulers on AWS or in hybrid scenarios.

AWS Offering
AWS Batch AWS Batch is a fully-managed service that enables you to easily run large-scale compute workloads on the cloud without having to worry about resource provisioning or managing schedulers. Interact with AWS Batch via the web console, AWS CLI, or SDKs.
  • Fully-managed service
  • Focus on your jobs and their resources instead of infrastructure
  • Reduce costs by easily using EC2 Spot and Reserved Instances
  • Easily prioritize work across tens of thousands of cores
AWS Lambda Run code without provisioning or managing servers, paying only for the compute time you consume. Define short-duration functions written in a number of languages and allow Lambda to manage execution at scale.
  • Fully-managed service
  • Optimized for short-duration operations
  • Lambda is “Serverless” – pay only for what you use while your functions are running
AWS Step Functions A fully-managed service that makes it easy to coordinate the components of distributed applications and microservices using visual workflows.
  • Fully-managed service
  • Easily integrated with AWS Batch, AWS Lambda, and other services

AWS ParallelCluster
AWS ParallelCluster is a fully supported and maintained open source cluster management tool that makes it easy for scientists, researchers, and IT administrators to deploy and manage HPC clusters in the AWS cloud.
  • AWS-supported and maintained open source cluster management tool
  • Quickly deploy a cluster using AWS Batch or third-party schedulers
  • Uses AWS CloudFormation as a base template
EnginFrame HPC portal integrated with a wide range of open source and commercial batch scheduling systems. One-stop-shop for job submission, control and data management.
  • Runs on-premises, in the cloud or hybrid
  • "Single pane of glass" for multiple schedulers
  • Application templates
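As a minimal sketch of the AWS Batch path described above, an array job submission with boto3 might look like the following. The queue and job-definition names are placeholders that would have to exist in your account; building the request is separated from the API call so the payload can be inspected without AWS credentials:

```python
def build_submit_job_request(job_name: str, queue: str, job_definition: str,
                             array_size: int, command: list[str]) -> dict:
    """Keyword arguments for batch.submit_job (array-job variant)."""
    return {
        "jobName": job_name,
        "jobQueue": queue,                       # placeholder queue name
        "jobDefinition": job_definition,         # placeholder job definition
        "arrayProperties": {"size": array_size}, # fan out N child jobs
        "containerOverrides": {"command": command},
    }

if __name__ == "__main__":
    import boto3  # AWS SDK for Python; requires configured credentials
    batch = boto3.client("batch")
    req = build_submit_job_request(
        "param-sweep", "hpc-queue", "sim-job:1",
        array_size=1000, command=["run_sim.sh"],
    )
    response = batch.submit_job(**req)
    print(response["jobId"])
```

AWS Batch then provisions capacity (On-Demand or Spot) to drain the queue, which is the "focus on jobs instead of infrastructure" point made above.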

Storage

AWS provides several options for storage, ranging from file systems attached to an EC2 instance to high performance object storage. Most HPC applications require shared access to data from multiple EC2 instances via a file system interface. AWS provides a native, scale-out shared file storage service (Amazon EFS) that provides a file system interface and file system semantics. HPC applications can also use AWS’s block storage offerings, either Amazon EBS or EC2 instance store, for general purpose working storage. Amazon S3 and Glacier provide low-cost options for long-term storage of large data sets.

AWS Product
Description and recommended HPC usage

Amazon EFS


A highly available and durable, multi-AZ, fully-managed file system

Recommended HPC Usage: Use as a shared file system for working storage

  • Scales to tens of thousands of cores
  • NFS mountable

Amazon EBS


Persistent block storage volumes for use with Amazon EC2 instances

Recommended HPC Usage: Use for high-IOPS and general purpose working storage


  • Lustre compatible
  • NFS mountable
  • Supports high-speed parallel computing systems via tools like Lustre and GPFS
  • Offers a range of choices for speed and cost optimization

Amazon EC2 Instance Store


Block storage included at no additional charge with select Amazon EC2 instance types

Recommended HPC Usage: Use for read-often temporary working storage

  • Included with select EC2 instance types
  • Fast I/O
  • Ephemeral Storage

Amazon S3


Object storage built to store and retrieve any amount of data from anywhere

Recommended HPC Usage: Primary durable and scalable storage for HPC data

  • Highly available
  • Highly durable
  • API accessible with PUT and GET requests

Amazon Glacier


A secure, durable, and extremely low-cost cloud storage service for data archiving and long-term backup

Recommended HPC Usage: Use for long-term, lower-cost archival of HPC data

  • Life cycle tools archive data automatically
  • Extremely economical
  • Retrieval times on the order of hours
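The "life cycle tools" mentioned above are S3 lifecycle rules, which can move HPC result sets to Glacier automatically. A hedged sketch with boto3 — the bucket name and prefix are placeholders, and the API call is guarded so the rule document can be built and checked without AWS credentials:

```python
def glacier_archive_rule(prefix: str, days: int) -> dict:
    """Lifecycle configuration that archives objects under `prefix`
    to Glacier `days` days after creation."""
    return {
        "Rules": [{
            "ID": f"archive-{prefix.strip('/')}",
            "Filter": {"Prefix": prefix},
            "Status": "Enabled",
            "Transitions": [{"Days": days, "StorageClass": "GLACIER"}],
        }]
    }

if __name__ == "__main__":
    import boto3  # requires configured AWS credentials
    s3 = boto3.client("s3")
    s3.put_bucket_lifecycle_configuration(
        Bucket="my-hpc-results",  # placeholder bucket name
        LifecycleConfiguration=glacier_archive_rule("results/", days=90),
    )
```

Once the rule is in place, archiving happens without any job-side logic, which suits the write-once, read-rarely pattern of completed simulation output.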

Networking

The AWS network is designed for scale. Whether your application requires thousands of cores for one tightly-coupled workload, hundreds-of-thousands of cores for embarrassingly-parallel, high-throughput computing (HTC) applications, or a mixture of both, the AWS network offers performance (high bandwidth, low latency) and scalability.

AWS optimizes and custom builds hardware specifically for AWS infrastructure. Cut-through routing combined with AWS’s large scale means even the biggest customers see consistent latency and high bandwidth when using the most challenging application communication patterns.

Networking Feature
Description and EC2 Instance Type Compatibility
Cluster Placement Groups

Cluster Placement Groups are logical groupings of instances within a single Availability Zone.

EC2 Instance Type Compatibility: All instance types that support enhanced networking can be launched within a Cluster Placement Group. Learn more >>

  • Allow for reliably low latency with up to 25 Gbps bandwidth between instances
  • Elastically scalable as desired
Elastic Network Adapter (ENA)

Elastic Network Adapter (ENA) is a custom network interface optimized to deliver high throughput and packet per second (PPS) performance.

EC2 Instance Type Compatibility: ENA is currently supported on M5, C5, H1, I3, P3, P2, G3, R4, X1, and m4.16xlarge instance types. Learn more >>

  • All the advantages of the first-generation enhanced networking interface
  • Future-proofed driver: designed to support up to 400 Gbps networking without requiring a driver change
  • Utilize up to 25 Gbps of network bandwidth on certain EC2 instance types
Elastic Fabric Adapter (EFA)

Elastic Fabric Adapter (EFA) is a network interface for Amazon EC2 instances that enables customers to run HPC applications requiring high levels of inter-instance communications, like computational fluid dynamics, weather modeling, and reservoir simulation, at scale on AWS. With EFA, HPC applications using popular HPC technologies like Message Passing Interface (MPI) can scale to thousands of CPU cores. 

EFA is available as an optional EC2 networking feature that you can enable on C5n.18xl and P3dn.24xl instances. Additional instance types will be supported in the coming months.

  • Supports industry-standard libfabric APIs, so applications that use a supported MPI library can be migrated to AWS with little or no modification
  • Supported on EC2 instances that provide 100 Gbps sustained network throughput
  • EFA support can be enabled at instance startup or on a stopped instance


Visualization

From preparing simulation input data to interpreting computing job outputs, high performance graphics tasks are part of many HPC workloads. AWS offers several products to improve the performance, cost, and flexibility of running OpenGL, DirectX, and other graphics applications. You can accelerate graphics performance by using the GPU-powered G2 and G3 instances or Elastic GPU, and stream Windows graphics with AppStream 2.0, WorkSpaces, or NICE DCV. If you prefer a Linux-based graphics platform, combining the streaming performance of NICE DCV and the EnginFrame HPC portal can deliver end-to-end workflows to end users across on-premises, hybrid cloud, or full-AWS configurations.

NICE DCV A secure streaming protocol optimized for high end graphics, with dynamic bandwidth management
  • Move pixels and keep HPC data centralized
  • Enable remote access to Linux and Windows 3D applications
  • Fluid and responsive experience, even over wide-area networks
  • Consistent experience on premises and on AWS
NICE EnginFrame An HPC Portal with built-in interactive session management and batch-interactive workflow support
  • One-stop-shop for all HPC user needs
  • Simplify collaboration
  • Consistent experience on premises and on AWS
Amazon EC2 Elastic GPU and G3 Instances
Allow you to easily attach low-cost graphics acceleration to current generation EC2 instances
  • Ideal if you need a small amount of GPU for graphics acceleration, or have applications that could benefit from some GPU but also require high amounts of compute, memory, or storage
  • Capable of running a variety of graphics workloads, such as 3D modeling and rendering, with performance similar to workstations with direct-attached GPUs
Amazon AppStream 2.0
A fully managed, secure application streaming service that allows you to stream desktop applications from AWS to any device running a web browser
  • Visualization applications run next to your HPC data ensuring a high quality, low latency visualization experience
  • Users have secure, anywhere, anytime access to their applications so they can be productive wherever there is a web connection
  • Application delivery using NICE DCV protocol which is optimized for graphics
Amazon WorkSpaces A fully managed, secure Desktop-as-a-Service (DaaS) solution which runs on AWS. WorkSpaces includes GPU-accelerated bundles, which support engineering, design, and architectural applications while providing the benefits of security, economics, flexibility, and agility in the cloud.
  • Faster visualization of simulation results because your apps can reside next to your data in the cloud
  • Support for 3D application development, 3D modeling, CAD, CAM, and CAE tools
  • Desktop streaming to a multitude of supported devices including Windows and Mac PCs, PCoIP zero clients, Chromebooks, iPads, Fire tablets, Android tablets, and even select smartphones
Pricing

AWS offers you a pay-as-you-go approach for pricing for over 70 cloud services. With AWS you pay only for the individual services you need, for as long as you use them, and without requiring long-term contracts or complex licensing. AWS pricing is similar to how you pay for utilities like water or electricity. You only pay for the services you consume, and once you stop using them, there are no additional costs or termination fees. Learn more about how pricing works on AWS >>

There are three main ways to pay for your compute capacity on Amazon EC2: On-Demand, Reserved Instances, and Spot Instances.

Compute Pricing Model
Recommended HPC Use:
On-Demand Instances With On-Demand Instances, you pay for compute capacity by the hour with no long-term commitments or upfront payments. You can increase or decrease your compute capacity depending on the demands of your application and only pay the specified hourly rate for the instances you use.
  • Users that prefer the low cost and flexibility of Amazon EC2 without any up-front payment or long-term commitment
  • Applications being developed or tested on Amazon EC2 for the first time (POCs)
  • Applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  • Urgent and high-priority workloads
Spot Instances Spot Instances let you bid on unused Amazon EC2 capacity at whatever price you choose. When your bid exceeds the Spot price, you gain access to available Spot Instances, which run for as long as your bid exceeds the Spot price. Historically, the Spot price has been 50% to 93% lower than the On-Demand price. Learn more about optimizing scientific computing costs with Spot Instances >>
  • Workloads that can tolerate interruptions
  • Applications that have flexible start and end times
  • Applications that are only feasible at very low compute prices
Reserved Instances Reserved Instances provide you with a significant discount (up to 75%) compared to On-Demand instance pricing. In addition, when Reserved Instances are assigned to a specific Availability Zone, they provide a capacity reservation, giving you additional confidence in your ability to launch instances when you need them.
  • Customers that can commit to using EC2 over a 1 or 3 year term to reduce their total computing costs
  • Applications with steady state usage
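To make the trade-off concrete, here is an illustrative cost comparison. The hourly On-Demand rate is an assumed figure for a 36-core instance, and the Spot and Reserved discounts are picked from within the ranges quoted above — none of these are live prices:

```python
def job_cost(core_hours: float, cores_per_instance: int,
             hourly_rate: float, discount: float = 0.0) -> float:
    """Cost of a job, given its total core-hours and a per-instance rate."""
    instance_hours = core_hours / cores_per_instance
    return instance_hours * hourly_rate * (1 - discount)

ON_DEMAND_RATE = 3.06  # assumed $/instance-hour for a 36-core instance

cost_od   = job_cost(10_000, 36, ON_DEMAND_RATE)        # On-Demand baseline
cost_spot = job_cost(10_000, 36, ON_DEMAND_RATE, 0.70)  # ~70% Spot discount
cost_ri   = job_cost(10_000, 36, ON_DEMAND_RATE, 0.40)  # ~40% Reserved discount

print(f"On-Demand ${cost_od:.2f} | Spot ${cost_spot:.2f} | Reserved ${cost_ri:.2f}")
```

For interruption-tolerant HPC jobs the Spot line usually dominates the decision; Reserved Instances fit the steady-state head nodes and license servers mentioned earlier.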

AWS Partners provide professional services or software solutions to enable workloads on AWS. Browse our selection of featured partners and learn more.


Sign up for an account and launch a sample HPC workload today.


Your account will be within the AWS Free Tier, which enables you to gain free, hands-on experience with the AWS platform, products, and services.


Build your HPC production solution quickly and easily once you're ready.

Get Started for Free