AWS HPC Blog

Running large-scale CFD fire simulations on AWS for Amazon.com

This post was contributed by Matt Broadfoot, Senior Fire Strategy Manager at Amazon Design and Construction, and by Antonio Cennamo, ProServe Customer Practice Manager; Colin Bridger, Principal HPC GTM Specialist; Grigorios Pikoulas, ProServe Strategic Program Leader; Neil Ashton, Principal, Computational Engineering Product Strategy; Kevin Tuil, Senior HPC Solutions Architect; Roberto Meda, ProServe HPC Consultant; Taiwo Abioye, ProServe Security Consultant; and Talib Mahouari, ProServe Engagement Manager, at AWS.

Historically, advances in Computational Fluid Dynamics (CFD) were achieved through large investments in on-premises computing infrastructure. Upfront capital investment and operational complexity have been the accepted norm of large-scale HPC research but this model presents challenges for companies wishing to run large-scale CFD workloads in the shortest possible time whilst also minimizing capital investment.

Figure 1 – Example of an FDS fire scenario rendered by Smokeview


In recent work, Amazon EU Design and Construction (Amazon) used AWS to provide additional project oversight and faster CFD modeling for Amazon construction projects. CFD is used for fire strategy approvals for all core buildings, and minimizing the time to run CFD change variations to meet permit timelines is critical to meeting business milestones. Previously, alterations to building configurations created delays of 14–21 days. By leveraging AWS for CFD simulations, Amazon shortened the runtimes of these models to less than 1 day.

In this blog post, we discuss the architecture deployed by Amazon on AWS to conduct large-scale CFD fire simulations of Amazon construction projects as part of their Fire Strategy solutions, demonstrating life safety both for associates and for the Fire Service arriving on site. This is a real-world application that builds on a previous blog post and a step-by-step workshop, which we recommend you reference after reading this post.

The deployed system provides a simple, consistent, and replicable process for multiple CFD applications, such as FDS, PyroSim, and Ansys Fluent. It allows internal and external consultants to remain in control of the overall process with zero intervention from Amazon. Thanks to this approach, each individual project can meet strict design-governance requirements.

Overview of solution

A cloud CFD solution was required to efficiently adapt the previous process, which ran on on-premises HPC, optimizing model criteria and providing efficient interfaces with fire simulation CFD applications such as FDS, PyroSim, and Ansys Fluent that are used by internal and external consultants. Due to their type, size, and configuration, Amazon building projects do not fit 'standard' regulatory criteria and require a custom Fire Strategy to be prepared for each building in each geography. This calls for a consistent, replicable process and architecture that combines the fastest possible simulation completion time with the lowest possible cost.

A system capabilities requirement and an operator's guide were created for Amazon to supply to internal and third-party consultants, along with optional consultant onboarding, problem solving, technical support, and monthly output and monitoring reports. A breakdown of costs (e.g., per model run, fixed overheads) is built into the AWS CFD cloud solution and can be accessed independently, according to security requirements, by Amazon as the host/customer and by internal and external consultants.

Two variations of the solution are possible using the described architecture:

  1. A standardized solution that supports the most popular open-source code, Fire Dynamics Simulator (FDS) developed by the National Institute of Standards and Technology (NIST), as the default CFD application. Users access a portal to run FDS simulations on Linux only; the portal is not customizable or expandable to software other than FDS. This has the advantage of a simpler interface suitable for more users.
  2. An enhanced multi-ISV application solution, in which users have access to a portal where they can run FDS simulations on Linux, and the portal is also customizable to expand to other fire-modelling CFD software such as PyroSim and Ansys Fluent.

Solution

Amazon’s CFD fire simulation has several components in addition to the HPC cluster services, such as user and administrator interfaces, storage, authentication and authorization, and monitoring for security and operational costs. A high-level architecture diagram of the solution is shown below:

Figure 2 – An architecture diagram of Amazon’s multiuser CFD fire simulation architecture. A more detailed diagram of the tenant module is provided in Figure 4.


Key Architecture Elements

The six key elements of the architecture are described below, with references to the key HPC services used.

Security

At AWS, security is the top priority, and AWS has strict security requirements for every solution built, relying on the shared responsibility model. Best practices for security, identity, and compliance can be found here. The following steps were taken to secure the solution:

Data Classification

Data is classified into one of the data classes specified by Amazon. The data class of an application determines the level of security controls applied to the application and the infrastructure it runs on; hence this is the first step performed from a security perspective.

Security Controls and Assessment

Once the data has been classified, the next step is to identify the AWS services the application will run on (e.g., RDS, DynamoDB) and create a hardening hierarchy: a set of security controls that must be applied to every AWS service the application may use. This guide ensures that those services meet the minimum security controls defined by Amazon for the application's data class.
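
As a sketch, a hardening hierarchy of this kind can be modelled as a mapping from each AWS service to the minimum controls required for a given data class, so gaps can be checked mechanically. The data-class, service, and control names below are illustrative assumptions, not Amazon's actual hierarchy.

```python
# Hypothetical hardening hierarchy: data class -> service -> required controls.
# All names here are illustrative, not Amazon's real classification scheme.
HARDENING_HIERARCHY = {
    "confidential": {
        "s3":  {"encryption-at-rest", "access-logging", "block-public-access"},
        "rds": {"encryption-at-rest", "automated-backups", "no-public-endpoint"},
    },
}

def missing_controls(data_class, service, applied):
    """Return the required controls for (data_class, service) not yet applied."""
    required = HARDENING_HIERARCHY[data_class][service]
    return sorted(required - set(applied))

# e.g. an S3 bucket with only encryption enabled still needs access logging
# and public-access blocking before it meets the 'confidential' bar.
gaps = missing_controls("confidential", "s3", ["encryption-at-rest"])
```

A check like this can run as part of an automated assessment so that every service the application touches is compared against the minimum bar for its data class.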

Security Control Services

In this phase, the architecture was assessed and the security services that were required were implemented. Some of the AWS services implemented here include AWS Web Application Firewall (AWS WAF) access lists, logging and monitoring, and encryption at rest and in transit.

AWS WAF provides the option of using AWS WAF managed rule sets and/or creating your own rules. The option you choose depends on the services and application types you are running. For this use case, some of the managed rules implemented are listed below:

  1. Bot Control
  2. Core Rule Set
  3. Known Bad Inputs
  4. Linux Operating System
  5. SQL Database
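
The managed rule groups listed above map onto AWS's published managed rule group identifiers. The sketch below builds the `Rules` payload that would be passed to the WAFv2 `create_web_acl` API; the priorities and visibility settings are illustrative, not necessarily the configuration used here.

```python
# AWS-published identifiers for the managed rule groups named above.
MANAGED_RULE_SETS = [
    "AWSManagedRulesBotControlRuleSet",     # Bot Control
    "AWSManagedRulesCommonRuleSet",         # Core Rule Set
    "AWSManagedRulesKnownBadInputsRuleSet", # Known Bad Inputs
    "AWSManagedRulesLinuxRuleSet",          # Linux Operating System
    "AWSManagedRulesSQLiRuleSet",           # SQL Database
]

def build_waf_rules(rule_sets):
    """Build the Rules payload for a wafv2 create_web_acl call."""
    rules = []
    for priority, name in enumerate(rule_sets):
        rules.append({
            "Name": name,
            "Priority": priority,
            "Statement": {
                "ManagedRuleGroupStatement": {"VendorName": "AWS", "Name": name}
            },
            # "None" keeps each rule's own action from the managed group.
            "OverrideAction": {"None": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": name,
            },
        })
    return rules

# The payload would then be applied with boto3, e.g.:
#   boto3.client("wafv2").create_web_acl(
#       Name=..., Scope="REGIONAL", DefaultAction={"Allow": {}},
#       Rules=build_waf_rules(MANAGED_RULE_SETS), VisibilityConfig=...)
```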

Logging and Monitoring: CloudTrail and CloudWatch were used to ensure the appropriate level of logging and monitoring. CloudTrail records all API calls made within the AWS accounts in use, while CloudWatch Logs was used to monitor, store, and access log files from the AWS services in use. A log retention period was also set for all stored logs. Some of the logs stored in CloudWatch Logs include:

  1. VPC Flow Logs
  2. Application Load Balancer access logs
  3. S3 Bucket Access Logs
  4. RDS Database Logs

CloudWatch Logs can be used with CloudWatch alarms to raise security alarms when certain security thresholds are exceeded.
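
As a minimal sketch of the retention step, the snippet below builds the parameters for the CloudWatch Logs `put_retention_policy` API for a few log groups. The log-group names and 90-day period are assumptions; CloudWatch Logs only accepts specific retention values, so they are validated first.

```python
# A subset of the retention periods CloudWatch Logs accepts, in days.
VALID_RETENTION_DAYS = {1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180,
                        365, 400, 545, 731, 1827, 3653}

def retention_params(log_group, days):
    """Build kwargs for logs.put_retention_policy, validating the period."""
    if days not in VALID_RETENTION_DAYS:
        raise ValueError(f"{days} is not a valid CloudWatch Logs retention period")
    return {"logGroupName": log_group, "retentionInDays": days}

# Hypothetical log-group names for the log types described above.
log_groups = ["/vpc/flow-logs", "/alb/access-logs", "/rds/database-logs"]
policies = [retention_params(g, 90) for g in log_groups]

# Each policy would be applied with:
#   boto3.client("logs").put_retention_policy(**policy)
```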

Encryption at Rest and in Transit: Encryption is a critical requirement for data protection, so all data transiting between the AWS services used for this application was encrypted in transit. Every AWS service storing data at rest (e.g., S3, EBS volumes, RDS) was also configured to encrypt that data.
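
For S3, the at-rest requirement is typically met with default bucket encryption. The sketch below builds the payload shape used by the S3 `put_bucket_encryption` API; the bucket name and the choice between SSE-KMS and SSE-S3 are assumptions for illustration.

```python
def bucket_encryption_config(kms_key_arn=None):
    """Build the ServerSideEncryptionConfiguration for put_bucket_encryption.

    With a KMS key ARN, use SSE-KMS; otherwise fall back to SSE-S3 (AES256).
    """
    if kms_key_arn:
        default = {"SSEAlgorithm": "aws:kms", "KMSMasterKeyID": kms_key_arn}
    else:
        default = {"SSEAlgorithm": "AES256"}
    return {"Rules": [{"ApplyServerSideEncryptionByDefault": default}]}

# Applied with boto3 (bucket name is hypothetical):
#   boto3.client("s3").put_bucket_encryption(
#       Bucket="example-cfd-results",
#       ServerSideEncryptionConfiguration=bucket_encryption_config(key_arn))
```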

Threat Modelling

Threat modelling was done on all the AWS services used to build this application. This step-by-step process checks whether all the required controls were accurately applied to each individual AWS service. Controls checked here include authentication, authorization, encryption of data at rest and in transit, and logging. This process helps detect controls that may not have been implemented due to oversight.

Code Reviews

Code reviews are an important part of ensuring security for Amazon CDOs. For this application, automated code reviews were done with several code-scanning tools. All the code used to build this application, and the code used to automate the AWS services that were set up, was scanned and reviewed over several iterations at every stage of building the application, and all identified vulnerabilities were fixed.

Web Browser Based Access and GUI

A key requirement was to abstract HPC cluster access and management into a simple GUI front end for fire simulation engineers. To implement this fundamental aspect, we used the NICE EnginFrame HPC portal.

EnginFrame is a leading grid-enabled application portal for user-friendly submission, control, and monitoring of HPC jobs and interactive remote sessions. It includes sophisticated data management for all stages of HPC job lifetime and is integrated with most popular job schedulers and middleware tools to submit, monitor, and manage jobs.

Users and administrators log into the EnginFrame portal using their email address.
New users can request enrollment using the signup request functionality, which administrators can then approve or reject. Administrators can also add users directly by providing their supplier organization, team, and email.

The user directory is managed by AWS Directory Service (Simple AD), fully integrated with the HPC cluster's Linux nodes via the System Security Services Daemon (SSSD).

Once logged in, the portal GUI enables administrators to perform all their management functions, such as defining HPC cluster blueprints, assigning each a credit cost per hour, setting each team's credits, and managing the software installed on the compute node AMI.

It also enables end users to:

  • purchase HPC clusters, view their status, costs, and usage, and connect to each one
  • submit, monitor, and manage HPC jobs on each of their clusters
  • transfer files to and from their organization's S3 bucket
  • create and connect to DCV sessions running on each cluster.

Figure 3 – NICE EnginFrame enabled GUI dashboard


Multi-Tenancy

Amazon Design and Construction started from a very specific need: offering HPC services to its suppliers through one unique portal. In other words, they wanted to deploy a multi-tenant solution in which each supplier is completely isolated from the others. The three core components of the portal are NICE EnginFrame, AWS ParallelCluster, and AWS Directory Service.

Each tenant has access to various AWS services through a graphical user interface. When a user in a particular team within their organization needs to run a simulation, an entire automated infrastructure is deployed underneath.

The underlying architecture is built from multiple components: FDS and Smokeview are the main computation and visualization services accessed by end users for their simulations.

Figure 4 – Tenant Module


The high-level view of the multi-tenancy is a dedicated and isolated environment for each of the tenants:

Figure 5 – AWS ParallelCluster orchestrated control plane with multiple Tenant modules


The solution allows a high level of flexibility and customizability. It can offer end users various types of applications and usage patterns through a SaaS approach, hiding the complexity of building or managing AWS Cloud infrastructure components.

Cost and Monitoring Dashboard

Administrators

Resources allocated to each supplier are uniquely and consistently tagged; in particular, there is a tag for each supplier organization, team, and user. This provides administrators with fine-grained monitoring and control of the AWS usage and costs associated with each supplier.
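
The tagging scheme and the Cost Explorer query an administrator might run against it could look like the sketch below. The tag keys, values, and date range are illustrative assumptions, not Amazon's exact keys.

```python
def supplier_tags(org, team, user):
    """Consistent per-supplier tags applied to every allocated resource."""
    return [
        {"Key": "supplier-org", "Value": org},
        {"Key": "supplier-team", "Value": team},
        {"Key": "supplier-user", "Value": user},
    ]

def cost_by_team_query(start, end):
    """kwargs for ce.get_cost_and_usage, grouping spend by the team tag."""
    return {
        "TimePeriod": {"Start": start, "End": end},
        "Granularity": "DAILY",
        "Metrics": ["UnblendedCost"],
        "GroupBy": [{"Type": "TAG", "Key": "supplier-team"}],
    }

# Run with boto3, e.g.:
#   boto3.client("ce").get_cost_and_usage(
#       **cost_by_team_query("2022-01-01", "2022-01-29"))
```

Because every resource carries the same three tags, the same query can be re-grouped by organization or user simply by changing the `GroupBy` key.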

Administrators can review usage and costs in AWS Cost Explorer, leveraging its full set of features such as filters, forecasts, and dimension selection. Additionally, the dashboard portal provides live charts with preset dimensions that they can filter by:

  • a time frame (default 4 weeks)
  • organization and each of its teams
  • stack name

Figure 6 – The cost exploration view


End users

End users do not have access to AWS usage and cost data, but are instead provided with credits.
Credits are an abstract currency, fully customizable by administrators. Administrators can:

  • Provide each team with a credit amount that it can spend to purchase HPC clusters
  • Assign each HPC cluster a credit cost per hour

Team credits are consumed according to the overall price per hour of the team's purchased HPC clusters.
Each end user can check their current available credit amount and has a full historical record of their purchases.
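
The credit accounting described above can be sketched in a few lines: administrators set a balance per team and a credit cost per hour per cluster, and running clusters draw the balance down. The class and numbers below are purely illustrative.

```python
class TeamCredits:
    """Illustrative credit ledger for one team (not the portal's real code)."""

    def __init__(self, balance):
        self.balance = balance
        self.history = []  # records of credit consumption

    def can_purchase(self, cost_per_hour, hours):
        """Check whether a planned cluster fits the remaining balance."""
        return cost_per_hour * hours <= self.balance

    def consume(self, cluster_name, cost_per_hour, hours):
        """Charge the team for cluster runtime and record it."""
        charge = cost_per_hour * hours
        if charge > self.balance:
            raise ValueError("insufficient credits")
        self.balance -= charge
        self.history.append((cluster_name, charge))
        return self.balance

team = TeamCredits(balance=1000)
team.consume("fds-cluster-a", cost_per_hour=12, hours=24)  # charges 288 credits
```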

Storage

Amazon needed both a secure persistent storage space and a fast, cost-efficient ephemeral file system.

The portal addresses both needs by integrating Amazon S3 and Amazon FSx for Lustre. Every time an engineer needs to perform a simulation, both storage spaces become available and, depending on the need, they can target one or the other. After a simulation completes, results can be saved, archived permanently, and shared across the organization for future post-processing.
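
One common way to pair the two, consistent with this design, is an FSx for Lustre scratch file system linked to the persistent S3 bucket. The sketch below builds parameters for the FSx `create_file_system` API; the bucket name, capacity, and deployment type are assumptions, not the portal's actual settings.

```python
def scratch_fs_params(bucket, subnet_id, capacity_gib=1200):
    """kwargs for fsx.create_file_system: an ephemeral Lustre FS linked to S3.

    ImportPath hydrates the file system lazily from the bucket; ExportPath
    lets finished simulation results be archived back for post-processing.
    """
    return {
        "FileSystemType": "LUSTRE",
        "StorageCapacity": capacity_gib,
        "SubnetIds": [subnet_id],
        "LustreConfiguration": {
            "DeploymentType": "SCRATCH_2",
            "ImportPath": f"s3://{bucket}",
            "ExportPath": f"s3://{bucket}/results",
        },
    }

# Created with boto3 (bucket and subnet IDs are hypothetical):
#   boto3.client("fsx").create_file_system(**scratch_fs_params(
#       "example-org-cfd", "subnet-0123456789abcdef0"))
```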

Summary and discussion of the solution

One of the major motivations for Amazon to move from on-premises HPC to an AWS HPC solution was to accelerate their simulations by using more cores per simulation, reducing the time to complete from up to twenty-one days to less than one day. This has a material business impact, enabling Amazon to meet rigid planning application timelines, reducing time to construction, and reducing potential late-application fines. Amazon has removed compute bottlenecks and created a standardized CFD solution that can be spun up and down on demand, moving the cost model from a fixed capital expense to a flexible operational expense. The standardized solution provides governance and security not previously available in individual third-party HPC implementations.

FDS has built-in MPI (Message Passing Interface) and OpenMP (Open Multi-Processing) support. To achieve better performance, AWS worked with fire engineers to reshape their models, balancing the load computed by each allocated resource while maintaining high-quality results.
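
In practice, FDS assigns one MPI rank per mesh, so load balancing means shaping the model into similarly sized meshes and letting leftover cores serve as OpenMP threads. The sketch below builds a launch command along those lines; the binary name, file name, and core counts are illustrative, and how FDS is actually invoked depends on the scheduler and installation.

```python
import os

def fds_launch(input_file, n_meshes, cores_per_node, nodes):
    """Build an mpiexec command line and environment for an FDS job.

    One MPI rank per mesh; remaining cores become OpenMP threads per rank.
    """
    total_cores = cores_per_node * nodes
    omp_threads = max(1, total_cores // n_meshes)
    env = dict(os.environ, OMP_NUM_THREADS=str(omp_threads))
    cmd = ["mpiexec", "-n", str(n_meshes), "fds", input_file]
    return cmd, env

# Hypothetical job: 16 meshes across 2 nodes of 36 cores each.
cmd, env = fds_launch("warehouse.fds", n_meshes=16, cores_per_node=36, nodes=2)
# 72 cores / 16 ranks -> 4 OpenMP threads per rank
```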

The solution also brings greater transparency and visibility across the fire simulation process, whether executed by internal teams or third-party external partners, enabling the Amazon Design and Construction team to supervise simulations and proactively assist users where needed. The standardized CFD cluster has dramatically reduced time to results from multiple days or weeks to less than a day, cutting overall fire simulation runtime by more than 75% and meeting all of Amazon's stated business needs.

Conclusion

In this blog we have outlined how AWS helped Amazon Design and Construction accelerate their fulfilment and data center fire simulations using AWS Cloud HPC. We've outlined the five key steps taken that resulted in simulation times approximately 15-20x faster than the on-premises architectures previously used, with a proportionate reduction in operating expenses and no capital expenditure.

If you would like to explore running simulations on AWS further, you can read more on our CFD landing page, or in this blog post about implementing Fire Dynamics simulations on AWS, or by trying it out yourself with one of our step-by-step workshops.

Matt Broadfoot


Matt Broadfoot is the Senior EU Fire Strategy Manager within Amazon Design & Construction, covering the UK, EU, and EMEA regions. He is a fire engineer with global experience developing fire strategy solutions across multiple sectors, from heritage to transport hubs and storage.

Taiwo Abioye


Taiwo Abioye is a security transformation consultant with the Professional Services team within Amazon Web Services. He has a wide range of experience across network engineering, cloud engineering and cyber security. He is passionate about helping customers meet their cyber security goals within and outside the cloud environments. In his spare time, Taiwo enjoys learning history, traveling around the world and visiting scenic sites.

Colin Bridger


Colin is a Principal HPC Specialist with AWS who gained his HPC experience with on-premises networking vendors. After working with end users and partners in verticals such as academic and climate research, CFD, FSI, and healthcare and life sciences, he pivoted to focus on deploying those industries' workloads on the AWS cloud. He is passionate about deploying economical HPC at scale to advance research and scientific outcomes.

Antonio Cennamo


Antonio Cennamo is a Customer Practice Manager at Amazon Web Services. He comes with international experience across consulting and professional services organizations. He is passionate about helping customers in their digital transformation journey and enables them to build innovative cloud solutions. In AWS, he leads the Professional Services organization with the strategic customers in Europe, Middle East and Africa. In his spare time, Antonio enjoys football, traveling, and gaming.

Kevin Tuil


Kevin Tuil is a Senior HPC Specialist Solutions Architect at AWS. He has been working in the HPC field for more than 10 years, specifically in the oil and gas, mechanical engineering, and naval engineering verticals. Prior to AWS he worked for global companies in mixed research, technical, and presales roles. He holds two MSc degrees, from Arts et Métiers ParisTech and the French Naval Academy, and a BSc from Pierre et Marie Curie University.

Talib Mahouari


Talib Mahouari is an Engagement Manager in AWS Professional Services with wide experience across multiple enterprise organizations in the digital TV industry. He leads internal AWS, external partner, and customer teams to deliver AWS cloud services that enable customers to realize their business outcomes. In his free time, Talib enjoys cycling and swimming.

Roberto Meda


Roberto spent 10+ years in the NICE Software Professional Services team, and has been in the HPC field since 2003. Currently, he is a Senior Consultant for AWS Professional Services in the HPC global practice.

Neil Ashton


Neil is an ex-F1 & NASA engineer who specializes in developing cutting edge Computational Fluid Dynamics methods with a particular focus on turbulence modelling, deep learning and high-performance computing. He is a Principal Computational Engineering Specialist within the Advanced Computing and Simulation product team.

Grigorios Pikoulas


Greg Pikoulas is a Strategic Program Leader within the AWS Professional Services Strategics team. He leads the delivery function across the EMEA region, driving the digital evolution of the customers as they innovate and transform their business in partnership with AWS. As a single threaded executive, he strives to accelerate customer outcomes through deep understanding and knowledge of the customer needs and strategic objectives.