Finding Answers in the Cloud: MIT’s Wright Brothers Wind Tunnel Re-design
A post by Scott Eberhardt, Principal Solutions Architect, HPC, Amazon Web Services EMEA SARL, UK Branch
MIT is replacing the Wright Brothers Wind Tunnel (WBWT) with a new, state-of-the-art facility. And they’re relying on AWS to do it. Post-refresh, the WBWT, first commissioned in 1938, will be the largest and most advanced wind tunnel to reside in a U.S. academic setting. But first, it helps to understand why the re-design is happening in the first place.
The Design Flaw
The current structure is a classic, single-return design (see Figure 2). It is a low-speed tunnel (it operates below the transonic and supersonic flight regimes), but it can accommodate wind speeds in excess of 200 MPH. The team at MIT recently uncovered a design flaw affecting the turning vanes in the outside bottom corner of the tunnel.
Why this matters: a poorly designed corner can cause boundary layer separation, which reduces the tunnel's efficiency. Re-designing the turning vanes to reduce or eliminate this inefficiency, however, required massive simulations.
New Design, New Requirements
To remedy the issue, the university designed advanced turning vanes, or screen vanes, to improve the structure's efficiency. Still, the physics of flow separation in the presence of pressure changes is complex, and it requires sophisticated computational fluid dynamics (CFD) tools to simulate. Given MIT researcher Arthur Huang's experience running simulations on NASA Ames' Pleiades supercomputer, CFD soon became the tool of choice for the WBWT re-design. However, a single solution took five days to return results, and much of that time was spent waiting in queues. A faster process was required to meet the design and construction schedule for the new WBWT.
Here’s Where AWS Comes in
MIT could provide 600 processing cores in its own lab. This was inadequate to meet the simulation needs, however, and would have excluded other researchers from the cluster while design simulations were running. For easy access to 1,000 cores, and to avoid disrupting other workflows, MIT turned to AWS.
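As a back-of-the-envelope sketch, here is how the 1,000-core target translates into an instance count, assuming the 18 physical cores per c4.8xlarge instance cited in this post (the constants and names here are illustrative, not MIT's actual provisioning script):

```python
import math

# Illustrative cluster sizing: how many 18-core instances are needed
# to provide at least 1,000 cores?
CORES_NEEDED = 1000
CORES_PER_INSTANCE = 18  # physical cores on a c4.8xlarge

instances = math.ceil(CORES_NEEDED / CORES_PER_INSTANCE)
total_cores = instances * CORES_PER_INSTANCE

print(f"{instances} instances -> {total_cores} cores")  # 56 instances -> 1008 cores
```

At roughly triple the lab's 600 cores of shared capacity, a dedicated cluster of this size makes the queue-free, one-day turnaround described below plausible.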
The university had the option of running two CFD codes for the design: one a commercial product, ANSYS/FLUENT; the other their own code, the Solution Adaptive Numerical Simulator (SANS), which is still in development. The goal had been to run both codes in parallel and compare results, but because SANS is an advanced code with high memory requirements, the researchers opted for the FLUENT workload to meet their tight deadline.
The run matrix is shown in Table 1, with sample analysis solutions in Figures 3 and 4. An upcoming paper by the MIT team, to be presented at the AIAA (American Institute of Aeronautics and Astronautics) SciTech meeting in January, will provide a thorough technical discussion of the experimental, analytical, and computational models used in the design.
A Windfall for WBWT
Mr. Huang requested an Ubuntu environment and, using cfncluster, he was able to launch his own 1,000-core cluster of c4.8xlarge instances (18 physical cores and 60 GB of memory each) and maintain it for as long as necessary. The ability to shut down and re-launch the cluster as needed helped minimize costs, as did the option of Spot pricing. Results for each run could be returned in a day on Mr. Huang's own, personal cluster – a significant improvement over the five-day turnaround of queued, shared systems.
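For readers curious what such a setup looks like, a cfncluster configuration along these lines would describe the cluster used here. This is a hedged sketch following CfnCluster's documented INI format, not MIT's actual configuration; the cluster name, key name, Spot bid, and VPC/subnet placeholders are all illustrative:

```ini
[aws]
aws_region_name = us-east-1

[cluster wbwt]
key_name = my-key                  ; illustrative EC2 key pair name
compute_instance_type = c4.8xlarge ; 18 physical cores, 60 GB each
initial_queue_size = 56            ; ~1,000 cores at 18 cores/instance
max_queue_size = 56
cluster_type = spot                ; use Spot pricing to cut costs
spot_price = 0.50                  ; illustrative bid, USD/hour
vpc_settings = public

[vpc public]
vpc_id = vpc-xxxxxxxx              ; placeholder
master_subnet_id = subnet-xxxxxxxx ; placeholder
```

With a config like this in place, `cfncluster create wbwt` launches the cluster and `cfncluster delete wbwt` tears it down, which is what makes the shut-down-and-relaunch cost strategy described above practical.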
In the end, MIT was able to move forward with the WBWT re-design, thanks to a faster turn-around time from a dedicated cluster and the benefit of no queues.
Keep an eye out for the opening of the new WBWT in 2020, and congratulations to MIT!