AWS HPC Blog


Author: Randy Seamans

Randy is an industry storage veteran and a Principal Storage Specialist and advocate for AWS, specializing in High Performance Computing and Artificial Intelligence (HPC/AI) storage, Enterprise Storage, and Disaster Recovery. For more storage insights and fun, follow him at https://www.linkedin.com/in/storageperformance.

Scaling a read-intensive, low-latency file system to 10M+ IOPS

Figure 1: High-level architecture of the file system.

Many shared file systems support read-intensive applications, like financial backtesting. These applications typically work against copies of datasets whose authoritative source lives elsewhere. For small datasets, in-memory databases and caching techniques can yield impressive results. For larger ones, low-latency, flash-based, scalable shared file systems can deliver both massive IOPS and bandwidth, and they're easy to adopt because they present a familiar file-level abstraction. In this post, I'll show how to create and scale a shared, distributed, POSIX-compatible file system that performs at local NVMe speeds for files opened read-only.
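As a point of reference for the read-only workload pattern discussed here, a minimal single-threaded micro-benchmark can gauge random-read IOPS from one client. This is a sketch only, not the benchmark used for the 10M+ IOPS result: production measurements use tools like fio with many threads and deep queues across many clients. The file path and sizes below are hypothetical.

```python
import os
import random
import tempfile
import time

# Hypothetical micro-benchmark: random 4 KiB reads against a file
# opened read-only, reporting a rough single-threaded IOPS figure.
BLOCK = 4096          # 4 KiB read size, a common IOPS block size
NUM_BLOCKS = 1024     # 4 MiB test file (tiny; real tests use far more)
NUM_READS = 10_000

# Create a scratch file to read back; on a real deployment you would
# point this at a file on the shared file system instead.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(BLOCK * NUM_BLOCKS))
    path = f.name

fd = os.open(path, os.O_RDONLY)  # read-only open, as in the post
start = time.perf_counter()
for _ in range(NUM_READS):
    offset = random.randrange(NUM_BLOCKS) * BLOCK
    data = os.pread(fd, BLOCK, offset)  # positional read, no seek needed
    assert len(data) == BLOCK
elapsed = time.perf_counter() - start
os.close(fd)
os.unlink(path)

print(f"{NUM_READS / elapsed:,.0f} random 4 KiB read IOPS (single thread)")
```

A single thread like this mostly measures latency; aggregate IOPS at scale comes from running many such streams in parallel across clients.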