AWS Startups Blog

FloodMapp Leverages AWS for Real-Time Inundation Flood Mapping to Save Lives and Assets

Guest post by Ryan Prosser, CTO, FloodMapp

Flooding is becoming more frequent and more severe, but the flood warnings currently communicated to businesses and the public are too broad, resulting in lost lives, assets, and productivity. The World Bank estimates that at least 35% of flood damage is preventable with real-time, accurate, and understandable flood information. That’s equivalent to over $20 billion a year in avoidable losses on a global scale. A key part of this problem is that the traditional technology used for flood modeling and mapping cannot scale and run in real time.

Flood forecasting is a complex process: three phases of modeling are needed to produce a visual flood forecast map showing the areas and assets at risk. First, meteorological forecasting predicts the weather and rainfall (precipitation) based on atmospheric conditions. Second, those rainfall forecasts are fed into hydrology models, along with other inputs such as catchment characteristics, to predict river flows and heights. Finally, to produce a flood forecast map, the hydrology model results are fed into a hydraulic model that simulates the two-dimensional flow of water across the land, based on the catchment topography.

Current warning systems implement meteorological and hydrology forecasting but not hydraulic forecasting, due to the complexity of model setup and computational limitations. For example, a traditional 2D hydraulic flood model may take 600 hours to run for a catchment area of 5,000 km². It is therefore not feasible to run these models at scale to produce a forecast ahead of a flood event when the warning window is only 24 to 96 hours.

This means that current warning systems cannot generate a flood extent map, and instead communicate risk in terms of river catchment and flood height, a unit of measure that is not readily relatable to individuals or businesses. The efficacy of these warnings relies on users knowing their catchment, nearby rivers, asset floor height, and the gauge ID and river bed elevation datum of the nearest river gauge. Typically, businesses and individuals do not have this knowledge. Without it, they can be left unprepared, with no actionable insight into which assets may be affected and little to no warning time to take practical steps to safely evacuate and prevent asset losses and damage.

Who we are

FloodMapp’s co-founder and CEO, Juliette Murphy, and I saw the technical gap in the capability of traditional flood forecasting technologies. After each working in engineering firms for over a decade and experiencing two major floods first-hand, we set out to create a new rapid flood model to drastically improve emergency response and save lives that should never have been lost. Today, we lead a highly specialized team of flood engineers, data scientists, hydrologists, software engineers, and growth specialists who are passionate about applying technology to solve global problems.

At FloodMapp, our vision is to create a safer future. Aimed at improving safety and preventing damage, our products provide highly accurate, real-time, property-specific, and dynamic flood inundation and depth insights for businesses exposed to flooding. Our products, Forecast, Nowcast, and Postcast, support all phases of the emergency management process: before, during, and after a flood event. This speed and precision of information drastically improves situational awareness and informed decision making, and most importantly, saves lives. This technology is a game-changer for emergency and asset managers, as well as for resilience leaders who want to keep their communities safe.

How it works

FloodMapp’s technology and products are technically complex. With AWS, our team is able to build large-scale, high-throughput data pipelines for scientific computation that leverage machine learning to deliver critical, visual, real-time flood information.

At FloodMapp, the team has developed a new flood modeling technology named DASH (Dynamic Automated Scalable Hydraulics). DASH is a world-first, computationally efficient flood modeling solution that has been purpose-built for flood forecasting and early warning. It is 10,000 times faster and up to 200 times higher resolution than traditional models in an emergency response setting. DASH has been developed on traditional hydrology and hydraulics foundations, leveraging machine learning and big data techniques.

We chose AWS to help us launch a solution that is highly computationally intensive, with billions of data points (1.3 billion as of May 27th) and 350,000 new measurements arriving every hour. AWS has always been present and supportive in the startup community in Queensland, with a proven track record of hosting large-scale data pipelines. Our data pipelines currently rely heavily on AWS Batch and Amazon ECS. Without these services, we would not be able to run our rapid flood models across Queensland and the entire continental United States.

Our large-scale data pipelines then feed into our DASH modeling technology to produce the Forecast, Nowcast, and Postcast mapping products before, during, and after a flood event. Like most things, it’s been an iterative approach to get where we are. We didn’t start out with the ability, or the need, to run up to 207 concurrent hydraulic models on Batch, each with its own EBS snapshot of terrain data. We’ve been in development for 2.5 years, starting small with our proofs of concept and pushing each new service until it was no longer fit for purpose. The biggest change for us was moving to containerization. It was a transition, but a worthwhile one.
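Fanning out one containerized model run per catchment on AWS Batch looks roughly like the sketch below. The queue name, job definition, and environment variables are illustrative placeholders, not FloodMapp's real configuration; in production each request would be submitted with boto3's `submit_job`.

```python
# Sketch: build one AWS Batch job request per catchment, each pointing
# at its own EBS snapshot of terrain data. Names are hypothetical.

def batch_job_request(catchment_id, snapshot_id, queue="dash-hydraulics",
                      job_definition="dash-model:1"):
    """Build the submit_job parameters for one hydraulic model run."""
    return {
        "jobName": f"dash-{catchment_id}",
        "jobQueue": queue,
        "jobDefinition": job_definition,
        "containerOverrides": {
            "environment": [
                {"name": "CATCHMENT_ID", "value": catchment_id},
                # Each job reads terrain data restored from its own snapshot.
                {"name": "TERRAIN_SNAPSHOT", "value": snapshot_id},
            ]
        },
    }

requests = [batch_job_request(f"catchment-{i:04d}", f"snap-{i:04d}")
            for i in range(207)]

# In production, each request would be submitted via boto3:
#   batch = boto3.client("batch")
#   for req in requests:
#       batch.submit_job(**req)

print(len(requests), requests[0]["jobName"])  # 207 dash-catchment-0000
```

Because every job carries its own inputs, Batch can schedule them independently and scale the underlying compute up and down with the flood event.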

What have we learned?

We’ve been focused on scale since day one. It’s been an interesting journey of discovery and design to construct a system capable of the high-performance scientific workloads required to deliver hydraulic model results for over 20,000 catchments across Australia and the United States. We learned the hard way that it’s difficult to be certain beforehand where the performance bottleneck will be. We have an incredible team who is always trying to get the best performance out of every component. That also means we’ve run into bottlenecks from processes being CPU bound, processes running out of memory, running out of disk space, or even being constrained by disk write speed.

The big insight comes from one of my favorite quotes: “How do you eat a whale? One bite at a time.” It’s important to figure out how to make a problem embarrassingly parallel so that you can containerize it with Docker and use ECR and Batch (or ECS/EKS) to run the computations in concert. Once you can split a problem into 1,000 smaller problems, you can start to look at EC2 launch templates to increase disk size and provisioned write speed. But it all starts by breaking it down into small bites.

We work predominantly with innovative government, utility, insurance, and mining leaders in the US and Australia to bring this solution to communities that are exposed to flooding to save lives and assets. If you’d like to know more about how we are helping communities and companies become more resilient to flooding events with our new rapid flood model, please visit our website at www.floodmapp.com.

We also just love being connected with forward-thinking engineers, data scientists, emergency managers, and disaster management enthusiasts, so please follow us on LinkedIn and Twitter (@floodmapp) for company updates and interesting information about the current state of global and local flood data.