AWS Database Blog
How a small DevOps team at Deutsche Bahn unlocked analytics for their SaaS product
This is a guest blog post by Oliver Jägle, Software Architect at DB Curbside Management, in partnership with AWS Senior Solutions Architect Ben Freiberg.
Have you ever rented a scooter or a bicycle instead of walking or taking a bus? You’re not alone. More and more people around the globe have adopted this sort of mobility: micro mobility. But not all riders care much about where they leave their bikes or scooters after their ride, and vehicles left in the wrong place become obstacles for other users of the curb and road. This is where the concept of curbside management comes into play: it enables agencies such as cities or counties to regulate parking and driving, sanction misbehavior, and observe where vehicles are located. Based on the collected data, physical infrastructure such as parking spaces, roads, and bicycle lanes can be planned accordingly.
We at Deutsche Bahn Curbside Management provide a browser-based software as a service (SaaS) offering to lower the barrier for agencies that care about micro mobility. Until recently, our product supported the regulatory aspect but couldn’t help much with analytical questions. For example, what was the vehicle availability in a particular area over the last week, or how long are the trips that start in an observed zone? Answering such questions requires large amounts of data that need to be presented in a flexible yet easy-to-digest manner. Below we describe the challenges our team faced and the solution we built to address these requirements. AWS has since launched a new offering, Zero-ETL Integrations for Aurora PostgreSQL, which seamlessly makes transactional data written to a database available in a data warehouse for analytics. This capability was not available at the time of our engagement.
Background
Our software development team is a DevOps team, which not only provides business functionality but also operates the service. Being a comparatively small team, we need to keep complexity low, especially on the operations side.
As we started to explore the vast space of vehicle analytics, we realized that we needed a way to aggregate and present the data we already collect without operating a full-blown data infrastructure, which can include a data warehouse, ETL (extract, transform, and load) tools, and analytical front ends. As we didn’t have experts in this area on our team, an AWS Prototype Engagement was exactly what we needed to move fast.
This post is therefore not only about a specific solution, but also about what an AWS Prototype Engagement is and why it is enormously valuable to do it early on.
The challenge
Before starting the actual engagement, we discussed the architectural options and noted our decision in an Architecture Decision Record (ADR). Our objectives were as follows:
- Decouple the persistence used for transactional data from the analytical query workload
- Provide an infrastructure that allows us to efficiently model, query, and visualize vehicle-related data
- Enable analysis and reports that use data from multiple data sources
As a small team operating the overall system, our key drivers were as follows:
- Maintenance – Low maintenance effort of the infrastructure (observability, patch management, scaling, and so on)
- Complexity – Low overall system complexity (coupling, diversity of technologies, traceability, and so on)
- Time-to-market – Low implementation effort for each new key performance indicator (KPI)
Given how traditional ETL processes are structured, the key aspects of the solution were straightforward to identify; they just needed to run in near-real time. AWS services with native integrations were the best fit for data extraction, analytical persistence, data transformation, and visualization.
The proposal
The key services and features that we use in this solution:
- AWS Database Migration Service (AWS DMS) is a managed migration and replication service that helps move your database and analytics workloads to AWS quickly, securely, and with minimal downtime and zero data loss. AWS DMS provided the fastest and easiest way to replicate the data into the database used for the analytical workload.
- Amazon Relational Database Service (Amazon RDS) is a managed database service that makes it easy to set up, operate, and scale a relational database in the cloud.
- Amazon Aurora is a relational database management system built for the cloud with full MySQL and PostgreSQL compatibility. Aurora gives you the performance and availability of commercial-grade databases at one-tenth the cost.
- Amazon QuickSight is a fast, easy-to-use, cloud-powered business analytics service for building visualizations, performing ad hoc analysis, and quickly getting business insights from data, anytime, on any device.
The following diagram illustrates the high-level architecture of the full solution and the most suitable AWS services for each component.
The components in the overall architecture have the following responsibilities:
- Replication of reporting-relevant tables from multiple databases to a dedicated database. This happens live, as changes occur.
- A secondary database decouples the analytical read workload from the transactional databases and provides a repository for analytical artifacts.
- Developers design the sources for analytical queries as database views, potentially with spatial operations. These views are defined in SQL, kept under version control, and unit-tested against a containerized PostgreSQL instance (see the sketch after this list).
- Database views are exposed as a REST API through a thin service layer. Within this service, implicit restrictions, such as scoping data to the caller’s organization, can be added.
- Custom Vue.js components formulate queries, filters, and aggregations based on the REST-wrapped database views.
- When possible, analytical widgets can also be designed and embedded with Amazon QuickSight. These widgets include options to filter and aggregate data, as well as pass the selection context to other widgets.
- QuickSight itself supports row-level security to filter data. Different mechanisms are available when integrating it into the front end.
- Visuals are created using QuickSight. The resulting artifacts can be exported with the AWS Command Line Interface (AWS CLI) or the API and persisted in a repository.
- A periodically scheduled job extracts relevant data.
- Amazon Simple Storage Service (Amazon S3) is used as cost-effective long-term storage for further analysis, for example with Amazon Athena or Amazon Redshift Spectrum.
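To illustrate how such an analytical view could be defined and unit-tested against a containerized PostgreSQL instance, here is a minimal sketch. All table, column, and view names (`zone`, `vehicle_return`, `returns_per_zone`) are invented for the example and don’t reflect our actual schema; it assumes the `pytest`, `SQLAlchemy`, `psycopg2`, and `testcontainers` Python packages and a local Docker daemon.

```python
# test_returns_per_zone.py -- sketch of unit-testing an analytical view
# against a throwaway PostGIS-enabled PostgreSQL container.
# Assumes pytest, sqlalchemy, psycopg2, testcontainers, and Docker.
import sqlalchemy
from testcontainers.postgres import PostgresContainer

# Hypothetical schema and view; names are illustrative only.
DDL = """
CREATE EXTENSION IF NOT EXISTS postgis;

CREATE TABLE zone (
    zone_id integer PRIMARY KEY,
    area    geometry(Polygon, 4326)
);

CREATE TABLE vehicle_return (
    return_id   bigint PRIMARY KEY,
    returned_at timestamptz NOT NULL,
    location    geometry(Point, 4326)
);

-- The analytical source: returns joined to the zone they fall into.
CREATE VIEW returns_per_zone AS
SELECT z.zone_id,
       date_trunc('day', r.returned_at) AS day,
       count(*)                         AS return_count
FROM vehicle_return r
JOIN zone z ON ST_Contains(z.area, r.location)
GROUP BY z.zone_id, date_trunc('day', r.returned_at);
"""

SEED = """
INSERT INTO zone VALUES
  (1, ST_GeomFromText('POLYGON((0 0, 0 1, 1 1, 1 0, 0 0))', 4326));
INSERT INTO vehicle_return VALUES
  (10, '2023-05-01T10:00:00Z', ST_SetSRID(ST_MakePoint(0.5, 0.5), 4326)),
  (11, '2023-05-01T11:00:00Z', ST_SetSRID(ST_MakePoint(5.0, 5.0), 4326));
"""


def test_returns_per_zone_counts_only_returns_inside_the_zone():
    # Spin up a disposable PostGIS/PostgreSQL container for this test.
    with PostgresContainer("postgis/postgis:15-3.4") as pg:
        engine = sqlalchemy.create_engine(pg.get_connection_url())
        with engine.begin() as conn:
            conn.execute(sqlalchemy.text(DDL))
            conn.execute(sqlalchemy.text(SEED))
            rows = conn.execute(
                sqlalchemy.text("SELECT zone_id, return_count FROM returns_per_zone")
            ).all()
        # Only the return at (0.5, 0.5) lies inside zone 1.
        assert [tuple(r) for r in rows] == [(1, 1)]
```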
Solution overview
Prior to starting the engagement with the whole team, we evaluated various options for each of these responsibilities together with an architect from AWS and ranked them based on their suitability.
We also removed aspects that weren’t mission critical, such as the REST API or AWS Glue jobs, from the scope of the engagement in order to focus on the key outcomes. The result is shown in the following diagram.
During the build week, we implemented an automated pipeline to ingest and integrate geospatial and vehicle data from different transactional source databases (Source Database 1 and Source Database 2) to an analytical database in near-real time.
We used AWS DMS to perform a full load of the existing data first and then enabled change data capture (CDC) on the source databases to replicate ongoing changes to the analytical database. The target is an Amazon Aurora Serverless cluster, which acts as a secondary persistence layer and analytical database. This decouples the analytical read workload from the transactional systems and provides more flexibility than a simple PostgreSQL read replica. This secondary persistence layer is used to store and serve only analytics artifacts.
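As a rough sketch of such a replication setup, the following shows how a full-load-and-CDC task could be created with boto3. The ARNs, schema, and table names are placeholders, and the source and target endpoints and the replication instance are assumed to already exist; our actual task settings and table mappings differ.

```python
# Sketch: an AWS DMS task that performs a full load and then keeps replicating
# changes (CDC). All ARNs and object names below are placeholders.
import json
import boto3

dms = boto3.client("dms", region_name="eu-central-1")

# Restrict replication to the reporting-relevant tables of the source schema.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-vehicle-events",
            "object-locator": {"schema-name": "public", "table-name": "vehicle_event"},
            "rule-action": "include",
        }
    ]
}

response = dms.create_replication_task(
    ReplicationTaskIdentifier="analytics-replication",
    SourceEndpointArn="arn:aws:dms:eu-central-1:123456789012:endpoint:SOURCE",
    TargetEndpointArn="arn:aws:dms:eu-central-1:123456789012:endpoint:TARGET",
    ReplicationInstanceArn="arn:aws:dms:eu-central-1:123456789012:rep:INSTANCE",
    # Full load of the existing data first, then ongoing change data capture.
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
print(response["ReplicationTask"]["Status"])
```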
We chose Aurora because the team was already very familiar with PostgreSQL and we wanted to continue using the PostGIS extension. We chose Aurora Serverless v2 because analytical queries vary greatly in complexity and frequency; Aurora Serverless adjusts capacity automatically, on demand, and you’re charged only for the resources that your database cluster consumes.
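For reference, here is a hedged sketch of provisioning such a cluster with boto3. The identifiers, engine version, and the scaling range (in Aurora capacity units, ACUs) are placeholder values, not our production configuration.

```python
# Sketch: creating an Aurora PostgreSQL cluster with Serverless v2 scaling.
# Identifiers, engine version, and the ACU range are placeholders.
import boto3

rds = boto3.client("rds", region_name="eu-central-1")

rds.create_db_cluster(
    DBClusterIdentifier="analytics-cluster",
    Engine="aurora-postgresql",
    EngineVersion="15.4",
    MasterUsername="postgres",
    ManageMasterUserPassword=True,  # let RDS manage the credentials secret
    # Serverless v2 scales between these Aurora capacity units on demand.
    ServerlessV2ScalingConfiguration={"MinCapacity": 0.5, "MaxCapacity": 16.0},
)

# Instances in a Serverless v2 cluster use the special "db.serverless" class.
rds.create_db_instance(
    DBInstanceIdentifier="analytics-instance-1",
    DBClusterIdentifier="analytics-cluster",
    Engine="aurora-postgresql",
    DBInstanceClass="db.serverless",
)
```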
Within this secondary persistence, the ingested data is transformed and modeled to support geospatial analytical use cases using PostgreSQL procedures in PL/pgSQL, database views, and rules with geospatial operations. The source code of these artifacts is under version control and unit-tested against a containerized PostgreSQL database. The transformed data is then joined with reference tables and additional aggregations are applied to create materialized tables, which make visualization and analytics on geospatial data faster.
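As a simplified sketch of this transformation layer (here using a materialized view rather than our actual PL/pgSQL procedures, and with schema and column names invented for the example), a reporting table could be built and refreshed roughly like this:

```python
# Sketch of the transformation step inside the analytical database: aggregate
# replicated raw data, join it with a reference table, and materialize the
# result for fast dashboard queries. All object names are illustrative only.
import psycopg2

TRANSFORM_SQL = """
CREATE MATERIALIZED VIEW IF NOT EXISTS reporting.rentals_per_zone_daily AS
SELECT z.zone_id,
       z.zone_name,                            -- from the reference table
       date_trunc('day', e.occurred_at) AS day,
       count(*) FILTER (WHERE e.event_type = 'rental') AS rentals,
       count(*) FILTER (WHERE e.event_type = 'return') AS returns
FROM raw.vehicle_event e
JOIN reference.zone z
  ON ST_Contains(z.area, e.location)           -- geospatial join via PostGIS
GROUP BY z.zone_id, z.zone_name, date_trunc('day', e.occurred_at);
"""

REFRESH_SQL = "REFRESH MATERIALIZED VIEW reporting.rentals_per_zone_daily;"


def refresh_reporting_tables(dsn: str) -> None:
    """Create the reporting table if needed and refresh its contents."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(TRANSFORM_SQL)
        cur.execute(REFRESH_SQL)


if __name__ == "__main__":
    # Placeholder connection string for the analytical Aurora cluster.
    refresh_reporting_tables("postgresql://user:password@analytics-cluster:5432/analytics")
```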
The resulting reporting tables are used in QuickSight to create analyses and dashboards, which are embedded in the web application. The following screenshot shows an example of a dashboard that visualizes the number of vehicle rentals and returns over a timespan of several days. Additional charts categorize rentals and returns by provider and by vehicle type, such as bikes or e-scooters.
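For the embedding itself, the following is a minimal sketch of how a backend could request an embed URL for a registered QuickSight user with boto3; the account ID, user ARN, and dashboard ID are placeholders, and this is not necessarily the embedding mechanism we use in production.

```python
# Sketch: generating an embed URL for a QuickSight dashboard so it can be
# rendered inside the web application. All identifiers are placeholders.
import boto3

quicksight = boto3.client("quicksight", region_name="eu-central-1")

response = quicksight.generate_embed_url_for_registered_user(
    AwsAccountId="123456789012",
    UserArn="arn:aws:quicksight:eu-central-1:123456789012:user/default/analytics-reader",
    SessionLifetimeInMinutes=60,
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": "vehicle-rentals-dashboard"}
    },
)

# The URL is handed to the front end, which loads the dashboard, for example
# via an iframe or the QuickSight embedding SDK.
embed_url = response["EmbedUrl"]
```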
Conclusion
It was an intense week getting things built, and it was great to have the support of experienced AWS experts. Within 4 days, we built an automated near-real-time data ingestion pipeline from multiple source databases all the way down (or up the stack) to the embedded dashboards integrated into our web app. These dashboards help our designers get better insight into the data and support the business’s decision-making process.
Seeing the solution built into our application in just 4 days was amazing. Additionally, there was a key insight that we could not have gained without implementing the whole system end to end: the system was able to answer the queries we defined in the beginning, but it couldn’t be extended to more general queries without changing the way the raw data is persisted initially.
Together with AWS, we designed an upgraded system based on event sourcing that we’re testing right now. It was crucial for us to have the support of AWS and to experiment quickly. If we get to the point of architecting a major part of our application, we’d very much appreciate an AWS Prototype Engagement again!
About the authors
Ben Freiberg is a Senior Solutions Architect with Amazon Web Services. He works with large enterprise customers and enables them to design, architect, and innovate highly scalable and resilient solutions on AWS.
Oliver Jägle is a software developer and architect at DB Systel with an inexplicable passion for business solutions. Currently, his focus is on demystifying complex streams of unbounded data, crafting them into actionable insights.