How Siemens built a fully managed scheduling mechanism for updates on Amazon S3 data lakes
Siemens is a global technology leader with more than 370,000 employees and 170 years of experience. To protect Siemens from cybercrime, the Siemens Cyber Defense Center (CDC) continuously monitors Siemens’ networks and assets. To handle the resulting enormous data load, the CDC built a next-generation threat detection and analysis platform called ARGOS. ARGOS is a hybrid-cloud solution that makes heavy use of fully managed AWS services for streaming, big data processing, and machine learning.
Users such as security analysts, data scientists, threat intelligence teams, and incident handlers continuously access data in the ARGOS platform. Further, various automated components update, extend, and remove data to enrich information, improve data quality, enforce PII requirements, or mutate data due to schema evolution or additional data normalization requirements. Keeping the data always available and consistent presents multiple challenges.
While object-based data lakes are far more cost-effective than traditional transactional databases in such scenarios, they generally do not allow atomic updates, or they require complex and costly extensions to do so. To overcome this problem, Siemens designed a solution that enables atomic file updates on Amazon S3-based data lakes without compromising query performance or availability.
This post presents this solution, which is an easy-to-use scheduling service for S3 data update tasks. Siemens uses it for multiple purposes, including pseudonymization, anonymization, and removal of sensitive data. This post demonstrates how to use the solution to remove values from a dataset after a predefined amount of time. Adding further data processing tasks is straightforward because the solution has a well-defined architecture and the whole stack consists of fewer than 200 lines of source code. It is solely based on fully managed AWS services and therefore achieves minimal operational overhead.
This post uses an S3-based data lake with continuous data ingestion and Amazon Athena as the query mechanism. The goal is to automatically remove certain values a predefined time after ingestion. Applications and users consuming the data via Athena are not impacted (for example, they do not observe downtimes or data quality issues like duplication).
The following diagram illustrates the architecture of this solution.
Siemens built the solution with the following services and components:
- Scheduling trigger – New data (for example, in JSON format) is continuously uploaded to an S3 bucket.
- Task scheduling – As soon as new files land, an AWS Lambda function processes the resulting S3 bucket notification events. As part of the processing, it creates a new item in an Amazon DynamoDB table that specifies a Time to Live (TTL) and the path to that S3 object.
- Task execution trigger – When the TTL expires, the DynamoDB item is deleted from the table and the DynamoDB stream triggers a Lambda function that processes the S3 object at that path.
- Task execution – The Lambda function derives meta information (like the relevant S3 path) from the TTL expiration event and processes the S3 object. Finally, the new S3 object replaces the older version.
- Data usage – The updated data is available for querying from Athena without further manual processing, relying on S3’s consistency model for read operations.
About DynamoDB Streams and TTL
TTL for DynamoDB lets you define when items in a table expire so that DynamoDB can delete them automatically. TTL comes at no extra cost: by setting a per-item expiration timestamp, you limit storage to records that are still relevant and remove stale data without consuming provisioned throughput.
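For example, scheduling an object for processing one hour after ingestion boils down to writing an item whose TTL attribute holds an absolute Unix epoch timestamp, in seconds. A minimal sketch (the ttl attribute name is an assumption; the path key matches the table created in the walkthrough below):

```python
import time

def build_schedule_item(s3_path, delay_seconds):
    """Build a DynamoDB item that expires roughly `delay_seconds` from now."""
    return {
        "path": s3_path,                      # primary key: full S3 path
        # TTL must be an absolute Unix epoch timestamp in seconds;
        # the attribute name 'ttl' is an assumption for this sketch
        "ttl": int(time.time()) + delay_seconds,
    }
```

DynamoDB deletes the item once the current time passes this value, subject to the deletion delay discussed below.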
To implement this solution manually, complete the following steps:
- Create a DynamoDB table and configure DynamoDB Streams.
- Create a Lambda function to insert TTL records.
- Configure an S3 event notification on the target bucket.
- Create a Lambda function that performs data processing tasks.
- Use Athena to query the processed data.
If you want to deploy the solution automatically, you can skip these steps and use the provided AWS CloudFormation template.
To complete this walkthrough, you must have the following:
- An AWS account with access to the AWS Management Console.
- A role with access to S3, DynamoDB, Lambda, and Athena.
Creating a DynamoDB table and configuring DynamoDB Streams
Start with the time-based trigger setup. For this, you use S3 notifications, DynamoDB Streams, and a Lambda function to integrate the two services. The DynamoDB table stores the items to process after a predefined time.
Complete the following steps:
- On the DynamoDB console, create a table.
- For Table name, enter objects-to-process.
- For Primary key, enter path and choose String.
- Select the table, and under Table details, choose Manage TTL next to Time to live attribute.
- For TTL attribute, enter
- For DynamoDB Streams, choose Enable with view type New and old images.
Note that you can enable DynamoDB TTL on any attribute name, but expiration only works when the attribute holds a numeric (Unix epoch) timestamp; items with non-numeric TTL values are never expired.
DynamoDB TTL is not minute-precise. Expired items are typically deleted within 48 hours of expiration, although in practice the delay is often only 10–30 minutes. For more information, see Time to Live: How It Works.
Creating a Lambda function to insert TTL records
The first Lambda function you create is for scheduling tasks. It receives an S3 notification as input, reconstructs the S3 path (for example, s3://&lt;bucket&gt;/&lt;key&gt;), and creates a new item in DynamoDB with two attributes: the S3 path and the TTL (in seconds). For more information about a similar S3 notification event structure, see Test the Lambda Function.
To deploy the Lambda function, on the Lambda console, create a function named NotificationFunction with the Python 3.7 runtime and the following code:
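The original function's source is not reproduced in this excerpt; the following is a minimal sketch of its logic. The objects-to-process table name and the processed tag come from later sections of this post, while the ttl attribute name and the TTL_SECONDS environment variable are assumptions made for illustration:

```python
import os
import time
import urllib.parse

TABLE_NAME = "objects-to-process"
TTL_SECONDS = int(os.environ.get("TTL_SECONDS", "3600"))  # assumed configuration

def extract_object_refs(event):
    """Yield (bucket, key) pairs from an S3 notification event."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys in notification events are URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        yield bucket, key

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    for bucket, key in extract_object_refs(event):
        # Skip objects the task execution function already rewrote;
        # otherwise its overwrites would be rescheduled forever
        tags = s3.get_object_tagging(Bucket=bucket, Key=key)["TagSet"]
        if any(tag["Key"] == "processed" for tag in tags):
            continue
        table.put_item(Item={
            "path": f"s3://{bucket}/{key}",
            # 'ttl' attribute name is an assumption; value is epoch seconds
            "ttl": int(time.time()) + TTL_SECONDS,
        })
```

The tag check is what breaks the feedback loop between the two Lambda functions, as explained later in this post.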
Configuring S3 event notifications on the target bucket
You can take advantage of the scalability, security, and performance of S3 by using it as a data lake for storing your datasets. Additionally, you can use S3 event notifications to capture S3-related events, such as the creation or deletion of objects within a bucket. You can forward these events to other AWS services, such as Lambda.
To configure S3 event notifications, complete the following steps:
- On the S3 console, create an S3 bucket named data-bucket.
- Choose the bucket and navigate to the Properties tab.
- Under Advanced Settings, choose Events and add a notification.
- For Name, enter
- For Events, select All object create events.
- For Prefix, enter dataset/.
- For Send to, choose Lambda Function.
- For Lambda, choose NotificationFunction.
This configuration restricts the scheduling to events that happen within your previously defined dataset. For more information, see How Do I Enable and Configure Event Notifications for an S3 Bucket?
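The same setup can also be applied programmatically. The sketch below builds a notification configuration equivalent to the console steps (the Id value is illustrative, and note that this API call replaces the bucket's entire notification configuration):

```python
def build_notification_config(function_arn, prefix="dataset/"):
    """Notification configuration matching the console steps above."""
    return {
        "LambdaFunctionConfigurations": [{
            "Id": "ScheduleProcessing",        # notification name (illustrative)
            "LambdaFunctionArn": function_arn,
            "Events": ["s3:ObjectCreated:*"],  # 'All object create events'
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": prefix},
            ]}},
        }]
    }

def apply_notification_config(bucket, function_arn):
    import boto3  # available wherever the AWS SDK is installed
    # Caution: this overwrites any notification configuration on the bucket
    boto3.client("s3").put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration=build_notification_config(function_arn),
    )
```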
Creating a Lambda function that performs data processing tasks
You have now created a time-based trigger for the deletion of the record in the DynamoDB table. However, when the system delete occurs and the change is recorded in DynamoDB Streams, no further action is taken. Lambda can poll the stream to detect these change records and trigger a function to process them according to the activity (INSERT, MODIFY, or REMOVE).
This post is only concerned with deleted items, because it uses DynamoDB TTL expirations, which appear in the stream as removals, to trigger task executions. Lambda gives you the flexibility either to process the item itself or to forward the processing to another service (such as an AWS Glue job or an Amazon SQS queue).
This post uses Lambda directly to process the S3 objects. The Lambda function performs the following tasks:
- Gets the S3 object from the DynamoDB item’s S3 path attribute.
- Modifies the object’s data.
- Overwrites the old S3 object with the updated content and tags the object as processed.
Complete the following steps:
- On the Lambda console, create a function named JSONProcessingFunction with Python 3.7 as the runtime and the following code:
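The function body is not included in this excerpt; under the assumptions already stated, a sketch could look like the following (the processed=true tag value is illustrative — the post only specifies the tag key):

```python
import json

def blank_file_contents(document):
    """Replace each 'file_contents' value with an empty string.

    Handles both a single JSON object and a JSON array of records.
    """
    records = document if isinstance(document, list) else [document]
    for record in records:
        if "file_contents" in record:
            record["file_contents"] = ""
    return document

def split_s3_path(path):
    """'s3://bucket/key/parts' -> ('bucket', 'key/parts')."""
    bucket, _, key = path[len("s3://"):].partition("/")
    return bucket, key

def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime
    s3 = boto3.client("s3")
    for record in event["Records"]:
        # TTL expirations surface as REMOVE events; ignore everything else
        if record["eventName"] != "REMOVE":
            continue
        path = record["dynamodb"]["OldImage"]["path"]["S"]
        bucket, key = split_s3_path(path)
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        data = blank_file_contents(json.loads(body))
        # Overwrite under the same key and tag it so the scheduling
        # function ignores the resulting object-created event
        s3.put_object(
            Bucket=bucket,
            Key=key,
            Body=json.dumps(data).encode("utf-8"),
            Tagging="processed=true",
        )
```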
- On the Lambda function configuration webpage, click on Add trigger.
- For Trigger configuration, choose DynamoDB.
- For DynamoDB table, choose objects-to-process.
- For Batch size, enter 1.
- For Batch window, enter
- For Starting position, choose Trim horizon.
- Select Enable trigger.
This post uses a batch size of 1 because each S3 object represented in the DynamoDB table is typically large. If your files are small, you can use a larger batch size. The batch size is essentially the number of files that your Lambda function processes at a time.
Because any new object on S3 (in a versioning-enabled bucket) creates an object creation event, even if its key already exists, you must make sure that your task scheduling Lambda function ignores any object creation events that your task execution function generates. Otherwise, the two functions form an infinite loop. This post uses tags on S3 objects: when the task execution function processes an object, it adds a processed tag, and the task scheduling function ignores tagged objects in subsequent executions.
Using Athena to query the processed data
The final step is to create a table for Athena to query the data. You can do this manually or by using an AWS Glue crawler that infers the schema directly from the data and automatically creates the table for you. This post uses a crawler because it can handle schema changes and add new partitions automatically. To create this crawler, use the following code:
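The crawler definition itself is not included in this excerpt; a sketch using boto3 might look like the following. The crawler name data-crawler is an assumption, while the data_db database matches the Athena query used later in this post:

```python
def crawler_definition(role, bucket, prefix="dataset/"):
    """Arguments for glue.create_crawler, pointed at the dataset location."""
    return {
        "Name": "data-crawler",    # crawler name (assumed)
        "Role": role,              # e.g. '<AWSGlueServiceRole-crawler>'
        "DatabaseName": "data_db", # database used in the Athena query below
        "Targets": {"S3Targets": [{"Path": f"s3://{bucket}/{prefix}"}]},
    }

def create_and_run_crawler(role, bucket):
    import boto3  # available wherever the AWS SDK is installed
    glue = boto3.client("glue")
    glue.create_crawler(**crawler_definition(role, bucket))
    glue.start_crawler(Name="data-crawler")
```

The crawler infers the table name from the prefix folder, which yields the dataset table queried below.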
Replace &lt;AWSGlueServiceRole-crawler&gt; and &lt;data-bucket&gt; with the name of your AWSGlueServiceRole and your S3 bucket, respectively.
When the crawling process is complete, you can start querying the data. You can use the Athena console to interact with the table while its underlying data is being transparently updated. See the following code:
SELECT * FROM data_db.dataset LIMIT 1000
You can use the following AWS CloudFormation template to create the solution described in this post in your AWS account. To launch the template, choose the following link:
This CloudFormation stack requires the following parameters:
- Stack name – A meaningful name for the stack, for example,
- Bucket name – The name of the S3 bucket to use for the solution. The stack creation process creates this bucket.
- Time to Live – The number of seconds to expire items on the DynamoDB table. Referenced S3 objects are processed on item expiration.
Stack creation takes up to a few minutes. Check and refresh the AWS CloudFormation Resources tab to monitor the process while it is running.
When the stack shows the state CREATE_COMPLETE, you can start using the solution.
Testing the solution
To test the solution, download the mock_uploaded_data.json dataset created with the Mockaroo data generator. The use case is a web service in which users can upload files. The goal is to delete those files some predefined time after the upload to reduce storage and query costs. To this end, the provided code looks for the attribute
file_contents and replaces its value with an empty string.
You can now upload new data into your data-bucket S3 bucket under the dataset/ prefix. Your NotificationFunction Lambda function processes the resulting bucket notification event for the upload, and a new item appears in your DynamoDB table. Shortly after the predefined TTL expires, the JSONProcessingFunction Lambda function processes the data, and you can check the resulting changes via an Athena query.
You can also confirm that an S3 object was processed successfully by checking that the corresponding DynamoDB item is no longer present in the table and that the S3 object carries the processed tag.
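A quick way to script that tag check (helper names are illustrative):

```python
def has_processed_tag(tag_set):
    """True if an S3 TagSet contains the 'processed' tag key."""
    return any(tag["Key"] == "processed" for tag in tag_set)

def is_processed(bucket, key):
    """Check whether the object at s3://bucket/key was already processed."""
    import boto3  # available wherever the AWS SDK is installed
    tag_set = boto3.client("s3").get_object_tagging(
        Bucket=bucket, Key=key)["TagSet"]
    return has_processed_tag(tag_set)
```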
This post showed how to automatically reprocess objects on S3 after a predefined amount of time by using a simple and fully managed scheduling mechanism. Because you use S3 for storage, you automatically benefit from S3’s consistency model simply by using identical keys (names) for both the original and processed objects. This way, you avoid query results with duplicate or missing data. Also, incomplete or partially uploaded objects do not result in data inconsistencies, because S3 creates a new object version only for successfully completed file transfers.
You may have previously used Spark to process objects hourly. That approach requires you to monitor the objects that must be processed, move and process them in a staging area, and then move them back to their actual destination. The main drawback is the final step: due to Spark’s parallel nature, output files are generated with different names and contents, which prevents direct file replacement in the dataset and leads to downtime or potential data duplicates when data is queried during a move operation. Additionally, because each copy/delete operation can fail, you have to deal with partially processed data manually.
From an operations perspective, AWS serverless services simplify your infrastructure. You can combine the scalability of these services with a pay-as-you-go plan to start with a low-cost POC and scale to production quickly—all with a minimal code base.
Compared to hourly Spark jobs, you could potentially reduce costs by up to 80%, which makes this solution both cheaper and simpler.
Special thanks to Karl Fuchs, Stefan Schmidt, Carlos Rodrigues, João Neves, Eduardo Dixo and Marco Henriques for their valuable feedback on this post’s content.
About the Authors
Pedro Completo Bento is a senior big data engineer working at Siemens CDC. He holds a Master’s in Computer Science from the Instituto Superior Técnico in Lisbon. He started his career as a full-stack developer, later specializing in big data challenges. Working with AWS, he builds highly reliable, performant, and scalable systems in the cloud, while keeping costs at bay. In his free time, he enjoys playing board games with his friends.
Arturo Bayo is a big data consultant at Amazon Web Services. He promotes a data-driven culture in enterprise customers around EMEA, providing specialized guidance on business intelligence and data lake projects while working with AWS customers and partners to build innovative solutions around data and analytics.