AWS Compute Blog

Integrating B2B using event notifications with Amazon SNS

This post is courtesy of Murat Balkan, AWS Solutions Architect

Event notification patterns are popular in B2B integrations. Their scalable, decoupled structure helps enterprises implement complex integration scenarios.

This post introduces a generic serverless architecture that applies to external integrations that use event notifications with Amazon SNS and Event Fork Pipelines. Some business scenarios involving B2B integrations include:

  • Inventory information sourcing to customers
  • Catalog sourcing to suppliers or partners
  • Real-time event sourcing to customers (for example, in an online auction)

External integration use cases vary, but a fundamental fact unifies them: external integration is difficult because the target systems are not under your control. For example, IT capabilities differ from one integrating party to the next. You may not have an internal development team and might rely on tools that can read data from a specific source type.

Alternatively, you may have a large development team and therefore greater data-processing needs and capabilities. Such systems may replicate the information from the source, run complex machine learning algorithms against historical data, and act on real-time data.

Overview of event notification

Event notification is the sharing of state changes that occur in an application or domain with other applications or domains. You can relate events to any domain object, such as orders, products, shipments, or financial transactions. The owner of these entities publishes changes to subscribers, who subscribe to the events that interest them and receive notifications when a new event is available.

After receiving an event, the integrating party application must determine what to do with the event. The application can store the event, enrich it with additional information, relay it to another party, or ignore it.

To ensure scalability, the publishing application must write to a durable, scalable destination from which subscribers can read. These destinations can be message queues (such as Amazon MQ and Amazon SQS) or data streams (such as Amazon Kinesis, or Apache Kafka and its managed AWS offering, Amazon MSK). To choose between streams and queues, evaluate the traffic characteristics and business use cases. However, the main principles are the same for both.

While creating your B2B external integration architecture, consider the differing needs of your subscribers and introduce a mechanism for subscribing only to events of interest. In the example of an online auction site, parties that perform active, automated bidding might be interested in real-time bidding events. Others, such as shippers, are only interested in tomorrow’s auction inventory; for them, an InventoryItemCreated event can be enough.

Events reflect the nature of a business environment, which can be unpredictable. If a worldwide event affects the markets, event counts might soar dramatically. A marketing event can also cause order events to rise. You need a scalable infrastructure to support your architecture. Serverless is a perfect fit for these kinds of scenarios, and this post’s architecture leverages several AWS serverless components.

In this architecture, you interact with a self-subscription application that exposes a REST API. To start interacting with the system, you select one or more integration channels for receiving events. You may prefer SFTP, while other parties prefer webhooks, or several channels at the same time. Your IT and development capabilities play an essential role in this selection.

After you determine the integration channels, you can optionally select the event types of interest for each channel. The self-subscription application knows all possible event types; new event types are registered with it as part of the application’s development process.

The architecture’s notification channels are as follows:

  • Submission of real-time updates using webhook integration
  • Direct S3 access or SFTP integration
  • Access to Kinesis directly from other AWS accounts

Main data flow

The data flow begins when the publisher applications publish all of their events to a single Amazon SNS topic. Amazon SNS follows the publish/subscribe pattern to fan out a published message to all subscribers of that topic.
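As a minimal sketch, a publisher might emit a domain event to the shared topic as follows (the topic ARN and payload are hypothetical; the Event_Type message attribute supports the subscription filtering described later):

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical event payload for the online auction example.
event = {
    "eventId": "evt-1001",
    "detail": {"itemId": "item-42", "startingPrice": 150},
}

# Publish to the single shared topic. SNS fans the message out to every
# subscriber whose filter policy matches the message attributes.
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:b2b-events",  # hypothetical ARN
    Message=json.dumps(event),
    MessageAttributes={
        "Event_Type": {
            "DataType": "String",
            "StringValue": "InventoryItemCreated",
        }
    },
)
```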

It is worth mentioning Amazon’s new serverless service offering for event-based integrations, Amazon EventBridge. Amazon EventBridge is an event bus that makes it easy to connect applications using data from your own applications, Software-as-a-Service (SaaS) applications, and AWS services. It comes with a powerful rules engine that allows you to put business logic onto the bus, manipulating message payloads and delivering specific payloads to specific consumer applications. Native event integration capabilities with AWS services make it a good candidate for event-based integrations.

For this architecture, I used SNS because AWS offers a quick deployment option through Event Fork Pipelines, a collection of open-source event-handling pipelines based on the AWS Serverless Application Model (AWS SAM).

You can deploy Event Fork Pipelines directly from the AWS Serverless Application Repository into your AWS account. The proposed SNS-based architecture also allows the use of custom message payloads in any JSON format, including raw.

SNS has a powerful feature called subscription filter policies. These policies act as intercepting filters and pass only the desired types of messages to each subscriber. Because SNS performs the filtering, subscribers don’t have to implement filtering logic, which decreases their complexity. The policies look for specific attributes and their values in the message; this architecture uses the message attribute Event_Type for filtering.
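For example, here is a sketch of subscribing a channel’s SQS queue so that it receives only InventoryItemCreated events (the topic and queue ARNs are placeholders):

```python
import json
import boto3

sns = boto3.client("sns")

# Subscribe the channel's SQS queue to the shared topic. The filter policy
# makes SNS deliver only matching messages, so the subscriber needs no
# filtering logic of its own.
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:b2b-events",    # placeholder
    Protocol="sqs",
    Endpoint="arn:aws:sqs:us-east-1:123456789012:sftp-channel",  # placeholder
    Attributes={
        "FilterPolicy": json.dumps({"Event_Type": ["InventoryItemCreated"]})
    },
    ReturnSubscriptionArn=True,
)
```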

After filtering the events, route them to the previously selected notification channels. The events land in a queue at each channel before the channel’s logic processes them. SNS has built-in integration with SQS, a powerful serverless queueing service. SQS holds your events and acts as a buffer. Because every delivery channel’s characteristics and handlers are different, you need a separate SQS queue per delivery channel type.

Lambda functions subscribed to the webhook queue handle the webhook notification mechanism, converting the polled events into external HTTPS calls against your web servers. Internet delivery over HTTPS is always slower than internal message propagation.

To keep up with the constant flow of events and increase message throughput, webhooks are sent in parallel. SQS provides different features for handling errors that might occur on the subscribing side. For example, the visibility timeout mechanism makes messages available again after a specified period, which serves as an auto-retry mechanism for messages that were consumed but not properly processed. You can also reject messages with functional errors, which causes SQS to route them to a dead-letter queue (DLQ) for further troubleshooting.
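A minimal sketch of such a webhook handler, assuming an SQS event source mapping and a hypothetical TARGET_URL environment variable; raising on a failed delivery returns the batch to the queue, where the visibility timeout and DLQ settings take over:

```python
import os
import urllib.request

# Hypothetical endpoint registered by the integrating party.
TARGET_URL = os.environ["TARGET_URL"]

def handler(event, context):
    # The SQS event source mapping delivers up to 10 messages per batch.
    for record in event["Records"]:
        req = urllib.request.Request(
            TARGET_URL,
            data=record["body"].encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        # An exception here leaves the messages in the queue; after the
        # visibility timeout they become visible again and are retried,
        # and repeated failures route them to the dead-letter queue.
        with urllib.request.urlopen(req, timeout=10) as resp:
            if resp.status >= 300:
                raise RuntimeError(f"Webhook delivery failed: {resp.status}")
    return {"delivered": len(event["Records"])}
```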

Amazon S3 handles the file-based notification mechanism. In this integration, a Lambda function polls the dedicated queue for the S3 channel and forwards the events to Amazon Kinesis Data Firehose. Kinesis Data Firehose acts as a buffer, consolidating individual messages into bigger files. SQS delivers up to 10 messages in a single batch.
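A sketch of that forwarding function, assuming an SQS event source and a hypothetical DELIVERY_STREAM_NAME environment variable; put_record_batch accepts up to 500 records per call, comfortably above the 10-message SQS batch:

```python
import os
import boto3

firehose = boto3.client("firehose")

# Hypothetical delivery stream provisioned for this channel.
STREAM_NAME = os.environ["DELIVERY_STREAM_NAME"]

def handler(event, context):
    # Each SQS batch holds at most 10 messages, so a single
    # put_record_batch call per invocation suffices.
    records = [
        {"Data": (record["body"] + "\n").encode("utf-8")}
        for record in event["Records"]
    ]
    response = firehose.put_record_batch(
        DeliveryStreamName=STREAM_NAME, Records=records
    )
    # Surface partial failures so SQS can redrive the batch.
    if response["FailedPutCount"] > 0:
        raise RuntimeError(f"{response['FailedPutCount']} records failed")
```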

When the batch size or batch interval is reached, Kinesis Data Firehose delivers the files to an S3 bucket. You can share this bucket with integrating parties’ accounts using cross-account access. If you rely on SFTP for file transfers, AWS Transfer for SFTP can expose the objects over SFTP.

For direct system integration, SQS cross-account access is always an option if the integrating party has an AWS account. For more information, see Basic Examples of Amazon SQS Policies. Kinesis Data Firehose also lets you define Lambda functions to transform the data before Firehose writes it to S3; you can use this step to cleanse, filter, or enrich your data.
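Firehose invokes the transformation function with base64-encoded records and expects each record back with a result status. A minimal sketch, where the enrichment field is a hypothetical example:

```python
import base64
import json

def handler(event, context):
    output = []
    for record in event["records"]:
        payload = json.loads(base64.b64decode(record["data"]))
        # Example enrichment: tag the event with its delivery channel
        # (a hypothetical field; cleansing or filtering fits here too).
        payload["channel"] = "s3-sftp"
        output.append({
            "recordId": record["recordId"],
            "result": "Ok",  # "Dropped" would filter the record out
            "data": base64.b64encode(
                (json.dumps(payload) + "\n").encode("utf-8")
            ).decode("utf-8"),
        })
    return {"records": output}
```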

Figure 1: Example architecture that uses different event types for different delivery channels

Self-subscription application

A self-subscription application collects channel and event type selections. You can use a single-page application that interacts with a REST API hosted by Amazon API Gateway. API Gateway uses AWS Lambda for backend processing and Amazon DynamoDB for user profile persistence. After collecting the integration channel selections, and optionally event type filters for those channels, the self-subscription application orchestrates the cloud provisioning tasks.
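As an illustration, the backend function might persist a party’s selections like this (the table name, key schema, and request fields are hypothetical):

```python
import json
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("SubscriberProfiles")  # hypothetical table name

def handler(event, context):
    # API Gateway proxy integration passes the request body as a string.
    body = json.loads(event["body"])
    table.put_item(
        Item={
            "partyId": body["partyId"],                # partition key
            "channels": body["channels"],              # e.g. ["webhook", "sftp"]
            "eventTypes": body.get("eventTypes", []),  # optional filters
        }
    )
    return {"statusCode": 201, "body": json.dumps({"status": "subscribed"})}
```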

As subscriptions occur, the self-subscription application’s backend Lambda function triggers AWS CloudFormation to update the subscriptions, subscription filters, and other notification infrastructure components. A separate AWS CloudFormation stack manages each integrating party.
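One way the backend might drive this, as a sketch that assumes the party’s template has already been packaged and uploaded to S3 (the stack naming scheme is hypothetical):

```python
import boto3

cfn = boto3.client("cloudformation")

def provision_party(party_id: str, template_url: str) -> None:
    # One stack per integrating party keeps subscriptions, filter
    # policies, and Event Fork Pipelines independently manageable.
    stack_name = f"b2b-integration-{party_id}"
    try:
        cfn.create_stack(
            StackName=stack_name,
            TemplateURL=template_url,
            # Nested serverless applications require these capabilities.
            Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
        )
    except cfn.exceptions.AlreadyExistsException:
        cfn.update_stack(
            StackName=stack_name,
            TemplateURL=template_url,
            Capabilities=["CAPABILITY_IAM", "CAPABILITY_AUTO_EXPAND"],
        )
```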

Because the whole architecture is serverless, you can use AWS SAM during your provisioning and let AWS SAM interact with AWS CloudFormation. AWS SAM aims to simplify infrastructure as code practices for serverless resources. AWS SAM also allows you to inject application resources from AWS Serverless Application Repository.

AWS provides a set of serverless applications via the AWS Serverless Application Repository to cover common integration scenarios for event-driven architectures. These off-the-shelf applications speed up the development and delivery of common event-driven mechanisms such as Command Query Responsibility Segregation (CQRS), Event Replay, and Event Storage or Backup. You can reference Event Fork Pipeline applications within AWS SAM templates to use them in your applications.

While designing the provisioning pipeline of the proposed architecture, I reused two of the Event Fork Pipeline applications: Event Replay Pipeline (fork-event-replay-pipeline) and Event Storage and Backup Pipeline (fork-event-storage-backup-pipeline). The webhooks use case is a custom Event Fork Pipeline application.

Because a single AWS SAM template contains these applications, you can deploy and manage the subscription filters and Event Fork Pipelines as a single self-subscription application stack for each integrating party.

Figure 2: CI/CD Pipeline that deploys the serverless application via AWS SAM

In this architecture, Lambda converts the user input into an AWS SAM template and puts it into an S3 bucket. This PUT action triggers an AWS CodePipeline pipeline. The pipeline’s build phase downloads, packages, and deploys the provided AWS SAM template. You can also enhance the pipeline with features such as notifications, manual approvals, or external integrations.
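A sketch of that handoff, with a hypothetical bucket and key convention; the PUT under the watched prefix is what starts the pipeline run:

```python
import boto3

s3 = boto3.client("s3")

def upload_rendered_template(party_id: str, sam_template: str) -> None:
    # Writing the rendered template under the watched prefix is the
    # PUT action that starts the CodePipeline deployment run.
    s3.put_object(
        Bucket="b2b-subscription-templates",        # hypothetical bucket
        Key=f"templates/{party_id}/template.yaml",  # hypothetical key scheme
        Body=sam_template.encode("utf-8"),
    )
```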

Conclusion

Architectures such as this help you share your business or data events with suppliers, partners, and customers while minimizing integration time and streamlining your business processes. You can try the existing Event Fork Pipelines published by AWS, create custom pipelines for your internal use, or share them with other AWS users in the AWS Serverless Application Repository.