Application integration is a suite of services that enables communication between decoupled components within microservices, distributed systems, and serverless applications. Amazon Web Services (AWS) offers more than half a dozen application integration services to support a diverse set of workloads running in the cloud.
Choosing an integration service that is the best fit for your organization and workloads can be difficult. This decision guide will help you ask the right questions to discover your requirements, and provides clear guidance on how to evaluate and choose the right integration services for your workloads.
This eight-and-a-half-minute clip is from a one-hour recording of a presentation by AWS Director of Enterprise Strategy Gregor Hohpe at AWS re:Invent 2022. It provides an overview of the available AWS application integration services.
As you start to explore and understand your criteria, environment, and the suite of integration services that AWS offers, we recommend that you review some best practices. These best practices are applicable regardless of which service (or suite of services) you choose.
Understand integration in your environment
Organizations commonly spend more time than they would like maintaining open-source integrations. We recommend that you consider community support, as well as backing from enterprises or foundations, when making these investments. An investment in these projects is not just financial; it is also an investment in knowledge capital and, potentially, technical debt, because these components and their associated integrations typically need ongoing updates. For more information, see the AWS Open Source blog.
Understand your architecture characteristics
The ability to support a wide range of architectures is important. We recommend that you leverage the AWS Well-Architected Framework as a guide to help you understand the decisions you make when building architectures on AWS. Additionally, using the Well-Architected Framework allows you to learn architectural best practices for designing and operating reliable, scalable, secure, efficient, and cost-effective systems in the Cloud.
Use a combination of integration services
If you are using purpose-built services, a combination of services may be the best fit for your use case. The following list describes a few common ways that AWS customers use a combination of services:
- Routing Amazon EventBridge or Amazon Simple Notification Service (Amazon SNS) events to an Amazon Simple Queue Service (Amazon SQS) queue as a buffer for downstream consumers.
- Pulling events directly from a stream (Kinesis Data Streams or Amazon Managed Streaming for Apache Kafka (Amazon MSK)) or a queue (SQS or Amazon MQ) with EventBridge Pipes and sending events to an EventBridge bus to push out to consumers.
- Routing EventBridge or SNS events to Kinesis Data Streams or Amazon MSK for gathering and viewing analytics.
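The first pattern above can be illustrated with a short sketch. This is a local simulation, not real AWS API calls: the pattern-matching function mimics EventBridge's exact-match rule semantics for top-level fields only, and the deque stands in for an SQS queue acting as a buffer. The event sources and field names are hypothetical.

```python
from collections import deque

def matches(pattern: dict, event: dict) -> bool:
    """Return True if, for every field in the rule pattern, the event's
    value appears in the pattern's list of allowed values (mimicking
    EventBridge exact-match semantics for top-level fields)."""
    return all(event.get(field) in allowed for field, allowed in pattern.items())

# A deque stands in for the SQS queue that buffers events for consumers.
order_queue = deque()
rule_pattern = {"source": ["orders.service"], "detail-type": ["OrderPlaced"]}

events = [
    {"source": "orders.service", "detail-type": "OrderPlaced", "detail": {"id": 1}},
    {"source": "billing.service", "detail-type": "InvoicePaid", "detail": {"id": 2}},
]

for event in events:
    if matches(rule_pattern, event):
        order_queue.append(event)  # buffered until a downstream consumer polls

print(len(order_queue))  # → 1 (only the matching event was buffered)
```

In a real deployment, an EventBridge rule with an SQS queue as its target performs this routing for you; the buffer then decouples event producers from the polling rate of downstream consumers.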
Once you have a clearer picture of your criteria, environment, strategic direction, and available services (including both deployment hosted and managed modalities), you need to identify your integration requirements. You might already know some of your requirements if you are migrating to an existing integration platform or message broker. However, you need to establish how these requirements would change if you move to a cloud environment, if at all.
Messaging or streaming platforms
These platforms are expected to fulfill a certain business functionality. Use the following example use cases when considering which functionalities you will need.
Consider an insurance company that receives claims as messages, where each claim type (auto, home, or life) has different business rules. In this case, the messaging platform needs the functionality to route each claim to a different destination based on header properties in the message.
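The claim-routing behavior described above resembles how SNS subscription filter policies work: each subscriber declares which values of a message attribute it wants to receive. The following is a minimal local sketch of that matching logic; the queue names and the `claim_type` attribute are illustrative, and real routing would be configured as filter policies on SNS subscriptions.

```python
# Illustrative simulation of header-based claim routing, mimicking the
# semantics of SNS subscription filter policies. Each destination only
# receives messages whose attributes match its policy.

filter_policies = {
    "auto-claims-queue": {"claim_type": ["auto"]},
    "home-claims-queue": {"claim_type": ["home"]},
    "life-claims-queue": {"claim_type": ["life"]},
}

def route(message_attributes: dict) -> list:
    """Return the destinations whose filter policy matches the attributes."""
    return [
        dest
        for dest, policy in filter_policies.items()
        if all(message_attributes.get(attr) in allowed
               for attr, allowed in policy.items())
    ]

print(route({"claim_type": "auto"}))  # → ['auto-claims-queue']
print(route({"claim_type": "boat"}))  # → [] (unmatched messages are dropped)
```

A design point worth noting: filtering at the messaging layer keeps routing rules out of consumer code, so adding a new claim type means adding a subscription, not redeploying consumers.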
Consider an airline in which a flight status update needs to notify all connected systems, such as baggage or gate operations, using a protocol such as Advanced Message Queuing Protocol (AMQP). The big question with these functional and business use cases is what constitutes a best-fit messaging platform. Several characteristics can determine the suitability of a platform for a given use case.
Market adoption: These platforms are widely adopted by a large customer community and are a good enough fit for most use cases. They are tried and tested, with a vibrant support community for any issues you might encounter. Choosing one is a low-risk decision, with ample training available for development teams.
Best fit for use case: These platforms will be tailored for specific industry use cases such as airlines, logistics, or healthcare. They may be the best fit for those use cases with ready-made templates available for adoption. These platforms can be easy to get started but can lack the level of adoption in the market as well as flexibility. Adopting this type of platform may require extensive time and resources for validation and building in house expertise.
Modern: These platforms are built with next-generation architecture to address cloud-scale deployments, multi-tenancy, disaster recovery, and serverless-style pricing. Using this type of platform may require some refactoring of workloads for long-term viability. These platforms are cloud native and focus on applying the well-architected principles of modern applications.
If the messaging platform is part of a larger loan-processing workflow that needs to be multi-Region, the messaging platform also needs to support the same business requirements. If the business needs the ability to recover and roll back to a previous state in failure scenarios, the underlying messaging or streaming platform also needs some snapshot or replay capability to recreate the state of the system.
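The replay capability described above can be sketched as an append-only log in which every record gets a sequence number, so a consumer can re-read from a saved checkpoint to rebuild state after a failure. This is a simplified local model of how, for example, a Kinesis Data Streams consumer resumes from a sequence number; the record names are hypothetical.

```python
# Minimal sketch of replay: an append-only log assigns each record a
# sequence number, and a consumer that saves its last-processed position
# (a checkpoint) can replay everything after it to recreate system state.

class ReplayableLog:
    def __init__(self):
        self._records = []

    def append(self, record) -> int:
        """Append a record and return its sequence number."""
        self._records.append(record)
        return len(self._records) - 1

    def read_from(self, sequence_number: int) -> list:
        """Replay every record at or after the given checkpoint."""
        return self._records[sequence_number:]

log = ReplayableLog()
for event in ["loan-received", "credit-checked", "loan-approved"]:
    log.append(event)

checkpoint = 1  # the consumer last finished processing sequence number 0
print(log.read_from(checkpoint))  # → ['credit-checked', 'loan-approved']
```

Note the design trade-off this implies for platform choice: queues that delete messages on consumption cannot offer this, whereas log-based streams retain records for a configured period precisely so they can be replayed.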
The integration platform you choose should facilitate asynchronous processing of loan applications or act as the store and forward channel for a multi-step media processing workflow. The criticality of the business process would determine the capabilities needed from the messaging or streaming platform.
When considering a major application integration architecture in the cloud, there are different ways to determine the functional requirements for each of the integration points.
The following are some of the criteria to consider when choosing an application integration service:
Managed service and operation overhead
Rapid iteration and feature velocity
Organization size and skills
Consider moving to the cloud to reduce operational cost by standardizing on managed services that shift the operational burden to AWS. Higher levels of abstraction allow developers and operators to focus on their own unique value-add activities, instead of undifferentiated tasks.
Consider standardizing on open-source technologies. Open source can make it easier for an organization to find the right skills and avoid some lock-in risk.
Making the wrong choices in an open-source ecosystem can lead to being locked into abstractions and homegrown integrations. Additionally, the responsibility for making different open-source components work together often sits with the organization making the choice. This can lead organizations to spend significant time maintaining open-source integrations.
When choosing the right integration service, it is important to understand the characteristics of the messages that need to be sent between the applications. Key characteristics such as message format, size, retention, and priority can drive the choice of integration service.
Some integration services are better suited for small text-based messages, whereas others are designed to support multiple formats, such as text and binary, and support larger message sizes. The need for replay capability can also be an important factor, alongside message ordering in some scenarios.
For example, message ordering can be implemented by using the FIFO functionality offered by Amazon SNS and Amazon SQS. You should also consider whether you need a pull-based or push-based architecture. In a push-based architecture, a service such as EventBridge or SNS invokes a Lambda function asynchronously.
A pull-based architecture could use services such as SQS or Kinesis Data Streams, where messages are stored in a queue or a stream and then retrieved by a consuming system. Messaging services like Amazon MQ offer capabilities around bigger message payloads and have unlimited retention. However, they do not offer replay capability.
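The FIFO ordering mentioned above can be sketched locally. This simulation mimics two pieces of SQS FIFO semantics: strict ordering within a message group, and suppression of duplicate sends via a deduplication ID. Real FIFO queues apply deduplication over a 5-minute window; for simplicity this sketch remembers every ID, and the group and message names are illustrative.

```python
from collections import defaultdict, deque

class FifoQueue:
    """Local sketch of SQS FIFO semantics: per-group ordering plus
    content deduplication by ID (simplified to an unbounded window)."""

    def __init__(self):
        self._groups = defaultdict(deque)
        self._seen_dedup_ids = set()

    def send(self, group_id, dedup_id, body) -> bool:
        if dedup_id in self._seen_dedup_ids:
            return False  # duplicate send: acknowledged but not enqueued
        self._seen_dedup_ids.add(dedup_id)
        self._groups[group_id].append(body)
        return True

    def receive(self, group_id):
        """Return the oldest message in the group, or None if empty."""
        group = self._groups[group_id]
        return group.popleft() if group else None

queue = FifoQueue()
queue.send("account-42", "m1", "deposit $100")
queue.send("account-42", "m2", "withdraw $30")
queue.send("account-42", "m2", "withdraw $30")  # duplicate, dropped

print(queue.receive("account-42"))  # → deposit $100
print(queue.receive("account-42"))  # → withdraw $30
print(queue.receive("account-42"))  # → None
```

In the real service, the group ID serves the same purpose as here: messages for one account stay strictly ordered, while different groups can be processed in parallel.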
If your primary focus is building and iterating quickly, serverless services may provide the best value. Serverless services let you build applications without managing infrastructure. They provide managed functionality and integrations to reduce time spent writing boilerplate code.
Another benefit of serverless when testing new ideas is that these services offer usage-based pricing. Your code only runs when the service is invoked, so an experiment does not require an upfront investment.
Many applications use certain protocols, such as Advanced Message Queuing Protocol (AMQP) or MQ Telemetry Transport (MQTT), to connect to a messaging service. Alternatively, they depend on a library or framework that uses a certain messaging protocol. Examples of such libraries and frameworks include Spring Boot, Celery, and MassTransit.
You may want to preserve such applications for different reasons. In these cases, the choice of your integration service also depends on its support for the required protocols, so that your applications remain portable.
You might need to have a service that provides compatibility with your infrastructure and deployment tools - and run the same integration system that you host on-premises (such as Apache ActiveMQ, RabbitMQ, and Apache Kafka).
Managed open source services (such as Amazon MQ and Amazon MSK) provide the benefits of the cloud while being compatible with many popular deployment tools used for on-premises deployments.
If refactoring the application is an option, you can benefit from serverless services, which provide this capability natively, as well as rich integration with a variety of AWS services.
The skills of your organization are a major factor when deciding on the right integration service. If your teams are familiar with a self-managed product and it meets your needs, then moving to a managed service for that same product provides the path of least impact. Doing so can help you apply best practices for the service and focus on value-add activities.
Now that you know the criteria you will use to evaluate your application integration needs, you are ready to choose which AWS service(s) are right for your workloads in your environment.
You should now have a clear understanding of what each AWS application integration service does, and which one might be right for you. To help you explore and learn more about each of the available AWS application integration services, the following section provides links to in-depth documentation, hands-on tutorials, and resources to get you started.
Getting started with Amazon SNS
We show you how to manage topics, subscriptions, and messages using the Amazon SNS console.
Filter Messages Published to Topics with Amazon SNS and Amazon SQS
Learn how to use the message filtering feature of Amazon SNS.
Amazon SNS - Troubleshooting
Learn how to view configuration information, monitor processes, and gather diagnostic data about Amazon SNS.
Build a turn-based game with Amazon DynamoDB and Amazon SNS
Learn how to build a multiplayer, turn-based game using Amazon DynamoDB and Amazon SNS.
Introduction to Amazon SQS
A high-level overview of Amazon Simple Queue Service (SQS) and the advantages of using a loosely coupled system.
Getting started with Amazon SQS
This guide shows you how to manage queues and messages using the Amazon SQS console.
Orchestrate Queue-based Microservices
Learn how to design and run a serverless workflow that orchestrates a message queue-based microservice.
Send Messages Between Distributed Applications
Use the Amazon SQS console to create and configure a message queue, send a message, receive and delete that message, and then delete the queue.
Get started with Amazon EventBridge
The basis of EventBridge is to create rules that route events to a target. In this guide, you create a basic rule.
Amazon EventBridge get started tutorials
These tutorials will help you explore the features of EventBridge and how to use them.
Build event-driven architectures
Learn the basics of event-driven design, how to choose the right AWS service for the job, as well as how to optimize for both cost and performance.
Building event-driven applications with Amazon EventBridge
Learn how to build event-driven applications by connecting multiple applications, including SaaS applications and AWS services, using the serverless event bus provided by Amazon EventBridge.
Accelerating messaging modernization
We introduce you to Amazon MQ and you can participate in several hands-on labs to better understand it.
Create a connected message broker
Learn how to set up an Amazon MQ message broker and connect a Java application without rewriting your code.
Creating and connecting to an ActiveMQ broker
Learn how you can use the AWS Management Console to create a basic broker.
Explore messaging concepts such as queues and topics, and Amazon MQ features such as failover and networks of brokers.
Amazon Kinesis Data Streams
Introduction to Amazon Kinesis Data Streams
We explain how Amazon Kinesis Data Streams is used to collect, process, and analyze real-time streaming data to create valuable insights.
Getting started with Amazon Kinesis Data Streams
Learn fundamental Kinesis Data Streams data flow principles and the steps necessary to put and get data from a Kinesis data stream.
Build highly available streams with Amazon Kinesis Data Streams
We compare and contrast different strategies for creating a highly available Kinesis data stream in case of service interruptions, delays, or outages in the primary Region of operation.
Example Tutorials for Amazon Kinesis Data Streams
These tutorials are designed to further assist you in understanding Amazon Kinesis Data Streams concepts and functionality.
Using AWS Lambda with Amazon Kinesis
Learn how to create a Lambda function to consume events from a Kinesis stream.
Getting started using Amazon MSK
This tutorial shows you an example of how you can create an MSK cluster, produce and consume data, and monitor the health of your cluster using metrics.
Getting started using MSK Serverless clusters
This tutorial shows you an example of how you can create an MSK Serverless cluster, create a client machine that can access it, and use the client to create topics on the cluster and to write data to those topics.
AWS Step Functions
Getting started with AWS Step Functions
These tutorials walk you through creating a basic workflow for processing credit card applications.
Introduction to Step Functions
This course introduces the key components of Step Functions to help you get started managing workflows within an application.
Design Patterns for AWS Step Functions
Learn how to implement design patterns in your Step Functions state machines and why to use each one.
Schedule a Serverless Workflow with AWS Step Functions and Amazon EventBridge Scheduler
We show you how to invoke a state machine using EventBridge Scheduler based on the schedule you define.
Get started with Amazon Managed Workflows for Apache Airflow
This guide describes the prerequisites and the required AWS resources needed to get started with Amazon MWAA.
Configuring the aws-mwaa-local-runner in a CD pipeline
This tutorial guides you through the process of building a continuous delivery (CD) pipeline in GitHub using Amazon Managed Workflows for Apache Airflow's aws-mwaa-local-runner to test your Apache Airflow code locally.
Restricting an Amazon MWAA user's access to a subset of DAGs
We show how you can restrict individual Amazon MWAA users to only view and interact with a specific DAG or a set of DAGs.
Amazon MWAA for Analytics Workshop
Learn to build and orchestrate data and ML pipelines that include many of the previously mentioned services, and gain familiarity with the hooks and operators that Airflow provides to manage your pipelines and workflows on AWS.
Once you have determined which approach best fits your workload for your environment, we recommend that you review these resources to help you begin implementing your approach. You can find service-specific resources in the previous section, and general event-driven architecture resources in the following section.
Explore reference architecture diagrams to help you create highly available, secure, flexible, and cost-effective architectures.
Explore whitepapers to help you get started and learn best practices around event-driven architectures.
Explore blogs to help you stay up to date on the latest technologies, and modernize your applications.