General
Using Apache Beam to create your Kinesis Data Analytics application is very similar to getting started with Apache Flink. Please follow the instructions in the preceding question, and be sure to install any components necessary for applications to run on Apache Beam, per the instructions in the Developer Guide. Note that Kinesis Data Analytics supports the Java SDK only when running on Apache Beam.
Yes. Using Apache Flink DataStream Connectors, Amazon Kinesis Data Analytics for Apache Flink applications can use AWS Glue Schema Registry, a serverless feature of AWS Glue. You can integrate Apache Kafka/Amazon MSK and Amazon Kinesis Data Streams, as a source or a sink, with your Amazon Kinesis Data Analytics for Apache Flink workloads. Visit the Schema Registry user documentation to get started and to learn more.
Key concepts
Managing applications
- Monitoring Kinesis Data Analytics in the Amazon Kinesis Data Analytics for Apache Flink Developer Guide.
- Monitoring Kinesis Data Analytics in the Amazon Kinesis Data Analytics for SQL Developer Guide.
- Granting Permissions in the Amazon Kinesis Data Analytics for Apache Flink Developer Guide.
- Granting Permissions in the Amazon Kinesis Data Analytics for SQL Developer Guide.
Pricing and billing
For Apache Flink and Apache Beam applications, you are charged a minimum of two KPUs and 50 GB of running application storage while your Kinesis Data Analytics application is running. For SQL applications, you are charged a minimum of one KPU while your Kinesis Data Analytics application is running.
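To make the minimum-charge rule concrete, here is a small sketch of how the hourly minimum could be computed. The per-KPU rate below is a placeholder, not a real price; actual rates vary by region and are listed on the Kinesis Data Analytics pricing page.

```python
# Illustrative sketch of the minimum hourly KPU charge described above.
# KPU_HOURLY_RATE is a hypothetical placeholder; see the pricing page
# for the actual per-region rates.

KPU_HOURLY_RATE = 0.11  # hypothetical USD per KPU-hour

def min_hourly_kpu_charge(app_type: str) -> float:
    """Return the minimum KPU charge per hour for a running application.

    Apache Flink and Apache Beam applications are billed for at least
    2 KPUs (plus 50 GB of running application storage, not modeled here);
    SQL applications are billed for at least 1 KPU.
    """
    min_kpus = 2 if app_type in ("flink", "beam") else 1
    return min_kpus * KPU_HOURLY_RATE
```

So an idle-but-running Flink application still accrues the two-KPU minimum every hour, which is worth remembering when leaving test applications running.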
Building Apache Flink applications
Authoring application code for applications using Apache Flink
DataStream<GameEvent> rawEvents = env.addSource(
    new KinesisStreamSource("input_events"));
DataStream<UserPerLevel> gameStream =
    rawEvents.map(event -> new UserPerLevel(event.gameMetadata.gameId,
        event.gameMetadata.levelId, event.userId));
gameStream.keyBy(event -> event.gameId)
    .window(TumblingProcessingTimeWindows.of(Time.minutes(1)))
    .apply(...); // window function elided
gameStream.addSink(new KinesisStreamSink("myGameStateStream"));
- Streaming data sources: Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Kinesis Data Streams
- Destinations, or sinks: Amazon Kinesis Data Streams, Amazon Kinesis Data Firehose, Amazon DynamoDB, Amazon Elasticsearch Service, and Amazon S3 (through file sink integrations)
Yes. You can use Kinesis Data Analytics Apache Flink applications to replicate data between Amazon Kinesis Data Streams, Amazon MSK, and other systems. An example provided in our documentation demonstrates how to read from one Amazon MSK topic and write to another.
Yes, Amazon Kinesis Data Analytics supports streaming applications built using Apache Beam Java SDK version 2.23. You can build Apache Beam streaming applications in Java and run them using Apache Flink 1.8 on Amazon Kinesis Data Analytics, Apache Spark running on-premises, and other execution engines supported by Apache Beam.
Q: What is Apache Beam?
Apache Beam is an open-source, unified model for defining streaming and batch data processing applications that can be executed across multiple execution engines.
Building SQL applications
Configuring input for SQL applications
Authoring application code for SQL applications
- Always use a SELECT statement in the context of an INSERT statement. When you select rows, you insert results into another in-application stream.
- Use an INSERT statement in the context of a pump. You use a pump to make an INSERT statement continuous, and write to an in-application stream.
- Use a pump to tie in-application streams together, selecting from one in-application stream and inserting into another in-application stream.
CREATE OR REPLACE STREAM "DESTINATION_SQL_STREAM" (
ticker_symbol VARCHAR(4),
change DOUBLE,
price DOUBLE);
CREATE OR REPLACE PUMP "STREAM_PUMP" AS
INSERT INTO "DESTINATION_SQL_STREAM"
SELECT STREAM ticker_symbol, change, price
FROM "SOURCE_SQL_STREAM_001";
Configuring destinations in SQL applications
Comparison to other stream processing solutions
Service Level Agreement
Q: What does the Amazon Kinesis Data Analytics SLA guarantee?
Our Amazon Kinesis Data Analytics SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon Kinesis Data Analytics.
Q: How do I know if I qualify for an SLA Service Credit?
You are eligible for an SLA Service Credit for Amazon Kinesis Data Analytics under the Amazon Kinesis Data Analytics SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.
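To see what the 99.9% threshold means in practice, here is a rough sketch of the Monthly Uptime Percentage arithmetic. This is an illustration only, not the official credit formula; the SLA details page defines the actual terms and calculation.

```python
# Sketch: check whether a month's downtime falls below the 99.9%
# Monthly Uptime Percentage guarantee. This mirrors the threshold
# described above; the official definition and credit calculation
# live in the Amazon Kinesis Data Analytics SLA itself.

def monthly_uptime_percentage(downtime_minutes: float,
                              days_in_month: int = 30) -> float:
    total_minutes = days_in_month * 24 * 60
    return 100.0 * (total_minutes - downtime_minutes) / total_minutes

def below_sla_threshold(downtime_minutes: float,
                        days_in_month: int = 30) -> bool:
    return monthly_uptime_percentage(downtime_minutes, days_in_month) < 99.9
```

In a 30-day month (43,200 minutes), the 99.9% guarantee corresponds to roughly 43 minutes of allowable downtime; an hour of downtime would fall below the threshold.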
For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see the Amazon Kinesis SLA details page.
Get started with Amazon Kinesis Data Analytics

Learn how to use Amazon Kinesis Data Analytics in the step-by-step guide for SQL or Apache Flink.

Build your first streaming application from the Amazon Kinesis Data Analytics console.