General

Q: What is Amazon Kinesis Video Streams?

Amazon Kinesis Video Streams makes it easy to securely stream media from connected devices to AWS for storage, analytics, machine learning (ML), playback, and other processing. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest streaming media from millions of devices. It durably stores, encrypts, and indexes media in your streams, and allows you to access your media through easy-to-use APIs. Kinesis Video Streams enables you to quickly build computer vision and ML applications through integration with Amazon Rekognition Video, Amazon SageMaker, and libraries for ML frameworks such as Apache MXNet, TensorFlow, and OpenCV. For live and on-demand playback, Kinesis Video Streams provides fully managed capabilities for HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH). Kinesis Video Streams also supports ultra-low-latency two-way media streaming with WebRTC, as a fully managed capability.

Q: What is time-encoded data?

Time-encoded data is any data in which the records are in a time series, and each record is related to its previous and next records. Video is an example of time-encoded data, where each frame is related to the previous and next frames through spatial transformations. Other examples of time-encoded data include audio, RADAR, and LIDAR signals. Amazon Kinesis Video Streams is designed specifically for cost-effective, efficient ingestion and storage of all kinds of time-encoded data for analytics and ML use cases.

Q: What are common use cases for Kinesis Video Streams?

Kinesis Video Streams is ideal for building media streaming applications for camera-enabled IoT devices and for building real-time computer vision-enabled ML applications that are becoming prevalent in a wide range of use cases such as the following:

Smart Home

With Kinesis Video Streams, you can easily stream video and audio from camera-equipped home devices such as baby monitors, webcams, and home surveillance systems to AWS. You can then use the streams to build a variety of smart home applications ranging from simple media playback to intelligent lighting, climate control systems, and security solutions.

Smart City

Many cities have installed large numbers of cameras at traffic lights, parking lots, shopping malls, and just about every public venue, capturing video 24/7. You can use Kinesis Video Streams to securely and cost-effectively ingest, store, playback, and analyze this massive volume of media data to help solve traffic problems, help prevent crime, dispatch emergency responders, and much more.

Industrial Automation

You can use Kinesis Video Streams to collect a variety of time-encoded data such as RADAR and LIDAR signals, temperature profiles, and depth data from industrial equipment. You can then analyze the data using your favorite machine learning framework, including Apache MXNet, TensorFlow, and OpenCV, for industrial automation use cases like predictive maintenance. For example, you can predict the lifetime of a gasket or valve and schedule part replacement in advance, reducing downtime and defects in a manufacturing line.

Q: What does Amazon Kinesis Video Streams manage on my behalf?

Amazon Kinesis Video Streams is a fully managed service for media ingestion, storage, and processing. It enables you to securely ingest, process, and store video at any scale for applications that power robots, smart cities, industrial automation, security monitoring, machine learning (ML), and more. Kinesis Video Streams also ingests other kinds of time-encoded data like audio, RADAR, and LIDAR signals. Kinesis Video Streams provides you SDKs to install on your devices to make it easy to securely stream media to AWS. Kinesis Video Streams automatically provisions and elastically scales all the infrastructure needed to ingest media streams from millions of devices. It also durably stores, encrypts, and indexes the media streams and provides easy-to-use APIs so that applications can retrieve and process indexed media fragments based on tags and timestamps. Kinesis Video Streams provides a library to integrate ML frameworks such as Apache MXNet, TensorFlow, and OpenCV with video streams to build machine learning applications. Kinesis Video Streams is integrated with Amazon Rekognition Video, enabling you to build computer vision applications that detect objects, events, and people.

Key concepts

Q: What is a video stream?

A video stream is a resource that enables you to capture live video and other time-encoded data, optionally store it, and make the data available for consumption both in real time and on a batch or ad-hoc basis. When you choose to store data in the video stream, Kinesis Video Streams will encrypt the data, and generate a time-based index on the stored data. In a typical configuration, a Kinesis video stream has only one producer publishing data into it. The Kinesis video stream can have multiple consuming applications processing the contents of the video stream.
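
As an illustration, here is a minimal sketch of creating a video stream with boto3, the AWS SDK for Python; the stream name, media type, and retention period shown are hypothetical values you would replace with your own.

```python
# Minimal sketch: create a Kinesis video stream with boto3.
# The stream name, media type, and retention period below are illustrative.
import boto3

kv = boto3.client("kinesisvideo")

response = kv.create_stream(
    StreamName="my-camera-stream",   # hypothetical stream name
    MediaType="video/h264",          # MIME type of the media the producer sends
    DataRetentionInHours=24,         # 0 disables storage; > 0 enables durable, indexed storage
)
print(response["StreamARN"])
```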

Q: What is a fragment?

A fragment is a self-contained sequence of media frames. The frames belonging to a fragment should have no dependency on any frames from other fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata.

Q: What is a producer?

A producer is a general term used to refer to a device or source that puts data into a Kinesis video stream. A producer can be any video-generating device, such as a security camera, a body-worn camera, a smartphone camera, or a dashboard camera. A producer can also send non-video time-encoded data, such as audio feeds, images, or RADAR data. One producer can generate one or more video streams. For example, a video camera can push video data to one Kinesis video stream and audio data to another.

Q: What is a consumer?

Consumers are your custom applications that consume and process data in Kinesis video streams in real time, or after the data is durably stored and time-indexed when low-latency processing is not required. You can create these consumer applications to run on Amazon EC2 instances. You can also use other Amazon AI services such as Amazon Rekognition, or third-party video analytics providers, to process your video streams.

Q: What is a chunk?

Upon receiving the data from a producer, Kinesis Video Streams stores incoming media data as chunks. Each chunk consists of the actual media fragment, a copy of media metadata sent by the producer, and the Kinesis Video Streams-specific metadata such as the fragment number, and server-side and producer-side timestamps. When a consumer requests media data through the GetMedia API operation, Kinesis Video Streams returns a stream of chunks, starting with the fragment number that you specify in the request.

Q: How do I think about latency in Amazon Kinesis Video Streams?

There are four key contributors to latency in an end-to-end media data flow.

  • Time spent in the device’s hardware media pipeline: This pipeline can include the image sensor and any hardware encoders, as appropriate. In theory, this can be as little as a single frame duration. In practice, it rarely is. To encode (compress) media effectively, encoders accumulate several frames to construct a fragment. This process, and any corresponding motion compensation algorithms, adds anywhere from one second to several seconds of latency on the device before the data is packaged for transmission.
  • Latency incurred on actual data transmission on the internet: The quality of the network throughput and latency can vary significantly based on where the producing device is located.
  • Latency added by Kinesis Video Streams as it receives data from the producer device: The incoming data is made available immediately through the GetMedia API operation for any consuming application. If you choose to retain data, then Kinesis Video Streams will ensure that the data is encrypted using AWS Key Management Service (AWS KMS) and generate a time-based index on the individual fragments in the video stream. When you access this retained data using the GetMediaForFragmentList API, Kinesis Video Streams fetches the fragments from durable storage, decrypts the data, and makes it available to the consuming application.
  • Latency incurred on data transmission back to the consumer: Consuming devices requesting the media data can be located on the internet or in other AWS Regions. The quality of the network throughput and latency can vary significantly based on where the consuming device is located.

Publishing data to streams

Q: How do I publish data to my Kinesis video stream?

You can publish media data to a Kinesis video stream via the PutMedia operation, or use the Kinesis Video Streams Producer SDKs in Java, C++, or Android. If you choose to use the PutMedia operation directly, you will be responsible for packaging the media stream according to the Kinesis Video Streams data specification, and for handling stream creation, token rotation, and the other actions necessary for reliable streaming of media data to the AWS cloud. We recommend using the Producer SDKs to make these tasks simpler and get started faster.
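
As a rough sketch, the stream-specific PutMedia endpoint can be discovered with boto3 (the AWS SDK for Python) via GetDataEndpoint; the stream name here is hypothetical, and the long-running PutMedia connection itself is typically handled by the Producer SDKs rather than hand-coded.

```python
# Sketch: discover the PutMedia endpoint for a stream with boto3.
# The stream name is hypothetical. The streaming PutMedia call itself
# is usually made by the Producer SDKs, which handle the data
# specification, token rotation, and reliable retransmission.
import boto3

kv = boto3.client("kinesisvideo")

endpoint = kv.get_data_endpoint(
    StreamName="my-camera-stream",  # hypothetical
    APIName="PUT_MEDIA",
)["DataEndpoint"]
print(endpoint)
```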

Q: What is the Kinesis Video Streams PutMedia operation?

Kinesis Video Streams provides a PutMedia API to write media data to a Kinesis video stream. In a PutMedia request, the producer sends a stream of media fragments. As fragments arrive, Kinesis Video Streams assigns a unique fragment number, in increasing order. It also stores producer-side and server-side time stamps for each fragment, as Kinesis Video Streams-specific metadata.

Q: What is the Kinesis Video Streams Producer SDK?

The Amazon Kinesis Video Streams Producer SDK is a set of easy-to-use and highly configurable libraries that you can install and customize for your specific producers. The SDK makes it easy to build an on-device application that securely connects to a video stream and reliably publishes video and other media data to Kinesis Video Streams. It takes care of all the underlying tasks required to package the frames and fragments generated by the device's media pipeline. The SDK also handles stream creation, token rotation for secure and uninterrupted streaming, processing acknowledgements returned by Kinesis Video Streams, and other tasks.

Q: In which programming platforms is the Kinesis Video Streams Producer SDK available?

The Kinesis Video Streams Producer SDK's core is built in C, so it is efficient and portable to a variety of hardware platforms. Most developers will prefer to use the C, C++, or Java versions of the Kinesis Video Streams Producer SDK. There is also an Android version of the Producer SDK for mobile app developers who want to stream video data from Android devices.

Q: What should I be aware of before getting started with the Kinesis Video Streams producer SDK?

The Kinesis Video Streams Producer SDK does all the heavy lifting of packaging frames and fragments, establishing a secure connection, and reliably streaming video to AWS. However, there are many different varieties of hardware devices and media pipelines running on them. To make the process of integration with the media pipeline easier, we recommend having some knowledge of: 1) the frame boundaries, 2) the type of frame used for the boundaries (I-frame or non-I-frame), and 3) the frame encoding time stamp.

Reading data from streams

Q: What is the GetMedia API?

You can use the GetMedia API to retrieve media content from a Kinesis video stream. In the request, you identify the stream name or stream Amazon Resource Name (ARN), and the starting chunk. Kinesis Video Streams then returns a stream of chunks in order by fragment number. When you put media data (fragments) on a stream, Kinesis Video Streams stores each incoming fragment and related metadata in what is called a "chunk." The GetMedia API returns a stream of these chunks starting from the chunk that you specify in the request.
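
For illustration, a minimal GetMedia sketch using boto3 might look like the following; the stream name is hypothetical, and GetMedia must be called against the stream-specific endpoint returned by GetDataEndpoint.

```python
# Sketch: read chunks from a stream with GetMedia via boto3.
# The stream name is hypothetical.
import boto3

kv = boto3.client("kinesisvideo")
endpoint = kv.get_data_endpoint(
    StreamName="my-camera-stream", APIName="GET_MEDIA"
)["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint)
result = media.get_media(
    StreamName="my-camera-stream",
    StartSelector={"StartSelectorType": "NOW"},  # or EARLIEST, FRAGMENT_NUMBER, ...
)

# result["Payload"] is a streaming body of MKV-formatted chunks.
for data in iter(lambda: result["Payload"].read(8192), b""):
    pass  # hand the bytes to a parser or decoder
```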

Q: What is the GetMediaForFragmentList API?

You can use the GetMediaForFragmentList API to retrieve media data for a list of fragments (specified by fragment number) from the archived data in a Kinesis video stream. Typically a call to this API operation is preceded by a call to the ListFragments API.

Q: What is the ListFragments API?

You can use the ListFragments API to return a list of fragments from the specified video stream and start location, using fragment numbers or timestamps, within the retained data.
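
A combined sketch of the two calls with boto3 might look like the following; the stream name and time range are hypothetical, and both operations are served from the archived-media data endpoint.

```python
# Sketch: list fragments for a time range, then fetch their media.
# Stream name and time range are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

STREAM = "my-camera-stream"  # hypothetical

kv = boto3.client("kinesisvideo")
endpoint = kv.get_data_endpoint(
    StreamName=STREAM, APIName="LIST_FRAGMENTS"
)["DataEndpoint"]
archive = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)

now = datetime.now(timezone.utc)
fragments = archive.list_fragments(
    StreamName=STREAM,
    FragmentSelector={
        "FragmentSelectorType": "SERVER_TIMESTAMP",
        "TimestampRange": {
            "StartTimestamp": now - timedelta(minutes=10),
            "EndTimestamp": now,
        },
    },
)["Fragments"]

media = archive.get_media_for_fragment_list(
    StreamName=STREAM,
    Fragments=[f["FragmentNumber"] for f in fragments],
)
# media["Payload"] streams the requested fragments in MKV format.
```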

Q: How long can I store data in Kinesis Video Streams?

You can store data in your streams for as long as you like. Kinesis Video Streams allows you to configure the data retention period to suit your archival and storage requirements.
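
As a sketch, the retention period of an existing stream can be changed with the UpdateDataRetention API via boto3; the stream name and the six-day increase are hypothetical.

```python
# Sketch: extend a stream's retention period. The stream's current
# version (from DescribeStream) is required for the update.
import boto3

kv = boto3.client("kinesisvideo")
info = kv.describe_stream(StreamName="my-camera-stream")["StreamInfo"]  # hypothetical name

kv.update_data_retention(
    StreamName="my-camera-stream",
    CurrentVersion=info["Version"],
    Operation="INCREASE_DATA_RETENTION",
    DataRetentionChangeInHours=144,  # extend retention by six days
)
```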

Q: What is the Kinesis Video Streams parser library?

The Kinesis Video Streams parser library makes it easy for developers to consume and process the output of the Kinesis Video Streams GetMedia operation. Application developers include the library in their video analytics and processing applications that operate on video streams. The applications themselves typically run on your EC2 instances, although they can run elsewhere. The library has features that make it easy to get frame-level objects and their associated metadata, and to extract and collect the Kinesis Video Streams-specific metadata attached to fragments and consecutive fragments. You can then build custom applications that can more easily use the raw video data for your use cases.

Q: If I have a custom processing application that needs to use the frames (and fragments) carried by the Kinesis video stream, how do I do that?

In general, if you want to consume video streams and then manipulate them to fit your custom application's needs, there are two key steps to consider. First, get the bytes in a frame from the formatted stream vended by the GetMedia API. You can use the stream parser library to get the frame objects. Next, get the metadata necessary to decode a frame, such as the pixel height, width, codec ID, and codec private data. Such metadata is embedded in the track elements. The parser library makes extracting this information easier by providing helper classes to collect the track information for a fragment.

The steps after this are highly application dependent. You may wish to decode frames, format them for a playback engine, transcode them for content distribution, or feed them into a custom deep learning application format. The Kinesis Video Streams stream parser library is open-sourced so that you can extend it for your specific use cases.

Playing back video from streams

Q: How do I playback the video captured in my own application?

You can use Amazon Kinesis Video Streams’ HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) capabilities to play back the ingested video in fragmented MP4 or MPEG-TS packaged format. HLS and DASH are industry-standard, HTTP-based media streaming protocols. As you capture video from devices using Amazon Kinesis Video Streams, you can use the HLS or DASH APIs to play back live or recorded video. This capability is fully managed, so you do not have to build any cloud-based infrastructure to support video playback. For low-latency playback and two-way media streaming, see the FAQs on WebRTC-based streaming.

Q: How do I get started with Kinesis Video Streams HLS or DASH APIs?

To view a Kinesis video stream using HLS or DASH, you first create a streaming session using the GetHLSStreamingSessionURL or GetDASHStreamingSessionURL API. This action returns a URL (containing a session token) for accessing the HLS or DASH session, which you can then use in a media player or a standalone application to play back the stream. You can use a third-party player (such as Video.js or Google Shaka Player) to display the video stream by providing the HLS or DASH streaming session URL, either programmatically or manually. You can also play back video by entering the HLS or DASH streaming session URL in the location bar of the Apple Safari or Microsoft Edge browsers. Additionally, you can use the video players for Android (ExoPlayer) and iOS (AVMediaPlayer) for mobile apps.
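
For illustration, a live HLS session might be created with boto3 as follows; the stream name is hypothetical, and the returned URL can be handed to any HLS-capable player.

```python
# Sketch: create a live HLS streaming session for a stream.
# The stream name is hypothetical.
import boto3

STREAM = "my-camera-stream"  # hypothetical

kv = boto3.client("kinesisvideo")
endpoint = kv.get_data_endpoint(
    StreamName=STREAM, APIName="GET_HLS_STREAMING_SESSION_URL"
)["DataEndpoint"]

archive = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = archive.get_hls_streaming_session_url(
    StreamName=STREAM,
    PlaybackMode="LIVE",
    Expires=3600,  # how long the session URL stays valid, in seconds
)["HLSStreamingSessionURL"]
print(url)  # paste into Video.js, Shaka Player, Safari, and so on
```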

Q: What are the basic requirements to use the Kinesis Video Streams HLS APIs?

An Amazon Kinesis video stream has the following requirements for providing data through HLS:

  • The media must contain H.264 or H.265 encoded video and, optionally, AAC encoded audio. Specifically, the codec ID of track 1 should be V_MPEG/ISO/AVC for H.264 or V_MPEGH/ISO/HEVC for H.265. Optionally, the codec ID of track 2 should be A_AAC.
  • The video track of each fragment must contain codec private data in the Advanced Video Coding (AVC) format for H.264 or the HEVC format for H.265 (MPEG-4 specification ISO/IEC 14496-15). For information about adapting stream data to a given format, see NAL Adaptation Flags.
  • Data retention must be greater than 0.
  • The audio track (if present) of each fragment must contain codec private data in the AAC format (AAC specification ISO/IEC 13818-7).

Q: What are the basic requirements to use the Kinesis Video Streams DASH APIs?

An Amazon Kinesis video stream has the following requirements for providing data through DASH:

  • The media must contain H.264 or H.265 encoded video and, optionally, AAC or G.711 encoded audio. Specifically, the codec ID of track 1 should be V_MPEG/ISO/AVC (for H.264) or V_MPEGH/ISO/HEVC (for H.265). Optionally, the codec ID of track 2 should be A_AAC (for AAC) or A_MS/ACM (for G.711).
  • The video track of each fragment must contain codec private data in the Advanced Video Coding (AVC) format for H.264 or the HEVC format for H.265. For more information, see MPEG-4 specification ISO/IEC 14496-15. For information about adapting stream data to a given format, see NAL Adaptation Flags.
  • Data retention must be greater than 0.
  • The audio track (if present) of each fragment must contain codec private data in the AAC format (AAC specification ISO/IEC 13818-7) or the MS Wave format.

Q: What are the available playback modes for HLS or DASH streaming in Kinesis Video Streams?

There are two different playback modes supported by both HLS and DASH: Live and On Demand.

LIVE: For live sessions, the HLS media playlist is continually updated with the latest fragments as they become available. When this type of session is played in a media player, the user interface typically displays a "live" notification, with no scrubber control for choosing the position in the playback window to display.

ON DEMAND: For on-demand sessions, the HLS media playlist contains all the fragments for the session, up to the number specified in MaxMediaPlaylistFragmentResults. The playlist can be retrieved only once for each session.

Additionally, HLS supports playback in LIVE_REPLAY mode. In this mode, the HLS media playlist is updated similarly to LIVE mode, except that it starts by including fragments from a given start time. This mode is useful when you want to start playback from a point in the past in stored media and continue into live streaming.
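
As a sketch, a LIVE_REPLAY session that starts ten minutes in the past could be requested with boto3 as follows; the stream name and start offset are hypothetical, and the stream must retain data for the chosen window.

```python
# Sketch: LIVE_REPLAY playback starting 10 minutes in the past and
# continuing into live video. Stream name and offset are hypothetical.
from datetime import datetime, timedelta, timezone
import boto3

STREAM = "my-camera-stream"  # hypothetical

kv = boto3.client("kinesisvideo")
endpoint = kv.get_data_endpoint(
    StreamName=STREAM, APIName="GET_HLS_STREAMING_SESSION_URL"
)["DataEndpoint"]

archive = boto3.client("kinesis-video-archived-media", endpoint_url=endpoint)
url = archive.get_hls_streaming_session_url(
    StreamName=STREAM,
    PlaybackMode="LIVE_REPLAY",
    HLSFragmentSelector={
        "FragmentSelectorType": "SERVER_TIMESTAMP",
        "TimestampRange": {
            "StartTimestamp": datetime.now(timezone.utc) - timedelta(minutes=10),
        },
    },
)["HLSStreamingSessionURL"]
print(url)
```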

Q: What is the delay in the playback of video using the API?

The latency for live playback is typically between 3 and 5 seconds, but this could vary. We strongly recommend running your own tests and proof-of-concepts to determine the target latencies. There are a variety of factors that impact latencies, including the use case, how the producer generates the video fragments, the size of the video fragment, the player tuning, and network conditions both streaming into AWS and out of AWS for playback. For low-latency playback, see the FAQs on WebRTC–based streaming.

Q: What are the relevant limits to using HLS or DASH?

A Kinesis video stream supports a maximum of ten active HLS or DASH streaming sessions. If a new session is created when the maximum number of sessions is already active, the oldest (earliest created) session is closed. The number of active GetMedia connections on a Kinesis video stream does not count against this limit, and the number of active HLS sessions does not count against the active GetMedia connection limit. See Kinesis Video Streams Limits for more details.

Q: What’s the difference between Kinesis Video Streams and AWS Elemental MediaLive?

AWS Elemental MediaLive is a broadcast-grade live video encoding service. It lets you create high-quality video streams for delivery to broadcast televisions and internet-connected multiscreen devices, like connected TVs, tablets, smartphones, and set-top boxes. The service functions independently or as part of AWS Media Services.

Amazon Kinesis Video Streams makes it easy to securely stream video from connected devices to AWS for real-time and batch-driven machine learning (ML), video playback, analytics, and other processing. It enables customers to build machine-vision based applications that power smart homes, smart cities, industrial automation, security monitoring, and more.

Q: Am I charged to use this capability?

Kinesis Video Streams uses simple pay-as-you-go pricing. There are no upfront costs, and you only pay for the resources you use. Kinesis Video Streams pricing is based on the data volume (GB) ingested, the volume of data consumed (GB), including through the HLS or DASH APIs, and the data stored (GB-Month) across all the video streams in your account. Please see the pricing page for more details.

Amazon Kinesis Video Streams Edge Agent

Q: What is the Amazon Kinesis Video Streams Edge Agent?

The Kinesis Video Streams Edge Agent is a set of easy-to-use and highly configurable libraries that you can install and customize for local video storage and scheduled upload to the cloud. You can download the Edge Agent and deploy it on your on-premises edge compute devices. Alternatively, you can easily deploy it in Docker containers running on Amazon EC2 instances. Once deployed, you can use the Amazon Kinesis Video Streams APIs to update video recording and cloud uploading configurations. The feature works with any IP camera that can stream over the RTSP protocol, and requires no additional firmware deployment on the cameras. The Amazon Kinesis Video Streams Edge Agent can be installed on AWS Snowball Edge devices, as an AWS Greengrass component, or in a native IoT deployment. For access to the Amazon Kinesis Video Streams Edge Agent, see here.

Low-latency two-way media streaming with WebRTC

Q: What is WebRTC and how does Kinesis Video Streams support this capability?

WebRTC is an open technology specification for enabling real-time communication (RTC) across browsers and mobile applications via simple APIs. It leverages peering techniques for real-time data exchange between connected peers and provides the low media streaming latency required for human-to-human interaction. The WebRTC specification includes a set of IETF protocols, including Interactive Connectivity Establishment (ICE RFC 5245), Traversal Using Relays around NAT (TURN RFC 5766), and Session Traversal Utilities for NAT (STUN RFC 5389), for establishing peer-to-peer connectivity, in addition to protocol specifications for real-time media and data streaming. Kinesis Video Streams provides a standards-compliant WebRTC implementation as a fully managed capability. You can use this capability to securely live stream media or perform two-way audio or video interaction between any camera IoT device and WebRTC-compliant mobile or web players. As a fully managed capability, you do not have to build, operate, or scale any WebRTC-related cloud infrastructure, such as signaling or media relay servers, to securely stream media across applications and devices.

Q: What does Amazon Kinesis Video Streams manage on my behalf to enable live media streaming with WebRTC?

Kinesis Video Streams provides managed endpoints for WebRTC signaling that allow applications to securely connect with each other for peer-to-peer live media streaming. Next, it includes managed endpoints for TURN that enable media relay via the cloud when applications cannot stream media peer-to-peer. It also includes managed endpoints for STUN that enable applications to discover their public IP address when they are located behind a NAT or a firewall. Additionally, it provides easy-to-use SDKs to enable camera IoT devices with WebRTC capabilities. Finally, it provides client SDKs for Android, iOS, and web applications to integrate Kinesis Video Streams WebRTC signaling, TURN, and STUN capabilities with any WebRTC-compliant mobile or web player.
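
To make these pieces concrete, here is a boto3 sketch of creating a signaling channel and discovering its managed endpoints; the channel name is hypothetical. The WSS endpoint is used for the signaling connection (ConnectAsMaster or ConnectAsViewer), and the HTTPS endpoint serves APIs such as GetIceServerConfig.

```python
# Sketch: create a signaling channel and look up its managed endpoints.
# The channel name is hypothetical.
import boto3

kv = boto3.client("kinesisvideo")

channel_arn = kv.create_signaling_channel(
    ChannelName="my-doorbell-channel",  # hypothetical
    ChannelType="SINGLE_MASTER",
)["ChannelARN"]

endpoints = kv.get_signaling_channel_endpoint(
    ChannelARN=channel_arn,
    SingleMasterChannelEndpointConfiguration={
        "Protocols": ["WSS", "HTTPS"],
        "Role": "VIEWER",  # or "MASTER"
    },
)["ResourceEndpointList"]

for ep in endpoints:
    print(ep["Protocol"], ep["ResourceEndpoint"])
```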

Q: What can I build using Kinesis Video Streams WebRTC capability?

With Kinesis Video Streams WebRTC, you can easily build applications for live media streaming or real-time audio or video interactivity between camera IoT devices, web browsers, and mobile devices for use cases such as helping parents keep an eye on their baby’s room, enabling homeowners to use a video doorbell to check who’s at the door, allowing owners of camera-enabled robot vacuums to remotely control the robot by viewing the live camera stream on a mobile phone, and much more.

Q: How do I get started with Kinesis Video Streams WebRTC capability?

You can get started by building and running the sample applications in the Kinesis Video Streams SDKs for WebRTC, available for web browsers, Android- or iOS-based mobile devices, and Linux, Raspbian, and macOS based IoT devices. You can also run a quick demo of this capability in the Kinesis Video Streams management console by creating a signaling channel and running the demo application to live stream audio and video from your laptop’s built-in camera and microphone.

Q: What is a Signaling Channel?

A signaling channel is a resource that enables applications to discover, set up, control, and terminate a peer-to-peer connection by exchanging signaling messages. Signaling messages are metadata that two applications exchange with each other to establish peer-to-peer connectivity. This metadata includes local media information such as media codecs and codec parameters, and possible network candidate paths for the two applications to connect with each other for live streaming.

Q: How do applications use a signaling channel to enable peer-to-peer connectivity?

Streaming applications can maintain persistent connectivity with a signaling channel and wait for other applications to connect to them, or they can connect to a signaling channel only when they need to live stream media. The signaling channel enables applications to connect with each other in a one-to-few model, using the concept of one master connecting to multiple viewers. The application that initiates the connection assumes the responsibility of a master via the ConnectAsMaster API and waits for viewers. Up to 10 applications can then connect to that signaling channel by assuming the viewer responsibility via the ConnectAsViewer API. Once connected to the signaling channel, the master and viewer applications can send each other signaling messages to establish peer-to-peer connectivity for live media streaming.

Q: How do applications live stream peer-to-peer media when they are located behind a NAT or a firewall?

Applications use the Kinesis Video Streams STUN endpoint to discover their public IP address when they are located behind a NAT or a firewall. An application provides its public IP address as a possible location where it can receive connection requests from other applications for live streaming. The default option for all WebRTC communication is direct peer-to-peer connectivity, but if the NAT or firewall does not allow direct connectivity (for example, in the case of symmetric NATs), applications can connect to the Kinesis Video Streams TURN endpoints for relaying media via the cloud. The GetIceServerConfig API provides the necessary TURN endpoint information that applications can use in their WebRTC configuration. This configuration allows applications to use TURN relay as a fallback when they are unable to establish a direct peer-to-peer connection for live streaming.
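
As a sketch, the TURN configuration could be fetched with boto3 as follows; the channel ARN and client ID are hypothetical. GetIceServerConfig is called against the channel's HTTPS signaling endpoint, and the returned servers (together with a STUN entry) go into the application's WebRTC ICE configuration.

```python
# Sketch: fetch short-lived TURN credentials for a signaling channel.
# The channel ARN and client ID are hypothetical.
import boto3

CHANNEL_ARN = "arn:aws:kinesisvideo:us-west-2:111122223333:channel/my-doorbell-channel/123"  # hypothetical

kv = boto3.client("kinesisvideo")
https_endpoint = kv.get_signaling_channel_endpoint(
    ChannelARN=CHANNEL_ARN,
    SingleMasterChannelEndpointConfiguration={
        "Protocols": ["HTTPS"],
        "Role": "VIEWER",
    },
)["ResourceEndpointList"][0]["ResourceEndpoint"]

signaling = boto3.client("kinesis-video-signaling", endpoint_url=https_endpoint)
ice_servers = signaling.get_ice_server_config(
    ChannelARN=CHANNEL_ARN,
    ClientId="viewer-1",  # hypothetical
)["IceServerList"]

for server in ice_servers:
    print(server["Uris"], "ttl:", server["Ttl"])  # credentials are short-lived
```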

Q: How does Kinesis Video Streams secure the live media streaming with WebRTC?

End-to-end encryption is a mandatory feature of WebRTC, and Kinesis Video Streams enforces it on all the components, including signaling and media or data streaming. Regardless of whether the communication is peer-to-peer or relayed via Kinesis Video Streams TURN endpoints, all WebRTC communications are securely encrypted through standardized encryption protocols. The signaling messages are exchanged using secure WebSockets (WSS), data streams are encrypted using Datagram Transport Layer Security (DTLS), and media streams are encrypted using Secure Real-time Transport Protocol (SRTP).

Console

Q: What is the Kinesis Video Streams management console?

The Kinesis Video Streams management console enables you to create, update, manage, and monitor your video streams. The console can also play back your media streams live or on demand, as long as the content in the streams is in the supported media type. Using the player controls, you can view the live stream, skip forward or backward 10 seconds, and use the date and time picker to rewind to a point in the past, provided you have set a corresponding retention period for the video stream. The Kinesis Video Streams management console's video playback capabilities are offered as a quick diagnostic tool for development and test scenarios for developers as they build solutions using Kinesis Video Streams.

Q: What media type does the console support?

The only supported video media type for playback in the Kinesis Video Streams management console is the popular H.264 format. This media format has wide support across devices, hardware and software encoders, and playback engines. While you can ingest any variety of video, audio, or other custom time-encoded data types for your own consumer applications and use cases, the management console will not play back those other data types.

Q: What is the delay in the playback of video on the Kinesis Video Streams management console?

For a producer that is transmitting video data into the video stream, you will experience a 2- to 10-second lag in the live playback experience in the Kinesis Video Streams management console. The majority of the latency is added by the producer device as it accumulates frames into fragments before it transmits data over the internet. Once the data enters the Kinesis Video Streams endpoint and you request playback, the console gets H.264 media type fragments from durable storage and trans-packages them into a media format suitable for playback across different internet browsers. The trans-packaged media content is then transferred over the internet to the location from which you requested playback.

Encryption

Q: What is Server-Side Encryption for Kinesis Video Streams?

Server-side encryption is a feature in Kinesis Video Streams that automatically encrypts data before it's at rest by using an AWS KMS key that you specify. Data is encrypted before it is written to the Kinesis Video Streams storage layer, and it is decrypted after it is retrieved from storage. As a result, your data is always encrypted at rest within the Kinesis Video Streams service.

Q: How do I get started with server-side encryption?

Server-side encryption is always enabled on Kinesis video streams. If a user-provided key is not specified when the stream is created, the default key (provided by Kinesis Video Streams) is used.

A user-provided AWS KMS key must be assigned to a Kinesis Video Streams stream when it is created. You can't later assign a different key to a stream using the UpdateStream API.

You can assign a user-provided AWS KMS key to a Kinesis video stream in two ways: when creating the stream in the console, specify the AWS KMS key in the Encryption section of the Create new Kinesis video stream page; or when creating the stream using the CreateStream API, specify the key ID in the KmsKeyId parameter.
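
For illustration, assigning a user-provided key at creation time with boto3 might look like the following; the stream name and key ARN are hypothetical.

```python
# Sketch: create a stream encrypted with a user-provided KMS key.
# Stream name and key ARN are hypothetical; omit KmsKeyId to use the
# default key provided by Kinesis Video Streams.
import boto3

kv = boto3.client("kinesisvideo")
kv.create_stream(
    StreamName="my-encrypted-stream",
    DataRetentionInHours=24,
    KmsKeyId="arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```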

Q: How much does it cost to use server-side encryption?

When you apply server-side encryption, you are subject to AWS KMS API usage and key costs. Unlike custom AWS KMS keys, the (Default) aws/kinesisvideo KMS key is offered free of charge. However, you still pay for the API usage costs that Kinesis Video Streams incurs on your behalf. API usage costs apply for every KMS key, including custom ones. Kinesis Video Streams calls AWS KMS approximately every 45 minutes when it is rotating the data key. In a 30-day month, the total cost of AWS KMS API calls initiated by a Kinesis Video Streams stream should be less than a few dollars. This cost scales with the number of user credentials that you use on your data producers and consumers, because each user credential requires a unique API call to AWS KMS.

Pricing and billing

Q: Is Amazon Kinesis Video Streams available in AWS Free Tier?

No. Amazon Kinesis Video Streams is not available in AWS Free Tier.

Q: How much does Kinesis Video Streams cost?
Kinesis Video Streams uses simple pay-as-you-go pricing. There are no upfront costs or minimum fees, and you only pay for the resources you use. Kinesis Video Streams pricing is based on the data volume (GB) ingested, the volume of data consumed (GB), and the data stored (GB-Month) across all the video streams in your account.

Furthermore, Kinesis Video Streams only charges for media data it successfully receives, with a minimum chunk size of 4 KB. For comparison, a 64 kbps audio sample is 8 KB in size, so the minimum chunk size is set low enough to accommodate the smallest of audio or video streams.

Q: How does Kinesis Video Streams bill for data stored in streams?

Kinesis Video Streams charges you for the total amount of data durably stored under any given stream. The total amount of stored data per video stream can be controlled using the retention period, in hours.

Q: How am I charged for using Kinesis Video Streams WebRTC capability?

For the Amazon Kinesis Video Streams WebRTC capability, you are charged based on the number of signaling channels that are active in a given month, the number of signaling messages sent and received, and the TURN streaming minutes used for relaying media. A signaling channel is considered active in a month if at any time during the month a device or an application connects to it. TURN streaming minutes are metered in 1-minute increments. Please see the pricing page for more details.

Service Level Agreement

Q: What does the Amazon Kinesis Video Streams SLA guarantee?

Our Amazon Kinesis Video Streams SLA guarantees a Monthly Uptime Percentage of at least 99.9% for Amazon Kinesis Video Streams.

Q: How do I know if I qualify for an SLA Service Credit?

You are eligible for an SLA credit for Amazon Kinesis Video Streams under the Amazon Kinesis Video Streams SLA if more than one Availability Zone in which you are running a task within the same Region has a Monthly Uptime Percentage of less than 99.9% during any monthly billing cycle.

For full details on all of the terms and conditions of the SLA, as well as details on how to submit a claim, please see the Amazon Kinesis Video Streams SLA details page.
