Amazon Kinesis Video Streams now supports adding and retrieving fragment-level metadata

Posted on: Oct 4, 2018

Amazon Kinesis Video Streams now enables you to easily add metadata to, and retrieve it from, individual fragments in a Kinesis video stream so you can build richer applications in the AWS cloud. For example, you can now send GPS values as metadata with each video fragment from an on-person camera, or send temperature values with video fragments from a baby monitor, and then use both the metadata and the video fragments in your consuming applications to create richer user experiences.

A fragment represents a segment of video, audio, or other time-encoded data. Metadata in Kinesis Video Streams is a mutable key-value pair that you can use to describe the content of a fragment, embed associated sensor readings, or carry any other custom data that needs to travel with the fragment itself.

Fragment-level metadata gives you granular control over passing and processing additional information with each video fragment. The metadata is stored alongside the video fragment for the entire duration of the stream's retention period. You can use it to embed GPS or temperature sensor values in video fragments that your consuming application then uses to create meaningful correlations, or to mark exactly those video fragments in which your edge device, such as a camera, detected motion. Your deep-learning cloud application can then use this metadata to inform the next stage of processing, such as identifying faces or objects.
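
As a producer-side illustration, the sketch below tags the fragment currently being produced with a GPS reading. It assumes the Java Producer SDK exposes a putFragmentMetadata(name, value, persistent) method on its stream handle (KinesisVideoProducerStream); treat the class and method names as assumptions to verify against the SDK documentation rather than the definitive API.

```java
import com.amazonaws.kinesisvideo.producer.KinesisVideoProducerStream;

public class GpsFragmentTagger {

    // Attach the latest GPS fix as fragment-level metadata.
    // Assumption: the Java Producer SDK exposes putFragmentMetadata(name, value, persistent);
    // persistent = false is intended to apply the tag to the current fragment only,
    // while persistent = true would repeat it on subsequent fragments until canceled.
    public static void tagCurrentFragment(KinesisVideoProducerStream stream,
                                          double latitude,
                                          double longitude) throws Exception {
        stream.putFragmentMetadata("gps_latitude", Double.toString(latitude), false);
        stream.putFragmentMetadata("gps_longitude", Double.toString(longitude), false);
    }
}
```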

Your stream-producing application or device can use the Kinesis Video Streams Producer SDK to add metadata to a video fragment. Your consuming application can use the Kinesis Video Streams Parser Library to easily retrieve the metadata for each fragment via the GetMedia or GetMediaForFragmentList API operations for further processing. To learn more, please see the developer documentation.
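
On the consumer side, a minimal sketch using the Kinesis Video Streams Parser Library might look like the following. It assumes the payload InputStream has already been obtained from a GetMedia or GetMediaForFragmentList response; the FragmentMetadataVisitor.create(Optional.of(...)) factory, BasicMkvTagProcessor helper, and tag getter names are assumptions based on the Parser Library's helpers and should be checked against its documentation.

```java
import java.io.InputStream;
import java.util.Optional;

import com.amazonaws.kinesisvideo.parser.ebml.InputStreamParserByteSource;
import com.amazonaws.kinesisvideo.parser.mkv.MkvElementVisitException;
import com.amazonaws.kinesisvideo.parser.mkv.StreamingMkvReader;
import com.amazonaws.kinesisvideo.parser.utilities.FragmentMetadataVisitor;

public class FragmentMetadataReader {

    // Walks the MKV payload returned by GetMedia or GetMediaForFragmentList and
    // prints every fragment-level metadata tag encountered along the way.
    public static void printFragmentMetadata(InputStream getMediaPayload)
            throws MkvElementVisitException {
        // BasicMkvTagProcessor accumulates the metadata key-value pairs found in each fragment.
        FragmentMetadataVisitor.BasicMkvTagProcessor tagProcessor =
                new FragmentMetadataVisitor.BasicMkvTagProcessor();
        FragmentMetadataVisitor metadataVisitor =
                FragmentMetadataVisitor.create(Optional.of(tagProcessor));

        // Parse the payload as streaming MKV; the visitor is invoked for each element.
        StreamingMkvReader.createDefault(new InputStreamParserByteSource(getMediaPayload))
                .apply(metadataVisitor);

        tagProcessor.getTags().forEach(tag ->
                System.out.println(tag.getTagName() + " = " + tag.getTagValue()));
    }
}
```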

Refer to the AWS global region table for Amazon Kinesis Video Streams availability.