AWS for M&E Blog

Recording mobile video to Amazon S3 using Amazon Kinesis Video Streams

Maintaining reliable video capture and storage when a recording device fails or loses connectivity remains a challenge. In this post, we demonstrate how to build a resilient video streaming solution that uses Amazon Kinesis Video Streams to automatically back up video to Amazon Simple Storage Service (Amazon S3), so footage is not lost even when one of your capture devices (such as IP cameras, mobile devices, or Internet of Things (IoT) sensors) fails.

Amazon Kinesis Video Streams provides a fully managed service to securely stream video from connected devices to AWS. The solution offers resilience through two key capabilities:

  1. Live video data is continuously transmitted to Amazon Web Services (AWS). Your footage remains safe in Amazon S3 (with 11 nines of durability), with your content preserved up to the last successful transmission.
  2. Multiple recording devices can stream simultaneously. When one device stops working, the other devices continue capturing without interruption.

We will show how to implement this video streaming solution using AWS serverless services, demonstrating how to securely stream and store video from mobile devices to Amazon S3.

We will use the AWS Software Development Kit (AWS SDK) for JavaScript to integrate Kinesis Video Streams into a mobile app, then manage the video processing pipeline using AWS Lambda, a serverless compute service for running code without having to provision or manage servers. We will also use AWS Step Functions to orchestrate those serverless workloads. The resulting streams are stored in Amazon S3 for durable long-term storage or further processing. This streamlined approach forms the backbone for more advanced use cases, enabling scaling as needs evolve.

Prerequisites

Before starting, make certain that you have the following:

  1. An AWS Account.
  2. Familiarity with AWS.
  3. Permissions to deploy the resources used in this solution in the AWS account.
  4. Familiarity with developing and deploying Android applications.
  5. An Android mobile device or access to AWS Device Farm.

Architecture overview

This solution enables near real-time video backup and storage, providing a robust, serverless architecture that keeps your video content safely preserved, even in case of device failure. For scenarios where re-recording isn’t possible (such as live events, one-time performances, or critical security footage) we recommend using multiple recording devices simultaneously.

While we demonstrate using an Android mobile device, the solution works with any combination of webcams, security cameras, laptops, or devices that can run an application leveraging the AWS SDK.

This multi-device approach, combined with near real-time backup to Amazon S3, provides two layers of protection:

  1. If a security camera loses power, or a mobile phone is damaged during recording, all footage up until the moment of failure is already secured in Amazon S3.
  2. Other active devices can continue recording without interruption.

This redundancy is particularly valuable when capturing irreplaceable moments where a second take isn’t an option.

Following is how the solution works:

  • Video ingestion: An Android application streams video directly to Kinesis Video Streams.
  • Intelligent processing: The solution uses a serverless architecture combining:
    • Amazon CloudWatch metric alarms to monitor streaming activity.
    • AWS Step Functions to orchestrate the workflow with built-in error handling and automatic retries.
    • AWS Lambda functions to process video segments without needing to manage any infrastructure.
  • Durable storage: Processed video clips are automatically archived to Amazon S3, providing highly durable, cost-effective storage that’s accessible for future use.

The serverless design eliminates infrastructure management overhead and automatically scales with your needs—whether you’re streaming from one device or many. AWS Step Functions provides visual monitoring of the entire workflow and handles any processing failures.

The orchestration of the Lambda functions is handled by Step Functions and a CloudWatch metric alarm that triggers the workflow. The CloudWatch alarm monitors streaming metrics (such as PutMedia.IncomingFragments) and, once there is a positive reading, kicks off the Step Functions workflow. The workflow checks whether the alarm has the necessary tags, checks the current state of the alarm, and runs a Lambda function to capture video clips every three minutes. This interval is configurable during deployment.

The logic for checking CloudWatch alarms periodically in the Step Functions workflow is based on an earlier blog post, How to enable Amazon CloudWatch Alarms to send repeated notifications.
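The CloudFormation template in this solution provisions the alarm for you, but as a point of reference, the following is a minimal Boto3 sketch of how an equivalent alarm could be created and tagged. The alarm and stream names are illustrative placeholders rather than the exact resources deployed by the template.

import boto3

cloudwatch = boto3.client("cloudwatch")

# Illustrative names; the CloudFormation template creates its own resources.
ALARM_NAME = "put-stream-coming"
STREAM_NAME = "my-kvs-stream"

# Alarm on the stream's incoming fragments: more than 10 fragments summed
# over a 60-second period indicates active streaming.
cloudwatch.put_metric_alarm(
    AlarmName=ALARM_NAME,
    Namespace="AWS/KinesisVideo",
    MetricName="PutMedia.IncomingFragments",
    Dimensions=[{"Name": "StreamName", "Value": STREAM_NAME}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)

# Tag the alarm so the Step Functions workflow can recognize it as the
# desired alarm for clip extraction.
alarm_arn = cloudwatch.describe_alarms(AlarmNames=[ALARM_NAME])["MetricAlarms"][0]["AlarmArn"]
cloudwatch.tag_resource(
    ResourceARN=alarm_arn,
    Tags=[{"Key": "RepeatedAlarm", "Value": "True"}],
)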

A cloud architecture diagram showing a mobile device on the left connecting to Amazon Kinesis Video Streams (orange square icon), which branches into two paths: one downward to Amazon CloudWatch Alarm (pink bar chart with bell icon) and one rightward into a pink-bordered box labeled "AWS Step Functions workflow" containing three orange AWS Lambda icons arranged vertically and horizontally with labels for uploading clips to an S3 bucket, checking tags, and checking alarm state. The CloudWatch Alarm connects to Amazon EventBridge (pink square icon) which triggers the Step Functions workflow, and the workflow ultimately connects to an S3 bucket (green icon) on the right side of the diagram. All components are enclosed within a box labeled "AWS Cloud" at the top, with arrows indicating data flow direction between services.

Figure 1: High-level architecture.

As shown in Figure 1, this process involves the following steps:

  1. A mobile app (Android in this case) streams video footage to Kinesis Video Streams through the Kinesis Video Streams Producer SDK.
  2. A CloudWatch metric alarm monitors the PutMedia.IncomingFragments metric of Kinesis Video Streams (where fragments are the native video chunks of Kinesis Video Streams, distinct from streaming protocols like HLS). The alarm activates when it detects more than 10 fragments within a 60-second period, indicating active streaming.
  3. Once the threshold is breached and the CloudWatch alarm enters the ALARM state, an Amazon EventBridge rule is triggered. EventBridge, in turn, activates the Step Functions workflow and passes the JSON event from CloudWatch as input to the state machine.
  4. The Step Functions workflow follows the logic shown in Figure 2.
A vertical flowchart depicting an AWS Step Functions workflow that begins with a beige circle labeled "Start" at the top and ends with a beige circle labeled "End" at the bottom. The workflow contains several rounded rectangular boxes connected by arrows, starting with an orange AWS Lambda icon for "Check Alarm Tags", followed by a blue diamond choice state icon asking "Desired Alarm?" with output condition "$.output == 'RepeatedAlarm'", then a blue clock icon for a "Wait" state, another orange Lambda function to "Check Alarm State" with two possible paths labeled "Default" and "still in alarm", followed by a third orange Lambda function to "Upload Video Clip", and a blue diamond choice state asking "Stream Incoming?" with a "Default" path. The workflow includes a loop where the "still in alarm" path curves back to the Wait state, and both the "Default" paths from different choice states eventually converge to a blue arrow icon labeled "Pass" state before reaching the End circle. A more in-depth description of the flow's actions follows in the blog body.

Figure 2: AWS Step Functions logic map.

    • AWS Step Functions checks whether the notifying CloudWatch alarm is the desired alarm from which to extract clips. To implement this logic, the CloudWatch metric alarm for the desired video stream is tagged with the key-value pair: {RepeatedAlarm: True}
    • If the alarm does not contain the necessary tag, the event is discarded. If it is the desired alarm, the clip extraction process kicks off.
    • The workflow first waits for a specified amount of time, which corresponds to the length of each clip stored in Amazon S3. For this demonstration, the interval is set to three minutes; however, it is configurable.
    • After three minutes, a Lambda function is triggered to check the status of the alarm, determining whether video footage is still coming into Kinesis Video Streams.
    • After the check, a Lambda function is triggered to upload video clips to Amazon S3. This function gathers the desired time markers from the event data and extracts the clip, storing it in the S3 bucket in a partitioned manner.
    • If the CloudWatch alarm is still in the ALARM state, the workflow loops through the wait, alarm-check, and upload steps again, uploading another three-minute clip. If the CloudWatch alarm is in an OK state, the stream has stopped and the workflow terminates. Any clip that starts or ends within the waiting period is still captured as a shorter, partial clip.
  5. A Lambda function uses a GetClip API call to Kinesis Video Streams to extract the video footage between set timestamps (in epoch time) and uses the AWS SDK for Python (Boto3) to upload the clip to the S3 bucket.
  6. Another Lambda function uses the DescribeAlarms API call to CloudWatch to fetch the current alarm state and decide whether to run another loop of the Step Functions workflow or terminate the flow. A minimal sketch of the tag and alarm-state checks follows this list.
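The tag and alarm-state checks described above each reduce to a single CloudWatch API call. The following is a minimal Boto3 sketch of both checks; the function names are illustrative and do not necessarily match the Lambda functions deployed by the template.

import boto3

cloudwatch = boto3.client("cloudwatch")


def is_repeated_alarm(alarm_arn):
    """Return True if the alarm carries the RepeatedAlarm=True tag."""
    tags = cloudwatch.list_tags_for_resource(ResourceARN=alarm_arn)["Tags"]
    return any(t["Key"] == "RepeatedAlarm" and t["Value"] == "True" for t in tags)


def is_still_in_alarm(alarm_name):
    """Return True if the alarm is currently in the ALARM state."""
    alarms = cloudwatch.describe_alarms(AlarmNames=[alarm_name])["MetricAlarms"]
    return bool(alarms) and alarms[0]["StateValue"] == "ALARM"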

Launching the solution

Navigate to the GitHub sample repo Recording resilient mobile video streams to Amazon S3 using Amazon Kinesis Video Streams and follow the deployment steps there.

Currently, the Android app used for testing is supported only in the N. Virginia (us-east-1) Region. However, the rest of the solution can be deployed in any Region that supports the full set of services used.

After deployment, navigate to the Outputs section of the CloudFormation console, which will display the resources created by the template for quick access (Figure 3).

Screenshot of the AWS CloudFormation console showing the "Outputs" tab of the deployed stack. The image displays a list of key resources created by the template, including links to the Kinesis Video Stream, Lambda functions, and Step Functions workflow, providing quick access to manage these components. There is a search feature at the top of the section with columns for Key, Value, and Description underneath it. The Key identifies the resource, the Value is the link or location, and the resource's description appears in the last column.

Figure 3: CloudFormation outputs tab.

Mobile app setup (optional)

The solution’s architecture is now deployed in your AWS account. Next you need to set up your mobile app to connect to the deployed architecture.

Note: If you already have Kinesis Video Streams producers that can stream to the newly deployed Kinesis Video Streams stream for testing, you do not need to install the Android mobile app. Steps 1-4 that follow walk through how to configure the app. You can skip to the Using the solution section if you already have a device that can stream to Kinesis Video Streams.

Following are the steps for setting up the mobile app:

  1. Install the app from the GitHub repository and follow the instructions to set up and configure the app to use your Cognito User Pool.
    • Note: We do not cover how to install and set up the mobile app; instead, follow the guide in the AWS Samples GitHub repository listed in the prerequisites.
    • If you selected CognitoCreation as True, navigate to the Amazon Cognito console to retrieve the required details.
  2. Upon successful configuration, launch the app through Android Studio by connecting your device to it.
  3. The first time you run the app, you will be asked to log in or create an account.
  4. Select streaming configurations (such as using the back or front camera, resolution, stream name, and so on). Choose Stream when done, as shown in Figure 4.
    • Note that the stream name in the app should match the stream name deployed by the CloudFormation template, which can be found in the Outputs section of the CloudFormation console.
Screenshot of a mobile application interface showing the video streaming configuration screen. The image displays settings for camera selection, resolution options, and a prominent "Stream" button that users tap to begin transmitting video to Kinesis Video Streams.

Figure 4: Screenshot from within the app to start the stream.

Using the solution

Now that the solution is deployed and a producer is streaming to Kinesis Video Streams, let’s take a look at the archival flow.

Following is the flow through the solution:

  1. Navigate to the CloudWatch console and select All alarms from the left side menu to find the alarm that the CloudFormation template created for this demonstration. The CloudWatch alarm will transition to an In alarm state when your mobile app begins streaming, based on the PutMedia.IncomingFragments metric. To verify the alarm state:
    • Check the Metric Alarm field
    • View the alarm timeline infographic (shown in Figure 5)
  2. After you start streaming, confirm that the alarm transitions to the In alarm state. This transition can take a brief moment, and you will need to refresh the dashboard to see the updated status.
A screenshot of the AWS CloudWatch console displaying a metric alarm named "put-stream-coming-2" which is currently in an alarm state, indicated by a red warning triangle. The main focus is a graph showing "PutMedia.IncomingFragments" metric that displays a count measurement over time from 17:25 to 18:20, where the line remains flat near zero until approximately 18:10 when it sharply spikes upward to around 45, creating a dramatic vertical rise on the right side of the chart. Below the line graph is a colored status bar that shows a long green section (OK state) from 17:25 until around 18:10, followed by a red section (In alarm state) that corresponds with the spike in the metric, with a legend indicating red means "In alarm", green means "OK", gray means "Insufficient data", and blue means "Disabled actions". The alarm is highlighted with a blue border in the alarms list on the left side of the console, distinguishing it from other alarms like "put-stream-coming" which shows an OK status.

Figure 5: Configured In alarm state CloudWatch alarm.

  3. After the CloudWatch alarm goes into the In alarm state, the CloudWatchAlarmProcessor1 Lambda function is triggered. It checks whether the CloudWatch alarm is a repeated alarm by looking for a specific tag on the alarm, and returns the result along with the original event data.
  4. A second Lambda function, CloudWatchAlarmProcessor2, is then triggered to check the state of the alarm. It validates whether the alarm is still active in CloudWatch and dictates whether the Step Functions workflow should terminate or run another three-minute loop.
  5. The KinesisVideoProcessor Lambda function then runs to upload the clip to Amazon S3, using the time interval since the last checkpoint to determine which portion of the stream to store. A runnable sketch of this function's logic (using Boto3) follows.
import os
import random
import string
from datetime import datetime, timedelta, timezone

import boto3

# Name of the S3 bucket deployed by the CloudFormation template,
# supplied here through an environment variable.
BUCKET_NAME = os.environ["BUCKET_NAME"]


def lambda_handler(event, context):
    # 1. Extract input parameters passed along by the Step Functions workflow
    output = event["output"]
    cw_event = event["event"]
    # The stream name travels in the CloudWatch alarm event details; the exact
    # path depends on the alarm event passed through the workflow.
    stream_name = cw_event["detail"]["configuration"]["metrics"][0][
        "metricStat"]["metric"]["dimensions"]["StreamName"]

    # 2. Get the Kinesis Video Streams endpoint that serves GetClip requests
    kvs = boto3.client("kinesisvideo")
    endpoint = kvs.get_data_endpoint(
        StreamName=stream_name, APIName="GET_CLIP")["DataEndpoint"]

    # 3. Calculate the time window for the clip (three minutes from the event time)
    start_time = datetime.fromisoformat(cw_event["time"].replace("Z", "+00:00"))
    end_time = start_time + timedelta(seconds=180)

    # 4. Retrieve the video clip from the stream
    media_client = boto3.client(
        "kinesis-video-archived-media", endpoint_url=endpoint)
    clip = media_client.get_clip(
        StreamName=stream_name,
        ClipFragmentSelector={
            "FragmentSelectorType": "PRODUCER_TIMESTAMP",
            "TimestampRange": {
                "StartTimestamp": start_time,
                "EndTimestamp": end_time,
            },
        },
    )

    # 5. Generate a partitioned S3 key with a timestamp and a random suffix
    now = datetime.now(timezone.utc)
    random_string = "".join(random.choices(string.ascii_lowercase, k=4))
    s3_key = (
        f"stream={stream_name}/year={now:%Y}/month={now:%m}/day={now:%d}/"
        f"clip-{stream_name}-{now:%Y-%m-%dT%H-%M-%S}-{random_string}.mp4"
    )

    # 6. Upload the clip payload to S3
    s3 = boto3.client("s3")
    s3.upload_fileobj(clip["Payload"], BUCKET_NAME, s3_key)

    # 7. Return the updated event for the next state in the workflow
    return {"output": output, "event": cw_event}
  6. Finally, after every loop interval (three minutes for this demonstration), the KinesisVideoProcessor Lambda function uploads the video from the previous checkpoint to the next one and stores it in Amazon S3 (as shown in Figure 6; a sample test payload for invoking this function manually follows the figure).
Screenshot of the S3 bucket showing stored video clips organized in a bucket. The image displays how clips are chronologically stored with timestamps in a hierarchical folder structure for quick retrieval.

Figure 6: Clips stored in S3 bucket.
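If you want to exercise the clip-upload logic outside of the deployed workflow, the KinesisVideoProcessor sketch shown earlier can be invoked with a hand-built payload. The shape below is an assumption based on the pseudocode's inputs rather than the exact event produced by the state machine, and the stream name is a placeholder.

from datetime import datetime, timedelta, timezone

# Start the clip window three minutes in the past so the requested range has
# already been recorded by an active stream.
event_time = (datetime.now(timezone.utc) - timedelta(seconds=180)).strftime(
    "%Y-%m-%dT%H:%M:%SZ")

test_event = {
    "output": "RepeatedAlarm",
    "event": {
        "time": event_time,
        "detail": {
            "configuration": {
                "metrics": [{
                    "metricStat": {
                        "metric": {"dimensions": {"StreamName": "my-kvs-stream"}}
                    }
                }]
            }
        },
    },
}

# lambda_handler(test_event, None)  # requires AWS credentials and an active stream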

Cleanup

Follow these steps to clean up:

  1. Navigate to the Amazon S3 console
    • Locate the S3 bucket used to store the videos and select the radio button next to its name.
    • Once the desired bucket is selected, select the Empty button on the menu bar and confirm the action.
  2. Navigate to the CloudFormation console
    • Locate the CloudFormation stack used to launch this solution and select the radio button next to its name.
    • Choose the Delete button and confirm the action.

Real-time analysis considerations

This solution is tailored toward archival use cases and long-term storage of content streamed to Kinesis Video Streams. Currently, video segments are uploaded to Amazon S3 at a fixed frequency (every three minutes by default, adjustable through a CloudFormation parameter). This threshold can be lowered for more frequent uploads to Amazon S3, providing a near real-time solution.

Alternatively, to get closer to real time, Kinesis Video Streams notifications can be used to kick off the Step Functions workflow, or to call any downstream service needed for analysis directly.
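As a sketch of that alternative (an assumption about one way to wire it up, not part of the deployed solution), Kinesis Video Streams notifications can be enabled on the stream so that fragment events are published to an Amazon SNS topic, which can then start the workflow or another downstream consumer. The stream name and topic ARN below are placeholders.

import boto3

kvs = boto3.client("kinesisvideo")

# Publish fragment notifications to an SNS topic wired to your downstream
# workflow; substitute the stream deployed by the template and your topic ARN.
kvs.update_notification_configuration(
    StreamName="my-kvs-stream",
    NotificationConfiguration={
        "Status": "ENABLED",
        "DestinationConfig": {
            "Uri": "arn:aws:sns:us-east-1:123456789012:kvs-fragment-notifications"
        },
    },
)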

Scaling and cost considerations

Different devices or producers can stream to the same Kinesis Video Streams stream only with non-overlapping timestamps, as the PutMedia API allows only one simultaneous input. However, the Step Functions configuration deployed in this solution can be used to archive media from multiple Kinesis Video Streams streams at the same time. The concurrent execution limits that apply to both Step Functions and AWS Lambda do need to be taken into consideration.

The solution’s architecture uses a serverless approach for the processing layer, so you pay only for what you use. Increasing the number of simultaneous streams to be archived increases the total runtime of the Lambda functions, as more functions run concurrently.

The increase in cost is proportional to the number of streams. You do not need to adjust the solution as long as concurrency stays below the default service limits. Storage costs are based on the amount of data (per GB) stored in your S3 buckets; whether this is a single large bucket or multiple buckets does not affect the cost structure.

Note: The template will deploy the following services and will incur charges based on each service’s pricing:

  1. Amazon Kinesis Video Streams
  2. AWS Lambda
  3. AWS Step Functions
  4. Amazon S3
  5. Amazon EventBridge
  6. Amazon CloudWatch alarm

Conclusion

We walked through how to live stream video from a mobile device to Amazon S3 using Amazon Kinesis Video Streams, AWS Lambda, and AWS Step Functions. This serverless and scalable architecture ingests and stores live video streams efficiently for applications like video archiving, security monitoring, or content post-processing.

Looking to the future, consider how this foundational setup can be expanded to meet evolving media and entertainment needs. As video content continues to surge, optimizing for scalability and cost-effectiveness becomes crucial. Embracing serverless architectures lets you handle varying workloads without the overhead of managing infrastructure. Leveraging the extensive suite of AWS media services positions you to rapidly innovate and respond to emerging trends in live streaming and content delivery.

We encourage you to experiment with this solution and explore how it can be tailored to your specific use cases. Visit the AWS Media & Entertainment Blog channel for support on solutions like this one, or check out AWS re:Post.

Contact an AWS Representative to learn how we can help accelerate your business.

Ali Maga

Ali Maga is a Solution Architect at AWS. He works with customers to understand their business needs and challenges. He helps tailor technology-based solutions to unlock their full potential, with a focus on generative AI in the energy industry.

Archit Soni

Archit Soni is a Solutions Architect at AWS and a Media & Entertainment Specialist who spends his time working with customers to help them on their cloud journey.

Dylan Souvage

Dylan Souvage is a Partner Solutions Architect at AWS. Dylan loves working with customers to understand their business needs and help them in their cloud journey.