Guidance for Multimodal Video Analytics of Smart Product Subscription Services on AWS

Video analytics that elevate your smart products

Overview

This Guidance demonstrates how providers offering smart product subscriptions can use large language model (LLM) inferencing to create innovative video analytics services that drive customer value and revenue. By implementing AI-powered video analytics, providers can transform raw video data into meaningful, actionable intelligence that solves specific customer problems. For instance, home camera systems can use AI to detect and alert homeowners about package theft in real time, generate comprehensive summaries of pet behaviors from camera footage, and provide actionable insights that enhance user engagement. By using AI to develop targeted subscription services, this Guidance can help providers increase product utility, improve their customers' experience, and create new revenue streams through intelligent, contextual video analysis that goes beyond traditional monitoring capabilities.

How it works

The following architecture diagram illustrates the key components of this Guidance and how they interact, providing a step-by-step overview of the solution's structure and functionality.

Deploy with confidence

Ready to deploy? Review the sample code on GitHub for detailed deployment instructions, then deploy it as-is or customize it to fit your needs.

Go to sample code

Well-Architected Pillars

The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.

Lambda facilitates seamless deployment and debugging through infrastructure-as-code (IaC) tools like the AWS Cloud Development Kit (AWS CDK). Lambda also offloads the burden of infrastructure management, handling all maintenance, security patches, and monitoring for functions. For example, Lambda works with API Gateway to supply Amazon CloudWatch metrics on workflow elements like video analytics inferencing and message postprocessing. Using these logs and metrics from Lambda functions, your developers can more easily troubleshoot errors and performance bottlenecks.
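As an illustration, the following is a minimal AWS CDK (Python) sketch of this deployment pattern. The stack, function, handler, and asset names are hypothetical and are not taken from this Guidance's sample code.

```python
# Minimal AWS CDK (Python) sketch of deploying the inferencing function behind API Gateway.
# Names and paths are illustrative assumptions, not part of this Guidance's sample code.
from aws_cdk import Duration, Stack
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda
from aws_cdk import aws_logs as logs
from constructs import Construct


class VideoAnalyticsStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda function for video analytics inferencing; its logs flow to CloudWatch automatically.
        inference_fn = _lambda.Function(
            self,
            "InferenceFunction",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="inference.handler",
            code=_lambda.Code.from_asset("lambda"),  # hypothetical local path to handler code
            timeout=Duration.seconds(60),
            log_retention=logs.RetentionDays.ONE_MONTH,
        )

        # API Gateway proxies client requests to the function and emits its own CloudWatch metrics.
        apigw.LambdaRestApi(self, "AnalyticsApi", handler=inference_fn)
```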

Read the Operational Excellence whitepaper 

API Gateway acts as a proxy between the client and backend services, providing a protection layer when clients invoke those backend services through an outbound API. This enables you to control access and implement security measures. Additionally, AWS IoT Core provides secure communication, authentication, authorization, and data protection mechanisms for edge devices. This helps maintain the confidentiality, integrity, and availability of data exchanged between Internet of Things (IoT) devices and AWS, enabling secure and reliable edge computing operations. Finally, you can seamlessly integrate API Gateway and AWS IoT Core with AWS Identity and Access Management (IAM) so that you can control access to resources and data according to the principle of least privilege.
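The sketch below illustrates how least-privilege access might look in AWS CDK (Python), continuing inside the stack from the earlier sketch. It assumes a hypothetical S3 bucket and DynamoDB table (video_bucket, prompt_table) defined elsewhere in the same stack, and the model ID is an example, not a requirement of this Guidance.

```python
# Hedged sketch of least-privilege grants: the inferencing function receives only the
# permissions it needs. Bucket, table, and model identifiers are illustrative assumptions.
from aws_cdk import aws_iam as iam

video_bucket.grant_read(inference_fn)        # read uploaded video objects only
prompt_table.grant_read_data(inference_fn)   # read prompts, no write access

# Allow invoking a single Amazon Bedrock foundation model rather than all models.
inference_fn.add_to_role_policy(
    iam.PolicyStatement(
        actions=["bedrock:InvokeModel"],
        resources=[
            f"arn:aws:bedrock:{self.region}::foundation-model/"
            "anthropic.claude-3-haiku-20240307-v1:0"
        ],
    )
)
```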

Read the Security whitepaper 

This Guidance uses AWS Regional services like Lambda, Amazon Bedrock, Amazon S3, and DynamoDB, which use Availability Zones and static stability to achieve high availability. As managed services, Lambda and Amazon Bedrock provide retry and automatic scaling features. Additionally, Amazon S3 and DynamoDB are designed for high reliability, minimizing the risk of losing videos and prompts in the case of a failure.
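To complement these built-in resilience features, clients can also configure retries when calling the services. The following is a minimal boto3 sketch; the retry values are illustrative assumptions to tune for your workload.

```python
# Minimal sketch of client-side retry configuration alongside the managed services'
# built-in availability features. Values are illustrative, not prescriptive.
import boto3
from botocore.config import Config

retry_config = Config(retries={"max_attempts": 5, "mode": "adaptive"})

bedrock_runtime = boto3.client("bedrock-runtime", config=retry_config)
dynamodb = boto3.client("dynamodb", config=retry_config)
```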

Read the Reliability whitepaper 

Kinesis Video Streams enables IP cameras to directly connect to the cloud, streamlining edge-to-cloud video processing in near real time. Additionally, Amazon Bedrock enables your developers to leverage LLMs for video analysis, using APIs to invoke and run inferences.
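As a sketch of how such an inference call might look, the example below sends a single extracted frame to a multimodal model through the Amazon Bedrock Converse API. The model ID, prompt, and frame source are illustrative assumptions rather than part of this Guidance's sample code.

```python
# Hedged sketch of frame analysis through the Amazon Bedrock Converse API.
# Model ID, prompt, and frame source are illustrative assumptions.
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# e.g., a frame extracted from a Kinesis Video Streams clip and saved locally
with open("frame.jpg", "rb") as f:
    frame_bytes = f.read()

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"image": {"format": "jpeg", "source": {"bytes": frame_bytes}}},
                {"text": "Describe any people, pets, or packages visible in this frame."},
            ],
        }
    ],
)

print(response["output"]["message"]["content"][0]["text"])
```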

Read the Performance Efficiency whitepaper 

Lambda bills per millisecond of resource use, so you can run video ingestion, analytics, and message postprocessing without paying for idle compute. Additionally, Amazon S3 and DynamoDB offer a low total cost of ownership for storing and retrieving prompts, video data, and images. Finally, Amazon Bedrock enables you to use LLMs without the need to host or manage your own LLM servers. It also provides a cost-effective API-based approach for invoking inferences, charging based on the number of input and output tokens used.
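Because pricing is token based, you can roughly estimate per-analysis inference cost ahead of time. The sketch below shows the arithmetic with placeholder prices and token counts; consult the Amazon Bedrock pricing page for actual rates.

```python
# Illustrative back-of-the-envelope estimate of token-based inference cost.
# Prices and token counts below are placeholders, not actual Amazon Bedrock pricing.
PRICE_PER_1K_INPUT_TOKENS = 0.00025   # placeholder USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.00125  # placeholder USD per 1,000 output tokens

input_tokens = 1_500   # e.g., prompt plus one encoded video frame
output_tokens = 300    # e.g., a short event summary

cost = (
    (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS
    + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
)
print(f"Estimated cost per analysis: ${cost:.6f}")
```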

Read the Cost Optimization whitepaper 

This Guidance invokes services like Lambda, API Gateway, and Amazon Bedrock only when there is a user query, minimizing resource overprovisioning. As serverless services, Lambda and API Gateway consume only the resources and energy required to support the workload. Additionally, as a managed service, Amazon Bedrock removes the need for you to host dedicated servers for LLMs. As a result, you can avoid the significant energy consumption associated with running such resource-intensive models.

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.