Video analytics that elevate your smart products
This Guidance demonstrates how providers offering smart product subscriptions can use large language model (LLM) inferencing to create innovative video analytics services that drive customer value and revenue. By implementing AI-powered video analytics, providers can transform raw video data into meaningful, actionable intelligence that solves specific customer problems. For instance, home camera systems can use AI to detect and alert homeowners about package theft in real time, generate comprehensive summaries of pet behaviors from camera footage, and provide actionable insights that enhance user engagement. By using AI to develop targeted subscription services, this Guidance can help providers increase product utility, improve the customer experience, and create new revenue streams through intelligent, contextual video analysis that goes beyond traditional monitoring capabilities.
Architecture Diagram
Step 1
Ingest data, edit prompts, perform analytics, and set postprocessing actions on a website hosted on AWS Amplify.
Step 2
The website passes the request to and receives a response from Amazon API Gateway.
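For example, a client could call the deployed API with a request like the following minimal Python sketch. The endpoint URL, resource path, API key, and payload fields are illustrative assumptions rather than part of this Guidance, and would be replaced by the values from your own deployment.

```python
import requests

# Hypothetical API Gateway endpoint and payload shape; adjust to your deployed API.
API_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/analyze"

payload = {
    "streamName": "front-door-camera",  # Kinesis Video Streams stream to analyze
    "prompt": "Was a package delivered or removed in this clip?",
    "postprocessing": "Text me if a package is removed.",
}

# The website from Step 1 issues this same kind of request; an API key or an
# IAM/Cognito authorizer would normally protect the endpoint.
response = requests.post(
    API_URL,
    json=payload,
    headers={"x-api-key": "YOUR_API_KEY"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```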
Step 3
API Gateway directs the request to the video-streaming-and-upload component, which ingests video data from a smart camera (through Amazon Kinesis Video Streams) or image data (through AWS IoT Core). Use AWS IoT Greengrass to manage and deploy machine learning models to edge devices.
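As a rough sketch of the video path, the following Python code pulls a live media fragment from a Kinesis Video Streams stream with boto3. The stream name is a placeholder, and a real pipeline would still need to extract individual frames (for example, with OpenCV or ffmpeg) before analysis.

```python
import boto3

STREAM_NAME = "front-door-camera"  # assumed stream name

# Kinesis Video Streams hands out a per-stream data endpoint first.
kvs = boto3.client("kinesisvideo")
endpoint = kvs.get_data_endpoint(StreamName=STREAM_NAME, APIName="GET_MEDIA")["DataEndpoint"]

# Read the live media (MKV fragments) starting from the current producer position.
media = boto3.client("kinesis-video-media", endpoint_url=endpoint)
stream = media.get_media(
    StreamName=STREAM_NAME,
    StartSelector={"StartSelectorType": "NOW"},
)

# Grab a chunk of the payload; frames would be demuxed from it before being sent
# to the visual analytics component in Step 4.
chunk = stream["Payload"].read(1024 * 1024)
print(f"Received {len(chunk)} bytes from {STREAM_NAME}")
```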
Step 4
API Gateway forwards the analysis request, which includes video frames and prompts, to the visual analytics component. This component, equipped with an AWS Lambda function and a model library, processes the request and returns the result from the language model to API Gateway. The model library includes the foundation models on Amazon Bedrock and an open-source model hosted on Amazon SageMaker.
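A minimal sketch of the visual analytics Lambda function might look like the following. It assumes the event carries a base64-encoded JPEG frame and a prompt, and it uses the Amazon Bedrock Converse API with an assumed vision-capable model ID; the Guidance code may structure this component differently or route the request to a SageMaker-hosted model instead.

```python
import base64
import boto3

bedrock = boto3.client("bedrock-runtime")

# Assumed model ID; any vision-capable Bedrock model from the model library could be used.
MODEL_ID = "anthropic.claude-3-haiku-20240307-v1:0"

def handler(event, context):
    """Hypothetical handler for the visual analytics component (event shape assumed)."""
    frame_bytes = base64.b64decode(event["frame_base64"])  # JPEG frame from Step 3
    prompt = event.get("prompt", "Describe what is happening in this frame.")

    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{
            "role": "user",
            "content": [
                {"image": {"format": "jpeg", "source": {"bytes": frame_bytes}}},
                {"text": prompt},
            ],
        }],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )

    # Return the model's text back to API Gateway.
    return {"statusCode": 200, "body": response["output"]["message"]["content"][0]["text"]}
```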
Step 5
If you specify a postprocessing action through natural language input, the LLM agent, implemented with an Amazon Bedrock agent or a Lambda function, carries it out through capabilities hosted in Lambda functions. Examples include sending SMS messages to mobile clients or notifications to edge devices.
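One such capability could be a small Lambda action handler that sends an SMS through Amazon SNS, as in this sketch; the event fields and phone number are placeholders for illustration.

```python
import boto3

sns = boto3.client("sns")

def handler(event, context):
    """Hypothetical action handler invoked by the agent (event shape assumed)."""
    phone_number = event.get("phoneNumber", "+15555550123")  # placeholder number
    message = event.get("message", "Alert: a package was removed from your doorstep.")

    # Direct SMS publish; a topic ARN could be used instead to fan out to subscribers.
    sns.publish(PhoneNumber=phone_number, Message=message)
    return {"status": "sent"}
```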
Step 6
You can store the videos in Amazon Simple Storage Service (Amazon S3). You can also store video metadata and fine-tuned prompts in Amazon DynamoDB. Alternatively, the prompts can be managed with Amazon Bedrock Prompt Management.
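The following sketch shows one way to persist a clip and its associated metadata and prompt; the bucket, table, and attribute names are assumptions for illustration.

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("VideoAnalytics")  # assumed table name

BUCKET = "smart-camera-clips"  # assumed bucket name

def store_clip(video_path: str, video_id: str, prompt: str, summary: str) -> None:
    """Upload the clip to S3 and record its metadata, prompt, and model summary in DynamoDB."""
    key = f"clips/{video_id}.mp4"
    s3.upload_file(video_path, BUCKET, key)

    table.put_item(Item={
        "videoId": video_id,  # assumed partition key
        "s3Key": key,
        "prompt": prompt,
        "summary": summary,
        "createdAt": datetime.now(timezone.utc).isoformat(),
    })
```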
Step 7
You can optionally save intermediate results of video analysis to Amazon OpenSearch Service through a Lambda function. Then, on the website, you can use an LLM through Amazon Bedrock to conduct question-and-answer sessions based on the video content.
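As an illustration, a Lambda function could index an analysis result into OpenSearch Service with the opensearch-py client and SigV4 signing, as in this sketch; the domain endpoint, index name, and document fields are assumptions. The indexed documents can later be retrieved and passed to a Bedrock model as context for the question-and-answer sessions.

```python
import boto3
from opensearchpy import AWSV4SignerAuth, OpenSearch, RequestsHttpConnection

REGION = "us-east-1"
HOST = "my-domain.us-east-1.es.amazonaws.com"  # assumed OpenSearch Service endpoint

# Sign requests with the Lambda execution role's credentials.
credentials = boto3.Session().get_credentials()
auth = AWSV4SignerAuth(credentials, REGION, "es")

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

# Index an intermediate analysis result so the website can run Q&A over it later.
client.index(
    index="video-analysis",  # assumed index name
    id="front-door-camera-2024-06-01T12:00:00Z",
    body={
        "streamName": "front-door-camera",
        "timestamp": "2024-06-01T12:00:00Z",
        "summary": "A courier left a package at the front door at 12:00.",
    },
)
```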
Get Started
Deploy this Guidance
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
Lambda facilitates seamless deployment and debugging through infrastructure-as-code (IaC) tools like the AWS Cloud Development Kit (AWS CDK). Lambda also offloads the burden of infrastructure management, handling all maintenance, security patches, and monitoring for functions. For example, it works with API Gateway to supply Amazon CloudWatch metrics on workflow elements like video analytics inferencing and message postprocessing. Using these logs and metrics from Lambda functions, your developers can more easily troubleshoot errors and performance bottlenecks.
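For instance, a minimal AWS CDK (Python) stack could define the visual analytics Lambda function behind a REST API; the construct names, runtime, and handler path below are illustrative assumptions rather than this Guidance's actual IaC.

```python
from aws_cdk import Duration, Stack
from aws_cdk import aws_apigateway as apigw
from aws_cdk import aws_lambda as _lambda
from constructs import Construct

class VideoAnalyticsStack(Stack):
    """Hypothetical stack wiring the visual analytics Lambda function behind API Gateway."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda function for the visual analytics component (handler path assumed).
        analytics_fn = _lambda.Function(
            self, "VisualAnalyticsFn",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="analytics.handler",
            code=_lambda.Code.from_asset("lambda"),
            timeout=Duration.seconds(60),
        )

        # REST API that proxies analysis requests to the function, which also surfaces
        # CloudWatch metrics and logs for each stage of the workflow.
        apigw.LambdaRestApi(self, "VideoAnalyticsApi", handler=analytics_fn)
```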
Security
API Gateway acts as a proxy between the client and backend services, adding a layer of protection when those services are invoked through an external-facing API. This enables you to control access and implement security measures. Additionally, AWS IoT Core provides secure communication, authentication, authorization, and data protection mechanisms for edge devices. This helps maintain the confidentiality, integrity, and availability of data exchanged between Internet of Things (IoT) devices and AWS, enabling secure and reliable edge computing operations. Finally, you can seamlessly integrate API Gateway and AWS IoT Core with AWS Identity and Access Management (IAM) so that you can control access to resources and data according to the principle of least privilege.
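As an illustration of least privilege, the following sketch attaches an inline policy to the analytics function's execution role that allows invoking only one Bedrock model and reading only one S3 prefix; the role name, model ARN, and bucket are placeholders.

```python
import json

import boto3

iam = boto3.client("iam")

# Least-privilege inline policy for the visual analytics function's execution role.
# Role name, model ARN, and bucket name are placeholders for illustration.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::smart-camera-clips/clips/*",
        },
    ],
}

iam.put_role_policy(
    RoleName="visual-analytics-lambda-role",
    PolicyName="LeastPrivilegeVideoAnalytics",
    PolicyDocument=json.dumps(policy),
)
```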
Reliability
This Guidance uses AWS Regional services like Lambda, Amazon Bedrock, Amazon S3, and DynamoDB, which use Availability Zones and static stability to achieve high availability. As managed services, Lambda and Amazon Bedrock provide retry and automatic scaling features. Additionally, Amazon S3 and DynamoDB are designed for high reliability, minimizing the risk of losing videos and prompts in the case of a failure.
Performance Efficiency
Kinesis Video Streams enables IP cameras to directly connect to the cloud, streamlining edge-to-cloud video processing in near real time. Additionally, Amazon Bedrock enables your developers to leverage LLMs for video analysis, using APIs to invoke and run inferences.
Cost Optimization
Lambda bills per millisecond of compute time, so you can run video ingestion, analytics, and message postprocessing without paying for idle compute. Additionally, Amazon S3 and DynamoDB offer a low total cost of ownership for storing and retrieving prompts, video data, and images. Finally, Amazon Bedrock enables you to use LLMs without hosting or managing your own LLM servers. It also provides a cost-effective API-based approach for invoking inference, charging based on the number of input and output tokens used.
Sustainability
This Guidance invokes services like Lambda, API Gateway, and Amazon Bedrock only when there is a user query, minimizing resource overprovisioning. As serverless services, Lambda and API Gateway consume only the resources and energy required to support the workload. Additionally, as a managed service, Amazon Bedrock removes the need for you to host dedicated servers for LLMs. As a result, you can avoid the significant energy consumption associated with running such resource-intensive models.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.