Guidance for Integrating a Custom Foundation Model with Advertising and Marketing ISVs on AWS
Overview
How it works
This architecture diagram shows how to securely import inferences from your customers’ Amazon Bedrock foundation models (FMs) into your ISV application, centralizing generative AI efforts and enriching your application.
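The import flow above centers on invoking a customer's Bedrock model and returning the inference to the ISV application. The sketch below shows the shape of such a call with the AWS SDK for Python (boto3); the model ID, prompt, and response parsing assume an Anthropic Claude model on Bedrock and are illustrative, not part of this Guidance.

```python
import json


def build_invoke_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a request body in the Anthropic Messages schema used by
    Claude models on Amazon Bedrock. Values here are illustrative."""
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    }


def invoke_model(prompt: str) -> str:
    """Invoke a Bedrock model and return its text completion.

    Requires AWS credentials with bedrock:InvokeModel permission.
    The model ID below is an example; substitute your customer's
    custom or base model.
    """
    import boto3  # AWS SDK for Python

    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
        body=json.dumps(build_invoke_request(prompt)),
    )
    payload = json.loads(response["body"].read())
    return payload["content"][0]["text"]
```

A Lambda function fronted by API Gateway could call `invoke_model` per request, which is the pattern the serverless pillars below assume.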
Well-Architected Pillars
The architecture diagram above is an example of a solution created with Well-Architected best practices in mind. To be fully Well-Architected, follow as many Well-Architected best practices as possible.
Operational Excellence
CloudWatch aggregates logs and creates observability metrics and dashboards so you can monitor the number of model invocations, latency, input and output token counts, and errors that might impact the Guidance. By visualizing and analyzing these logs, you can better identify performance bottlenecks and troubleshoot requests. You can also use CloudWatch alarms to identify trends that could become problematic before they impact your application or business. In addition, CloudTrail captures API calls for Amazon Bedrock, along with other account activities. You can use CloudTrail to enable operational and risk auditing and governance and to facilitate compliance for your AWS account.
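As a concrete sketch of the alarm idea, the function below assembles parameters for a CloudWatch alarm on Bedrock server-side invocation errors. The `AWS/Bedrock` namespace and `InvocationServerErrors` metric are published by Bedrock; the threshold, period, and SNS topic are illustrative assumptions you would tune for your workload.

```python
def build_error_alarm(model_id: str, topic_arn: str) -> dict:
    """Parameters for a CloudWatch alarm on Bedrock server-side invocation
    errors for one model. Threshold and period values are illustrative."""
    return {
        "AlarmName": f"bedrock-server-errors-{model_id}",
        "Namespace": "AWS/Bedrock",
        "MetricName": "InvocationServerErrors",
        "Dimensions": [{"Name": "ModelId", "Value": model_id}],
        "Statistic": "Sum",
        "Period": 300,             # evaluate in 5-minute windows
        "EvaluationPeriods": 3,    # alarm after 3 consecutive breaches
        "Threshold": 5,
        "ComparisonOperator": "GreaterThanThreshold",
        "AlarmActions": [topic_arn],  # e.g. an SNS topic for notification
    }


def create_alarm(model_id: str, topic_arn: str) -> None:
    """Create the alarm; requires credentials with cloudwatch:PutMetricAlarm."""
    import boto3

    boto3.client("cloudwatch").put_metric_alarm(
        **build_error_alarm(model_id, topic_arn)
    )
```

You might call `create_alarm("anthropic.claude-3-haiku-20240307-v1:0", "arn:aws:sns:us-east-1:111122223333:ops")` with your own model ID and topic ARN.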
Security
When performing model fine-tuning, you can store the encrypted labeled data in Amazon S3, using either an AWS KMS key or the default Amazon S3 managed key. You can then specify an IAM role that grants Amazon Bedrock access to the S3 bucket. While at rest in the S3 bucket owned by Amazon Bedrock, the custom model artifact is also encrypted with an AWS KMS key. You can use IAM access policies to set up least-privilege access control for different API calls, reducing your security risk surface. Additionally, to achieve private connectivity between VPCs and avoid exposing traffic to the internet, you can use API Gateway over AWS PrivateLink to share the Amazon Bedrock endpoint with other ISVs.
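To make the least-privilege idea concrete, the sketch below builds an IAM policy document for the role Amazon Bedrock assumes to read fine-tuning data from S3. The bucket name and `training-data/` prefix are placeholders; scope the resource ARNs to your own bucket and prefix.

```python
import json


def build_bedrock_s3_policy(bucket: str) -> dict:
    """Least-privilege policy letting Amazon Bedrock read fine-tuning data.

    Grants only list and read access, and only on the given bucket and
    an assumed training-data/ prefix (both placeholders).
    """
    bucket_arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "ReadTrainingData",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                "Resource": [bucket_arn, f"{bucket_arn}/training-data/*"],
            }
        ],
    }


# Render the policy JSON for attachment to the Bedrock service role.
policy_json = json.dumps(build_bedrock_s3_policy("example-training-bucket"), indent=2)
```

Keeping the statement to two read-only actions on a single bucket reflects the surface-area reduction the paragraph describes.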
Reliability
Amazon Bedrock, Amazon S3, Lambda, and API Gateway are serverless services that automatically scale horizontally with workload demand and span multiple Availability Zones (AZs), helping them maintain availability during a service interruption in a single AZ. Additionally, Amazon Bedrock supports reliability by storing training and validation data in Amazon S3 and by invoking actions using Lambda. Amazon S3 lets you set up lifecycle configurations, enable versioning, configure Object Lock, and replicate data across Regions. Lambda supports features like versioning, reserved concurrency, retries, and dead-letter queues, and API Gateway lets you configure custom throttling for your APIs.
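As a sketch of the Lambda retry and dead-letter pattern, the function below assembles asynchronous-invocation settings with bounded retries and an on-failure destination, then applies them via `put_function_event_invoke_config`. The function name, queue ARN, retry count, and event age are illustrative.

```python
def build_invoke_config(dlq_arn: str, max_retries: int = 2) -> dict:
    """Async invoke settings for a Lambda function: bounded retries and an
    on-failure destination (e.g. an SQS queue). Values are illustrative."""
    return {
        "MaximumRetryAttempts": max_retries,   # 0-2 for async invocations
        "MaximumEventAgeInSeconds": 3600,      # discard events older than 1 hour
        "DestinationConfig": {"OnFailure": {"Destination": dlq_arn}},
    }


def apply_invoke_config(function_name: str, dlq_arn: str) -> None:
    """Apply the settings; requires lambda:PutFunctionEventInvokeConfig."""
    import boto3

    boto3.client("lambda").put_function_event_invoke_config(
        FunctionName=function_name,  # placeholder function name
        **build_invoke_config(dlq_arn),
    )
```

With this in place, failed inference events land in the destination queue for inspection and replay instead of being silently dropped.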
Performance Efficiency
This Guidance uses serverless and managed services to achieve high performance efficiency. For example, Amazon S3 provides consistent low latency and high-throughput performance, and it automatically scales to support high request rates. API Gateway can handle large volumes of traffic and can cache Amazon Bedrock endpoint responses, reducing the number of calls made to your endpoint and improving the latency of requests. Lambda manages scaling automatically to optimize individual functions without manual configuration, reducing latency, increasing throughput, and helping maintain consistent performance.
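The response-caching point can be sketched with API Gateway's `update_stage` patch operations, which enable a stage-level cache and set a TTL. The REST API ID, stage name, cache size, and TTL below are illustrative assumptions; the wildcard `/*/*/caching/ttlInSeconds` path applies the TTL to all methods on the stage.

```python
def build_cache_patch(ttl_seconds: int = 300) -> list:
    """Patch operations enabling a 0.5 GB stage cache with a given TTL.
    Cache size and TTL are illustrative defaults."""
    return [
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
        {
            "op": "replace",
            "path": "/*/*/caching/ttlInSeconds",  # all resources and methods
            "value": str(ttl_seconds),
        },
    ]


def enable_stage_cache(rest_api_id: str, stage_name: str) -> None:
    """Apply the patch; requires apigateway:PATCH permission on the stage."""
    import boto3

    boto3.client("apigateway").update_stage(
        restApiId=rest_api_id,   # placeholder REST API ID
        stageName=stage_name,    # placeholder stage name, e.g. "prod"
        patchOperations=build_cache_patch(),
    )
```

Caching repeated prompts at the gateway trims both latency and the number of Bedrock invocations, at the price of potentially stale responses within the TTL.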
Cost Optimization
Lambda and Amazon S3 can help reduce costs compared with managing infrastructure yourself. Amazon S3 lets you store data across a range of storage classes purpose-built for specific use cases and access patterns, helping you optimize costs based on your business requirements. With Lambda, you are charged only for the compute time you consume. Additionally, Amazon Bedrock provides diverse model offerings, so you can select cost-effective LLMs based on your specific use case and budget. You can use the metrics tracked in CloudWatch to analyze cost drivers and identify opportunities for improvement, enabling you to right-size your AI workloads and avoid overprovisioning resources.
Sustainability
Amazon Bedrock is a fully managed AI service and reduces the need for you to manage your own infrastructure. It works with serverless services like Lambda, which scales up and down automatically based on workload requirements, so servers don’t need to run continuously. Overall, the services used in this Guidance improve efficiency and help you reduce your carbon footprint through optimized AI deployments.
Disclaimer