Guidance for Integrating a Custom Foundation Model with Advertising and Marketing ISVs on AWS

Overview

This Guidance shows how independent software vendors (ISVs) across the advertising and marketing technology industry can increase their customer engagement by integrating their customers’ large language model (LLM) within the ISV’s generative artificial intelligence (AI) application. Amazon Bedrock offers a single API approach to help ISVs securely access customized foundation models (FMs) and base models provided by Amazon and other leading AI companies. This approach allows ISVs to create generative AI applications that deliver up-to-date answers based on a brand’s proprietary knowledge sources.

How it works

This architecture diagram shows how to securely consume inference results in your ISV application from your customers’ Amazon Bedrock FMs, centralizing generative AI efforts and enriching your application.

Well-Architected Pillars

The architecture diagram above is an example of a solution built with Well-Architected best practices in mind. To be fully Well-Architected, follow as many of these best practices as possible.

CloudWatch aggregates logs and creates observability metrics and dashboards to monitor the number of model invocations, latency, input and output token counts, and errors that might impact this Guidance. By visualizing and analyzing these logs, you can identify performance bottlenecks and troubleshoot requests. You can also use CloudWatch alarms to spot problematic trends before they impact your application or business. Additionally, CloudTrail captures API calls for Amazon Bedrock, along with other account activity. You can use CloudTrail to enable operational and risk auditing and governance and to facilitate compliance for your AWS account.
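The monitoring described above can be sketched as a CloudWatch metrics query. This is a minimal example that only builds the request parameters; with credentials configured you would pass them to `boto3.client("cloudwatch").get_metric_statistics(**query)`. The metric names shown (`Invocations`, `InvocationLatency`, `InputTokenCount`, `OutputTokenCount`) match Amazon Bedrock's published runtime metrics, but verify them against the current documentation; the model ID is a placeholder.

```python
from datetime import datetime, timedelta, timezone

# Amazon Bedrock publishes runtime metrics to the "AWS/Bedrock" namespace.
BEDROCK_NAMESPACE = "AWS/Bedrock"


def build_metric_query(metric_name: str, model_id: str, period_s: int = 300) -> dict:
    """Build a GetMetricStatistics request for one Bedrock runtime metric
    over the last hour, aggregated into period_s-second buckets."""
    now = datetime.now(timezone.utc)
    return {
        "Namespace": BEDROCK_NAMESPACE,
        "MetricName": metric_name,
        "Dimensions": [{"Name": "ModelId", "Value": model_id}],
        "StartTime": now - timedelta(hours=1),
        "EndTime": now,
        "Period": period_s,
        "Statistics": ["Sum", "Average"],
    }


# Example: query average invocation latency for a (placeholder) model ID.
query = build_metric_query("InvocationLatency", "anthropic.claude-v2")
```

The same helper can feed a CloudWatch alarm definition, since alarms reference the identical namespace, metric name, and dimensions.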

Read the Operational Excellence whitepaper 

When performing model fine-tuning, you can store the encrypted labeled data in Amazon S3, using either an AWS KMS key or the default Amazon S3 managed key. You can then specify an IAM role that grants Amazon Bedrock access to the S3 bucket. While at rest in the S3 bucket owned by Amazon Bedrock, the custom model artifact is also encrypted with an AWS KMS key. You can use IAM access policies to set up least-privilege access control for different API calls, reducing the security risk surface area. Additionally, to achieve private connectivity between VPCs without exposing traffic to the internet, you can share the Amazon Bedrock endpoint with other ISVs through API Gateway over AWS PrivateLink.
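A least-privilege policy like the one described above might look like the following sketch: it allows invoking only one specific Bedrock model and reading labeled training data from one S3 bucket. The account details, bucket name, and model ARN are hypothetical placeholders; check the Bedrock and S3 action names against the IAM documentation for your use case.

```python
import json

# Hypothetical resource ARNs -- replace with your own.
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2"
TRAINING_BUCKET_ARN = "arn:aws:s3:::example-training-data"

# Least-privilege sketch: one model, one bucket, read-only data access.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeSpecificModel",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": MODEL_ARN,
        },
        {
            "Sid": "ReadTrainingData",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [TRAINING_BUCKET_ARN, f"{TRAINING_BUCKET_ARN}/*"],
        },
    ],
}

# Serialized form, as you would attach it to a role or pass to IAM APIs.
policy_json = json.dumps(policy, indent=2)
```

Scoping `Resource` to a single model ARN (rather than `*`) is what keeps the invocation permission narrow; the same pattern applies to the role Amazon Bedrock assumes to read your fine-tuning bucket.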

Read the Security whitepaper 

Amazon Bedrock, Amazon S3, Lambda, and API Gateway are serverless services that automatically scale horizontally based on workload demand and span multiple Availability Zones (AZs), helping them maintain availability during a service interruption in a single AZ. Additionally, Amazon Bedrock supports reliability by storing training and validation data in Amazon S3 and by invoking actions using Lambda. Amazon S3 lets you set up lifecycle configurations, enable versioning, use S3 Object Lock, and configure cross-Region replication. Lambda supports features such as versioning, reserved concurrency, retries, and dead-letter queues, and API Gateway lets you configure custom throttling for your API.
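The S3 versioning and lifecycle features mentioned above can be combined; the sketch below builds a lifecycle configuration that archives and eventually expires noncurrent object versions of training data. The prefix, day counts, and storage class are illustrative assumptions, not recommendations; with credentials you would apply this via `boto3.client("s3").put_bucket_lifecycle_configuration(...)`.

```python
# Lifecycle sketch for a versioned training-data bucket: after an object is
# overwritten, its noncurrent version moves to Glacier at 30 days and is
# deleted at 365 days. Prefix and day values are assumed examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-noncurrent-versions",
            "Status": "Enabled",
            "Filter": {"Prefix": "training-data/"},
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"}
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
        }
    ]
}

# Applied (with credentials configured) roughly as:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="example-bucket", LifecycleConfiguration=lifecycle_config)
```

Because the rule targets only noncurrent versions, the latest copy of each training file stays in its original storage class and remains immediately readable by fine-tuning jobs.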

Read the Reliability whitepaper 

This Guidance uses serverless and managed services to achieve high performance efficiency. For example, Amazon S3 provides consistent low latency and high-throughput performance, and it automatically scales to support high request rates. API Gateway can handle large volumes of traffic and can cache Amazon Bedrock endpoint responses, reducing the number of calls made to your endpoint and improving the latency of requests. Lambda manages scaling automatically to optimize individual functions without manual configuration, reducing latency, increasing throughput, and helping maintain consistent performance.
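The API Gateway caching and throttling described above are configured as stage settings. The sketch below builds an assumed settings structure mirroring the REST API stage properties (`cacheClusterEnabled`, per-method `cacheTtlInSeconds`, throttling limits); the exact property names and the `update_stage` patch-operation syntax should be confirmed against the API Gateway documentation, and all values here are illustrative.

```python
# Stage-settings sketch: enable a response cache and custom throttling for
# all methods on a stage fronting the Bedrock-backed endpoint. Values are
# assumed examples, not tuned recommendations.
stage_settings = {
    "cacheClusterEnabled": True,
    "cacheClusterSize": "0.5",  # cache size in GB (smallest tier)
    "methodSettings": {
        "*/*": {  # wildcard: apply to every resource and method
            "cachingEnabled": True,
            "cacheTtlInSeconds": 300,      # serve cached Bedrock responses for 5 min
            "throttlingRateLimit": 100.0,  # steady-state requests per second
            "throttlingBurstLimit": 200,   # short burst allowance
        }
    },
}
```

Caching identical prompts for even a few minutes can cut repeated calls to the Bedrock endpoint, which is where the latency and cost of this architecture concentrate.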

Read the Performance Efficiency whitepaper 

Lambda and Amazon S3 can help reduce costs compared to the costs of managing infrastructure yourself. Amazon S3 lets you store data across a range of storage classes purpose-built for specific use cases and access patterns, helping you optimize costs based on your business requirements. With Lambda, you are charged only for the compute time you consume. Additionally, Amazon Bedrock provides diverse model offerings, so you can select cost-effective LLMs based on your specific use case and budget. You can use the metrics tracked in CloudWatch to analyze cost drivers and identify opportunities for improvement, enabling you to right-size AI needs and avoid overprovisioning resources.
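The pay-for-what-you-consume model above can be made concrete with a back-of-envelope estimate. Lambda bills on GB-seconds of compute plus a per-request fee; the unit prices below are illustrative assumptions, not current AWS pricing, so look up the real rates for your Region before budgeting.

```python
# Assumed illustrative prices -- NOT current AWS pricing.
PRICE_PER_GB_SECOND = 0.0000166667
PRICE_PER_MILLION_REQUESTS = 0.20


def monthly_lambda_cost(invocations: int, avg_duration_ms: float, memory_mb: int) -> float:
    """Rough monthly Lambda cost: GB-seconds of compute plus request fees.

    Ignores the free tier and any rounding of billed duration.
    """
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    compute = gb_seconds * PRICE_PER_GB_SECOND
    requests = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return round(compute + requests, 2)


# Example: 1M invocations/month at 200 ms average with 512 MB memory.
cost = monthly_lambda_cost(1_000_000, 200, 512)
```

Running the same estimate at different memory sizes is a quick way to right-size functions, since memory directly scales the GB-second charge.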

Read the Cost Optimization whitepaper 

Amazon Bedrock is a fully managed AI service and reduces the need for you to manage your own infrastructure. It works with serverless services like Lambda, which scales up and down automatically based on workload requirements, so servers don’t need to run continuously. Overall, the services used in this Guidance improve efficiency and help you reduce your carbon footprint through optimized AI deployments.

Read the Sustainability whitepaper 

Disclaimer

The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.