This Guidance shows how independent software vendors (ISVs) across the advertising and marketing technology industry can increase customer engagement by integrating their customers’ large language models (LLMs) into the ISV’s generative artificial intelligence (AI) application. Amazon Bedrock offers a single-API approach that helps ISVs securely access customized foundation models (FMs) and base models provided by Amazon and other leading AI companies. This approach allows ISVs to create generative AI applications that deliver up-to-date answers based on a brand’s proprietary knowledge sources.
Architecture Diagram
(Architecture diagram: the model customization and model inference workflow described in the steps below.)
Model Customization
Step 1
The customer moves any labeled examples that are needed for LLM customization to an Amazon Simple Storage Service (Amazon S3) bucket.
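As an illustration, the snippet below uses the AWS SDK for Python (Boto3) to upload a JSON Lines file of labeled examples to an S3 bucket. The bucket and object names are placeholders, and the exact record format depends on the base model being customized.

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key names; replace with your own.
# Amazon Bedrock fine-tuning expects labeled examples as JSON Lines,
# for example: {"prompt": "...", "completion": "..."}
s3.upload_file(
    Filename="train.jsonl",
    Bucket="my-labeled-data-bucket",
    Key="fine-tuning/train.jsonl",
)
```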
Step 2
The customer uses Amazon SageMaker notebooks to write LLM fine-tuning code. The customer then uses the Amazon Bedrock software development kit (SDK) within the notebook to adjust model parameters and improve the model’s performance on specific tasks or in certain domains. Alternatively, the customer can use the Amazon Bedrock console to run and monitor these tuning jobs.
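The sketch below shows what such a fine-tuning request might look like using the Boto3 create_model_customization_job operation. The job name, model name, role ARN, S3 URIs, and hyperparameter values are illustrative; valid hyperparameters and their ranges depend on the chosen base model.

```python
import boto3

bedrock = boto3.client("bedrock")  # Amazon Bedrock control-plane client

# All names and ARNs below are placeholders.
response = bedrock.create_model_customization_job(
    jobName="isv-fine-tuning-job",
    customModelName="brand-knowledge-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuningRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-labeled-data-bucket/fine-tuning/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-labeled-data-bucket/fine-tuning/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
print(response["jobArn"])
```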
Step 3
The fine-tuning request calls the training orchestration component of Amazon Bedrock, which invokes a model training job.
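A minimal sketch for monitoring the resulting training job from a notebook might poll get_model_customization_job, as below; the job ARN is a placeholder for the value returned by the create call.

```python
import time
import boto3

bedrock = boto3.client("bedrock")

# Placeholder ARN; use the jobArn returned by create_model_customization_job.
job_arn = "arn:aws:bedrock:us-east-1:111122223333:model-customization-job/example"

while True:
    job = bedrock.get_model_customization_job(jobIdentifier=job_arn)
    status = job["status"]  # e.g., InProgress, Completed, Failed, Stopped
    print(f"Job status: {status}")
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(60)  # poll once per minute
```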
Step 4
The model training job uses a base model from an S3 bucket managed by AWS and a labeled dataset from the customer’s S3 bucket. To improve the security posture, the customer can control access to the labeled datasets in Amazon S3 through virtual private cloud (VPC) configurations (such as subnets, security groups, or endpoints).
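If the customer wants the training job to reach the labeled data through their VPC, the customization job accepts a vpcConfig argument; the subnet and security group IDs below are placeholders.

```python
# Additional argument to create_model_customization_job that routes the
# training job's access to the S3 data through your VPC.
vpc_config = {
    "subnetIds": ["subnet-0abc1234", "subnet-0def5678"],   # placeholder IDs
    "securityGroupIds": ["sg-0123abcd"],                   # placeholder ID
}
# Pass as: bedrock.create_model_customization_job(..., vpcConfig=vpc_config)
```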
Step 5
A new custom model is deployed in the fine-tuned model bucket, encrypted using AWS Key Management Service (AWS KMS) customer-managed keys. Only the customer can access the customized models, and no customer data is used to further train any Amazon Bedrock models.
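To use a customer managed key rather than the default, the customization job accepts a customModelKmsKeyId argument. The sketch below repeats the Step 2 call with that one addition; all names and ARNs remain placeholders.

```python
import boto3

bedrock = boto3.client("bedrock")

# Same call as in Step 2, plus a customer managed KMS key (placeholder ARN)
# used to encrypt the resulting custom model.
response = bedrock.create_model_customization_job(
    jobName="isv-fine-tuning-job-encrypted",
    customModelName="brand-knowledge-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockFineTuningRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-labeled-data-bucket/fine-tuning/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-labeled-data-bucket/fine-tuning/output/"},
    customModelKmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
)
```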
Model Inference
Step 6
As the ISV, you can invoke the Amazon Bedrock API from your application to run inference on available models. You can then store the inference response in your application’s data store.
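As one possible sketch, the snippet below invokes a model through the bedrock-runtime client. For a fine-tuned model, the modelId is typically the ARN of the Provisioned Throughput created for that custom model; the ARN, prompt, and request body schema are placeholders that vary by base model.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime")

# Placeholder Provisioned Throughput ARN for a fine-tuned model; the
# request/response body format depends on the underlying base model.
response = runtime.invoke_model(
    modelId="arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abcd1234",
    contentType="application/json",
    accept="application/json",
    body=json.dumps({"inputText": "Summarize our brand's return policy."}),
)
result = json.loads(response["body"].read())
print(result)  # Persist this response in your application's data store.
```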
Step 7
Your customer creates a REST API on Amazon API Gateway as the entry point for you to access the fine-tuned LLM inference endpoint. An AWS Lambda function brokers the connection between API Gateway and the Amazon Bedrock inference endpoint. You can then access the API Gateway endpoint through AWS PrivateLink without exposing the traffic to the internet.
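A hypothetical Lambda handler for this brokering role might look like the following; the model ARN and the request and response shapes are assumptions to adapt to your own API contract.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime")

def handler(event, context):
    """Broker an API Gateway request to a Bedrock inference endpoint.
    The model ARN and body schema below are placeholders."""
    payload = json.loads(event.get("body") or "{}")
    response = runtime.invoke_model(
        modelId="arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abcd1234",
        contentType="application/json",
        accept="application/json",
        body=json.dumps({"inputText": payload.get("prompt", "")}),
    )
    result = json.loads(response["body"].read())
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```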
Well-Architected Pillars
The AWS Well-Architected Framework helps you understand the pros and cons of the decisions you make when building systems in the cloud. The six pillars of the Framework allow you to learn architectural best practices for designing and operating reliable, secure, efficient, cost-effective, and sustainable systems. Using the AWS Well-Architected Tool, available at no charge in the AWS Management Console, you can review your workloads against these best practices by answering a set of questions for each pillar.
The architecture diagram above is an example of a Solution created with Well-Architected best practices in mind. To be fully Well-Architected, you should follow as many Well-Architected best practices as possible.
Operational Excellence
CloudWatch aggregates logs and creates observability metrics and dashboards to monitor the number of model invocations, latency, input and output token counts, and errors that might impact the Guidance. By visualizing and analyzing these logs, you can better identify performance bottlenecks and troubleshoot requests. You can also use CloudWatch alarms to identify trends that could become problematic before they impact your application or business. Additionally, CloudTrail captures API calls for Amazon Bedrock, along with other account activities. You can use CloudTrail to enable operational and risk auditing and governance and to facilitate compliance for your AWS account.
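For example, you might create a CloudWatch alarm on the AWS/Bedrock metric namespace, as sketched below; the alarm name, threshold, and model ID dimension are illustrative values.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on server-side invocation errors for one model. The alarm name,
# threshold, and model ID are placeholders to adapt to your workload.
cloudwatch.put_metric_alarm(
    AlarmName="bedrock-invocation-errors",
    Namespace="AWS/Bedrock",
    MetricName="InvocationServerErrors",
    Dimensions=[{"Name": "ModelId", "Value": "amazon.titan-text-express-v1"}],
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```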
Security
When performing model fine-tuning, you can store the encrypted labeled data within Amazon S3, using either an AWS KMS key or the default Amazon S3 managed key. You can then specify an IAM role that allows Amazon Bedrock to access the S3 bucket. Additionally, while at rest in the S3 bucket owned by Amazon Bedrock, the custom model artifact is also encrypted using an AWS KMS key. You can use IAM access policies to set up least-privilege access control for different API calls, reducing the surface area of security risk. Additionally, to achieve private connectivity between VPCs and avoid exposing traffic to the internet, you can use API Gateway over AWS PrivateLink to share the Amazon Bedrock endpoint with ISVs.
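As a sketch of least-privilege access, the policy below allows bedrock:InvokeModel against a single model only; the account ID, Region, and model ARN are placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Least-privilege policy scoped to inference on one model (placeholder ARN).
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": "arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abcd1234",
        }
    ],
}
iam.create_policy(
    PolicyName="BedrockInvokeSingleModel",
    PolicyDocument=json.dumps(policy_document),
)
```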
Reliability
Amazon Bedrock, Amazon S3, Lambda, and API Gateway are serverless services that automatically scale horizontally based on workload demand and span multiple Availability Zones (AZs), helping them maintain availability in the case of a service interruption in a single AZ. Additionally, Amazon Bedrock supports reliability by storing training and validation data in Amazon S3 and by invoking actions using Lambda. Amazon S3 lets you set up lifecycle configurations, enable versioning, use S3 Object Lock, and configure cross-Region replication. Lambda supports features like versioning, reserved concurrency, retries, and dead-letter queues, and API Gateway lets you configure custom throttling for your API.
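For instance, you might enable S3 Versioning on the training-data bucket and reserve Lambda concurrency for the inference-brokering function, as in the sketch below; the bucket and function names and the concurrency value are assumptions.

```python
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Enable versioning on the training-data bucket (placeholder bucket name).
s3.put_bucket_versioning(
    Bucket="my-labeled-data-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# Reserve concurrency for the inference-brokering function so other
# workloads cannot exhaust its capacity (placeholder function name).
lambda_client.put_function_concurrency(
    FunctionName="bedrock-inference-broker",
    ReservedConcurrentExecutions=50,
)
```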
Performance Efficiency
This Guidance uses serverless and managed services to achieve high performance efficiency. For example, Amazon S3 provides consistent low latency and high-throughput performance, and it automatically scales to support high request rates. API Gateway can handle large volumes of traffic and can cache Amazon Bedrock endpoint responses, reducing the number of calls made to your endpoint and improving the latency of requests. Lambda manages scaling automatically to optimize individual functions without manual configuration, reducing latency, increasing throughput, and helping maintain consistent performance.
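As one example, response caching can be enabled on an existing API Gateway REST API stage, as sketched below; the API ID, stage name, and cache size are illustrative.

```python
import boto3

apigateway = boto3.client("apigateway")

# Enable a response cache on an existing REST API stage. The API ID,
# stage name, and cache size are placeholder values.
apigateway.update_stage(
    restApiId="a1b2c3d4e5",
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},
    ],
)
```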
Cost Optimization
Lambda and Amazon S3 can help reduce costs compared with managing infrastructure yourself. Amazon S3 lets you store data across a range of storage classes purpose-built for specific use cases and access patterns, helping you optimize costs based on your business requirements. With Lambda, you are charged only for the compute time you consume. Additionally, Amazon Bedrock provides diverse model offerings, so you can select cost-effective LLMs based on your specific use case and budget. You can use the metrics tracked in CloudWatch to analyze cost drivers and identify opportunities for improvement, helping you right-size resources for your AI workloads and avoid overprovisioning.
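For example, a lifecycle configuration like the following sketch could transition older fine-tuning artifacts to a lower-cost storage class; the bucket name, prefix, and timings are assumptions.

```python
import boto3

s3 = boto3.client("s3")

# Transition older training artifacts to a lower-cost storage class and
# expire them after a year. Bucket, prefix, and timings are examples.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-labeled-data-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-fine-tuning-artifacts",
                "Status": "Enabled",
                "Filter": {"Prefix": "fine-tuning/output/"},
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                "Expiration": {"Days": 365},
            }
        ]
    },
)
```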
Sustainability
Amazon Bedrock is a fully managed AI service and reduces the need for you to manage your own infrastructure. It works with serverless services like Lambda, which scales up and down automatically based on workload requirements, so servers don’t need to run continuously. Overall, the services used in this Guidance improve efficiency and help you reduce your carbon footprint through optimized AI deployments.
Implementation Resources
A detailed implementation guide is provided for you to experiment with and use within your AWS account. It walks through each stage of the Guidance, including deployment, usage, and cleanup.
The sample code is a starting point. It is industry validated, prescriptive but not definitive, and a peek under the hood to help you begin.
Disclaimer
The sample code; software libraries; command line tools; proofs of concept; templates; or other related technology (including any of the foregoing that are provided by our personnel) is provided to you as AWS Content under the AWS Customer Agreement, or the relevant written agreement between you and AWS (whichever applies). You should not use this AWS Content in your production accounts, or on production or other critical data. You are responsible for testing, securing, and optimizing the AWS Content, such as sample code, as appropriate for production grade use based on your specific quality control practices and standards. Deploying AWS Content may incur AWS charges for creating or using AWS chargeable resources, such as running Amazon EC2 instances or using Amazon S3 storage.
References to third-party services or organizations in this Guidance do not imply an endorsement, sponsorship, or affiliation between Amazon or AWS and the third party. Guidance from AWS is a technical starting point, and you can customize your integration with third-party services when you deploy the architecture.