.NET on AWS Blog
Serverless solution to summarize text using Amazon Bedrock Converse API in .NET
Introduction
Imagine you want to create intelligent text summarization tools without managing infrastructure. How can you efficiently build AI-powered solutions that can transform lengthy documents into concise summaries?
Amazon Bedrock, a fully managed service from AWS, addresses this challenge through its Converse API. The Converse API provides a standardized interface for interacting with foundation models, so you can write adaptable code that works across the AI models Amazon Bedrock supports. With a consistent approach to model interactions, you can focus on creating innovative applications rather than managing model-specific integrations.
This blog post demonstrates how to build a serverless text summarization solution using the AWS SDK for .NET and Amazon Bedrock Converse API, showcasing how you can use generative AI capabilities with minimal infrastructure overhead.
Solution Overview
This solution implements a .NET serverless backend API for text summarization using Converse API. The architecture uses Amazon API Gateway to expose endpoints and handle inference parameters, AWS Lambda to execute summarization logic, and Amazon Bedrock for generating summaries through customizable model interactions. Security is managed through AWS Identity and Access Management (IAM), enabling secure access between services. The serverless design provides automatic scaling, cost-effectiveness through pay-per-use pricing, and eliminates infrastructure management, offering you the flexibility to switch between different AI models without code modifications.
Architecture
The architecture is shown in Figure 1. The solution has the following flow:
- The client calls the API Gateway endpoint, passing inference parameters such as the input text and text generation configuration parameters.
- API Gateway triggers a Lambda function, which prepares the Converse API request.
- Amazon Bedrock generates the summary for the provided input text using the specified model and returns the response.
- The Lambda function returns the response, including the text summary, to API Gateway.
- The client receives the text summary in the API response.

Figure 1: Solution Architecture
This architecture has the following benefits:
- Serverless components help to scale automatically.
- Built-in access control and security from AWS services.
- Pay only for what you use with the serverless model.
- Easy integration with other applications via API Gateway.
- The Converse API provides a consistent interface for invoking different large language models (LLMs) through Amazon Bedrock with flexibility of switching between models without modifying your code.
Prerequisites
To replicate this solution, you will need the following prerequisites:
- An AWS account to create and manage the necessary AWS resources for this solution.
- The latest version of the AWS Command Line Interface (AWS CLI), configured for your account.
- The AWS Cloud Development Kit (CDK) to define and provision AWS resources.
- The .NET SDK installed.
Provisioning Infrastructure
Step 1: Enable Amazon Bedrock Model Access
Access to models in Amazon Bedrock is not enabled by default. Navigate to the Model Access section in the Amazon Bedrock console with authorized IAM user permissions to enable the selected model. Review the End User License Agreement (EULA) and submit an access request through the console.
We are using the Amazon Titan Text G1 – Express (amazon.titan-text-express-v1) model for demonstration purposes. Verify that you have access to Amazon Titan Text G1 – Express within Amazon Bedrock. If you wish to experiment with alternative models, enable access to each desired model. For instructions on managing model access, refer to Add or remove access to Amazon Bedrock foundation models.
Step 2: Create CDK stack to provision AWS Resources
The following CDK C# code demonstrates the configuration of a .NET 8 Lambda function, and IAM permissions required for Amazon Bedrock model access.
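A minimal sketch of such a stack is shown below. The construct names, handler string, and asset path are illustrative assumptions, not the exact contents of the sample repository; the key pieces are the .NET 8 runtime, the IAM policy granting bedrock:InvokeModel (which the Converse API uses), and a POST /text route on API Gateway.

```csharp
using Amazon.CDK;
using Amazon.CDK.AWS.APIGateway;
using Amazon.CDK.AWS.IAM;
using Amazon.CDK.AWS.Lambda;
using Constructs;

// Hypothetical stack; the repository's stack may differ in names and paths.
public class BedrockTextGenerationStack : Stack
{
    internal BedrockTextGenerationStack(Construct scope, string id, IStackProps props = null)
        : base(scope, id, props)
    {
        // .NET 8 Lambda function that holds the summarization handler.
        var summarizeFn = new Function(this, "SummarizeFunction", new FunctionProps
        {
            Runtime = Runtime.DOTNET_8,
            Handler = "TextSummarization::TextSummarization.Function::FunctionHandler",
            Code = Code.FromAsset("src/TextSummarization/bin/Release/net8.0/publish"),
            Timeout = Duration.Seconds(60),   // model invocations can take several seconds
            MemorySize = 512
        });

        // Allow the function to invoke Amazon Bedrock foundation models.
        summarizeFn.AddToRolePolicy(new PolicyStatement(new PolicyStatementProps
        {
            Effect = Effect.ALLOW,
            Actions = new[] { "bedrock:InvokeModel" },
            Resources = new[] { "arn:aws:bedrock:*::foundation-model/*" }
        }));

        // REST API with a POST /text resource backed by the function.
        var api = new RestApi(this, "BedrockTextGenerationRestAPI");
        var text = api.Root.AddResource("text");
        text.AddMethod("POST", new LambdaIntegration(summarizeFn));
    }
}
```

Scoping the policy to specific model ARNs instead of the wildcard above is a worthwhile hardening step once you settle on a model.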
Step 3: Create Lambda function to process request prompt
Following is the AWS Lambda function code that invokes the Amazon Bedrock Converse API to generate summarized text using the model inference parameters. Using the BedrockClient.ConverseAsync method from the AWS SDK for .NET, it invokes the selected model with the prepared payload. When the model invocation succeeds with HTTP status code 200, you receive the text summary in the response.
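The handler below is a minimal sketch of that flow, assuming an API Gateway proxy integration and the AWSSDK.BedrockRuntime, Amazon.Lambda.Core, Amazon.Lambda.APIGatewayEvents, and Amazon.Lambda.Serialization.SystemTextJson NuGet packages. The namespace, class name, and inference values are illustrative; see the repository for the exact implementation.

```csharp
using System.Text.Json;
using Amazon.BedrockRuntime;
using Amazon.BedrockRuntime.Model;
using Amazon.Lambda.APIGatewayEvents;
using Amazon.Lambda.Core;

[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace TextSummarization;

public class Function
{
    // Reuse one client across invocations to benefit from warm starts.
    private static readonly AmazonBedrockRuntimeClient BedrockClient = new();

    public async Task<APIGatewayProxyResponse> FunctionHandler(
        APIGatewayProxyRequest request, ILambdaContext context)
    {
        // Extract the modelId and prompt from the API Gateway request body.
        var body = JsonSerializer.Deserialize<Dictionary<string, string>>(request.Body);
        var modelId = body["modelId"];
        var prompt = body["prompt"];

        // Build the Converse request: one user message carrying the text to summarize.
        var converseRequest = new ConverseRequest
        {
            ModelId = modelId,
            Messages = new List<Message>
            {
                new Message
                {
                    Role = ConversationRole.User,
                    Content = new List<ContentBlock> { new ContentBlock { Text = prompt } }
                }
            },
            // Illustrative inference parameters; tune these for your use case.
            InferenceConfig = new InferenceConfiguration
            {
                MaxTokens = 512,
                Temperature = 0.5f,
                TopP = 0.9f
            }
        };

        // Invoke the selected model and read the generated summary from the output message.
        var response = await BedrockClient.ConverseAsync(converseRequest);
        var summary = response.Output.Message.Content[0].Text;

        return new APIGatewayProxyResponse
        {
            StatusCode = 200,
            Body = JsonSerializer.Serialize(new { summary }),
            Headers = new Dictionary<string, string> { ["Content-Type"] = "application/json" }
        };
    }
}
```

Because the model is chosen per request via ModelId, switching to another Bedrock model is a payload change only; the Converse request shape stays the same.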
Step 4: Deployment
Deploy the solution by following these steps to set up the required AWS resources and infrastructure.
1. Clone the GitHub repository sample-bedrock-converse-api:
git clone https://github.com/aws-samples/sample-bedrock-converse-api
2. Navigate to the source code folder containing the cdk.json file:
cd sample-bedrock-converse-api
3. If this is your first deployment with the AWS CDK in this account and Region, bootstrap the AWS CDK environment:
cdk bootstrap
4. Synthesize the AWS CloudFormation template:
cdk synth
5. Deploy the resources:
cdk deploy
Step 5: Testing
Once the deployment finishes, you can test the implemented functionality from the AWS console and/or an API client as follows.
Test using the AWS Console
1. Sign in to the AWS console and navigate to API Gateway.
2. Go to APIs.
3. Choose your deployed API (e.g. BedrockTextGenerationRestAPI).
4. Select POST method under /text resource.
5. Select Test tab.
6. Construct a test payload and put it in Request body:
{
  "modelId": "<specify-model-id>",
  "prompt": "<write your detailed text to get the summary>"
}
7. Choose Test to invoke the API method with the test payload.
8. Check the request and response details to verify that it works.
Test using API Client
You can test by invoking the API endpoint from an API client such as Postman. Make sure to set the API endpoint URL and headers, use POST as the method, and include the payload data.
API URL:
https://{api-id}.execute-api.{region}.amazonaws.com/DEV/text
Payload:
{
  "modelId": "amazon.titan-text-express-v1",
  "prompt": "<write your detailed text to get the summary>"
}
A successful response contains the generated text summary.
Following is the prompt and summarized response by using Converse API with Amazon Titan Text G1 – Express model, but you can use any of the models available in Amazon Bedrock. Amazon Titan Text G1 – Express is one of the Amazon Titan family models useful for a wide range of advanced, general language tasks such as open-ended text generation, summarization and conversational chat.
Prompt:
provide summary for below paragraph.
Amazon Titan Text Express and Amazon Titan Text Lite are large language models (LLMs) that help customers improve productivity and efficiency for an extensive range of text-related tasks, and offer price and performance options that are optimized for your needs. You can now access these Amazon Titan Text foundation models in Amazon Bedrock, which helps you easily build and scale generative AI applications with new text generation capabilities.
Amazon Titan Text Express has a context length of up to 8,000 tokens, making it well-suited for a wide range of advanced, general language tasks such as open-ended text generation and conversational chat, as well as support within Retrieval Augmented Generation (RAG). This model is optimized for English, with multilingual support for more than 100 additional languages available in preview. Alternatively, with a context length of up to 4,000 tokens, Amazon Titan Text Lite is the fastest model in the Titan Text family and is ideal for fine-tuning and English-language tasks, including summarization and copywriting, where you may want a smaller, more cost-effective text generation model that is also highly customizable.
Amazon Titan Text Express and Amazon Titan Text Lite foundation models in Amazon Bedrock are now generally available in the US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Tokyo), and Europe (Frankfurt) AWS Regions. To learn more, read the AWS News launch blog, Amazon Titan product page, and documentation. To get started with Titan Text models in Amazon Bedrock, visit the Amazon Bedrock console.
Text summary output:
Amazon Titan Text Express and Amazon Titan Text Lite are large language models (LLMs) that help customers improve productivity and efficiency for an extensive range of text-related tasks. They are now available in Amazon Bedrock, which helps you easily build and scale generative AI applications with new text generation capabilities. Amazon Titan Text Express has a context length of up to 8,000 tokens, while Amazon Titan Text Lite is the fastest model in the Titan Text family and is ideal for fine-tuning and English-language tasks.
Troubleshooting
If you encounter issues when invoking your API, following are some troubleshooting tips:
- For Lambda timeout errors, increase the function timeout in your CDK stack. This gives the function more time to complete the model invocation.
- If you cannot access a model ID, check that access to that model is enabled in your AWS Region. Use the Amazon Bedrock console to enable model access.
- If you get errors about outdated models, make sure you use current Amazon Bedrock model IDs. This avoids retired models.
- Amazon Bedrock model availability varies by AWS Region. Refer to Model support by AWS Region to confirm the models available in your Region before building your solution.
Cleanup
To avoid ongoing charges, remove the resources you created for this project. Since we used AWS CDK to provision the resources, you can delete them with a single command:
cdk destroy
Security & Monitoring
To ensure responsible use of AI, apply Amazon Bedrock Guardrails to your own solutions to mitigate the risk of undesirable generated text. You can monitor Amazon Bedrock using Amazon CloudWatch, which collects raw data and processes it into readable, near real-time metrics. For further reading, visit Monitor Amazon Bedrock and Security in Amazon Bedrock.
To further improve the security of this solution:
- Enable access logs for Amazon S3 buckets and Amazon API Gateway.
- Enable Amazon CloudWatch logging.
- Use a custom IAM role instead of the default AWSLambdaBasicExecutionRole.
- Use Amazon API Gateway authorization.
- Activate request validation on API Gateway endpoints.
- Use AWS WAF on public-facing API Gateway endpoints.
Conclusion
In this post, you learned how to build a serverless solution for text summarization using the Amazon Bedrock Converse API with .NET. The key advantages include flexible model integration through the Amazon Bedrock Converse API, a serverless architecture that scales effortlessly, and the capability to process complex text inputs with minimal computational overhead. The sample application is available on GitHub. If you are a .NET developer working on AWS, cloning this repository and experimenting with Amazon Bedrock will give you hands-on learning opportunities.
We encourage you to learn and experiment with Amazon Bedrock, and other AWS AI services which you can utilize in your .NET code.