.NET on AWS Blog

Integrating Amazon Bedrock in your .NET applications

by Jagadeesh Chitikesi and Ashish Bhatia

Recent developments in generative Artificial Intelligence (generative AI) have made it easier and more affordable to create applications and experiences that improve the way we live and work. Generative AI is helping businesses improve customer experiences through assistants and conversational analytics, and boost employee productivity through automated generation of content, reports, and code.

In this blog post, we will introduce you to Amazon Bedrock and show you how you can build generative AI-infused .NET applications by using foundation models (FMs) supported by Amazon Bedrock. This post also serves as an introduction to a blog series that digs deeper into different use cases and explores concepts like the Retrieval-Augmented Generation (RAG) pattern, knowledge bases, and agents in Amazon Bedrock to build custom generative AI applications for your business. You can use this post as a reference for building .NET enterprise applications in the generative AI domain using AWS services.

Using Amazon Bedrock from .NET code

There are two ways you can consume Amazon Bedrock functionality from your .NET code: the Amazon Bedrock API, or the AWS SDK for .NET. Wherever possible, we highly recommend using the AWS SDKs when interacting with AWS services from your code. Apart from providing a simple, consistent, and idiomatic way for .NET developers to consume AWS services, the SDKs abstract away concerns like request authentication, retries, and timeouts.

The following NuGet packages capture all the core functionality you need to start building .NET Gen AI applications using Amazon Bedrock.

  • AWSSDK.Bedrock – Contains control plane APIs for managing, training, and deploying models, such as listing all FMs, getting details about a particular FM, or creating model customization jobs.
  • AWSSDK.BedrockRuntime – Contains data plane APIs for making inference requests to models hosted in Amazon Bedrock.
  • AWSSDK.BedrockAgent – Contains control plane APIs for creating and managing Agents and Knowledge bases for Amazon Bedrock.
  • AWSSDK.BedrockAgentRuntime – Contains data plane APIs for invoking agents and querying knowledge bases.

The Amazon Bedrock API Reference has more information about the actions that are supported in each of these APIs.

Let us walk you through the steps required to build a .NET application using Amazon Bedrock. Before we get started, we recommend reviewing the Amazon Bedrock pricing page to make sure you are comfortable with the cost associated with this simple learning exercise. Some sample cost analysis for generative AI use cases is documented in the Generative AI Application Builder on AWS Implementation Guide, and there are also pricing examples on the Amazon Bedrock pricing page.

Prerequisites

For the purpose of this blog post, we assume you already have the following setup:

Step 1: Configure model access in Amazon Bedrock

In this walkthrough, we will use the Anthropic Claude model in Amazon Bedrock. This model is a leading Large Language Model (LLM) from Anthropic that enables a wide range of tasks, from sophisticated dialogue and creative content generation to detailed instruction following. First, let's ensure that this model is enabled for you to use. On the Amazon Bedrock console, as shown in Figure 1, choose Model access in the left navigation pane.

Figure 1: Amazon Bedrock Console

If the model you want to access is enabled, the Access status column shows Access granted for the model. If you see the Access status as Available to request, choose Manage model access, select Anthropic Claude, and then choose Save. Access is usually granted instantly. Verify that the value of the Access status column is Access granted, as shown in Figure 2.

Figure 2: Amazon Bedrock model access

Step 2: Setting up AWS Identity and Access Management (IAM) permissions

To be able to use Amazon Bedrock, you need a user or a role with the appropriate IAM permissions. To grant Amazon Bedrock access to your user identity:

  1. Open the IAM console.
  2. Select Policies, then search for the AmazonBedrockFullAccess policy and attach it to the user, as shown in Figure 3.

Although we are using the AWS managed AmazonBedrockFullAccess policy in our solution, it is important that you implement security best practices in your projects by applying least-privilege permissions. The Amazon Bedrock documentation provides IAM policy examples for several use cases.

Figure 3: Setting up IAM permissions
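As an illustration of least privilege, a policy along these lines grants only the permission to invoke models. The action names are real Amazon Bedrock IAM actions, but treat the scope as a sketch to adapt to your application, not the policy used in this walkthrough:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```

Note that foundation model ARNs have no account ID component, which is why the account field in the Resource ARN is empty.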

Step 3: Implement the solution

To start using Amazon Bedrock in your .NET applications, install the related NuGet packages from the AWS SDK for .NET. This can be done easily via the NuGet Package Manager in Visual Studio. First, create a console or ASP.NET application using Visual Studio. Then, right-click the project in Solution Explorer and select “Manage NuGet Packages”. Search for “AWSSDK.Bedrock”.

You will get two packages as a result of this search:

AWSSDK.Bedrock
AWSSDK.BedrockRuntime

Alternatively, you can execute the following commands using the .NET Command Line Interface (CLI) to add these packages to your project.

dotnet add package AWSSDK.Bedrock
dotnet add package AWSSDK.BedrockRuntime

The first package, AWSSDK.Bedrock, offers a client object to call Amazon Bedrock management API actions like ListFoundationModels. You can find the full list of supported API actions in the AWS SDK for .NET API reference documentation. The second package, AWSSDK.BedrockRuntime, offers a client object to call Amazon Bedrock model invocation API actions.

The following code creates an instance of the AmazonBedrockClient class from the AWSSDK.Bedrock NuGet package and calls the ListFoundationModelsAsync method to list all the available Amazon Bedrock foundation models. The credentials are read from a profile named Bedrock in the shared AWS credentials file. For API details, see ListFoundationModels in the AWS SDK for .NET API Reference.

// Load credentials from the "Bedrock" profile in the shared AWS credentials file
var chain = new CredentialProfileStoreChain();
if (chain.TryGetAWSCredentials("Bedrock", out AWSCredentials awsCredentials))
{
    AmazonBedrockClient client = new AmazonBedrockClient(awsCredentials);
    // List the available foundation models, ordered by provider name
    _foundationModels = (await client.ListFoundationModelsAsync(new ListFoundationModelsRequest()))
        .ModelSummaries.OrderBy(x => x.ProviderName);
}

Amazon Bedrock API Request Structure

With the NuGet packages installed, initialize an AmazonBedrockRuntimeClient object. This object provides methods to invoke Amazon Bedrock models to run inference and generate responses.

Next, instantiate an InvokeModelRequest object, set its properties, including the model ID and the prompt, and pass it into the AmazonBedrockRuntimeClient.InvokeModelAsync method call to run model inference. The structure of prompts plays a big role in guiding the FMs toward providing the right responses. For information about writing prompts, see Prompt engineering guidelines.

// Instantiate the request object
InvokeModelRequest request = new InvokeModelRequest()
{
    // Indicate that we are sending the inference parameters as a JSON object.
    // For an image model such as Stable Diffusion, this could be 'image/png'
    ContentType = "application/json",

    // Indicate that we expect the result as a JSON object.
    // Again, for an image model, this could be 'image/png'
    Accept = "application/json",

    // Set ModelId to the foundation model you want to invoke. You can find the list of
    // model IDs at https://docs.aws.amazon.com/bedrock/latest/userguide/model-ids-arns.html
    // Note: Claude 3 models use the Anthropic Messages API request format instead.
    ModelId = "anthropic.claude-v2",

    // Serialize to a MemoryStream the JSON object containing the inference parameters
    // expected by the model. Each foundation model expects a different set of inference
    // parameters in different formats (application/json for most of them). It is up to
    // you to provide the parameters this model expects, in the appropriate format.
    Body = new MemoryStream(
        Encoding.UTF8.GetBytes(
            JsonSerializer.Serialize(new
            {
                prompt = "\n\nHuman: Explain how async/await work in .NET and provide a code example\n\nAssistant:",
                max_tokens_to_sample = 2000
            })
        )
    )
};

Invoke Anthropic Claude on Amazon Bedrock

The following example code invokes the Anthropic Claude model with the InvokeModel API to generate a text response for the prompt “Explain how async/await work in .NET and provide a code example”.

Each foundation model supported in Amazon Bedrock has different prompt format requirements and different formats for the responses the model returns. See Amazon Bedrock Runtime examples using AWS SDK for .NET for examples of prompts for FMs supported by Amazon Bedrock.

The SDK also supports a streaming API, InvokeModelWithResponseStream, that returns data in chunks. This allows you to access responses incrementally without waiting for the entire result, which can provide a better user experience: LLM responses (which at times can be long) are displayed as they are generated, rather than relying on the synchronous InvokeModel API request.
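To sketch how streamed chunks can be handled, the helper below extracts the partial completion text from a single chunk payload. It assumes the legacy Claude text-completions chunk shape, where each chunk is a JSON object with a completion field; other models stream different chunk formats. In a real application, you would pass each payload received from InvokeModelWithResponseStreamAsync to this helper as the chunks arrive.

```csharp
using System;
using System.Text;
using System.Text.Json;

static class StreamingChunks
{
    // Extract the partial completion text from one streamed chunk payload.
    // Assumes a chunk shape like {"completion":" partial text", ...};
    // returns an empty string when the field is absent (e.g. a final
    // metadata-only chunk).
    public static string ExtractCompletion(byte[] chunkBytes)
    {
        using JsonDocument doc = JsonDocument.Parse(chunkBytes);
        return doc.RootElement.TryGetProperty("completion", out JsonElement completion)
            ? completion.GetString() ?? string.Empty
            : string.Empty;
    }
}
```

For example, a synthetic chunk `{"completion":" lets you await"}` yields the string " lets you await", which you would append to the text already displayed to the user.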

Calling Amazon Bedrock when the user enters a question and submits it for a response

// Call the InvokeModelAsync method. It calls the InvokeModel action from the Amazon Bedrock API
InvokeModelResponse response = await client.InvokeModelAsync(request);

// Check the HttpStatusCode to ensure successful completion of the request
if (response.HttpStatusCode == System.Net.HttpStatusCode.OK)
{
    // Deserialize the JSON object returned into a plain old C# object (POCO).
    // Here, we use the internal ClaudeBodyResponse class
    ClaudeBodyResponse? body = await JsonSerializer.DeserializeAsync<ClaudeBodyResponse>(
        response.Body,
        new JsonSerializerOptions()
        {
            PropertyNameCaseInsensitive = true
        });

    // Write the completion string to the console
    Console.WriteLine(body?.Completion);
}
else
{
    Console.WriteLine("Something wrong happened");
}

In this example, we write the response to the console window.
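The ClaudeBodyResponse type referenced in the snippet is a simple POCO. A minimal sketch, assuming the legacy Claude text-completions response body (a JSON object with a completion field), could look like this:

```csharp
using System;
using System.Text.Json;

// Minimal POCO for the legacy Claude text-completions response body,
// e.g. {"completion":" ...generated text...","stop_reason":"stop_sequence"}.
// With PropertyNameCaseInsensitive = true, the JSON "completion" property
// maps to the Completion property below.
internal class ClaudeBodyResponse
{
    public string? Completion { get; set; }
}
```

Deserializing `{"completion":"Hello"}` with PropertyNameCaseInsensitive set to true populates Completion with "Hello".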

Introducing Sample App

All the code from this blog post and much more can be found in the dotnet-genai-samples GitHub repository. We will use this sample .NET application throughout this blog series. The application shows you how to run inference on different foundation models through playgrounds. These playgrounds are modeled to be somewhat similar to the Amazon Bedrock playgrounds you’ll find on the AWS management console but are built using .NET code. Follow the steps in the README file for application setup. The application provides the following functionality:

  • Lists and displays all the foundation models you have access to, as shown in Figure 4, along with their characteristics.
  • Text playground – Lets you experiment with text models supported on Amazon Bedrock and exercise your prompt engineering skills.
Figure 4: List of Foundation Models

Pricing

Before you start building production-ready applications on Amazon Bedrock, we recommend going through the service's pricing page to understand how pricing for Amazon Bedrock works. You'll observe that Amazon Bedrock provides flexible pricing options. If you plan to use Amazon Bedrock for inference, you have two pricing plans to choose from:

  1. On-Demand and Batch – pricing is based on the input and output token counts, the AWS Region, and the model being used.
  2. Provisioned Throughput – you are charged by the number of model units you reserve to meet your application's performance requirements.

For Model customization, Model evaluation, and Guardrails for Amazon Bedrock pricing, refer to the pricing page.
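To make the on-demand token math concrete, here is a small sketch that estimates the cost of a single request from token counts. The per-1,000-token prices are hypothetical placeholders, not actual Amazon Bedrock rates; always check the pricing page for current figures.

```csharp
using System;

static class BedrockCostEstimator
{
    // Estimate the on-demand cost of one inference request.
    // Prices are per 1,000 tokens and are hypothetical placeholders.
    public static decimal EstimateCost(int inputTokens, int outputTokens,
        decimal inputPricePer1K, decimal outputPricePer1K)
    {
        return inputTokens / 1000m * inputPricePer1K
             + outputTokens / 1000m * outputPricePer1K;
    }
}
```

For example, a request with 500 input tokens and 2,000 output tokens at hypothetical rates of $0.008 and $0.024 per 1,000 tokens would cost 0.5 × 0.008 + 2 × 0.024 = $0.052.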

Cleanup

If you followed along and tried our sample application in your own AWS account, it is important to clean up the resources that you created to stop incurring charges.

If you no longer need to use a foundation model, follow the remove model access steps from the Amazon Bedrock User Guide to remove access to it.

Conclusion

In this post, we learned about integrating generative AI into your .NET applications with Amazon Bedrock. The sample application we used is available on GitHub and shows the power of generative AI. If you are a .NET developer working on AWS, cloning this repository and experimenting with Amazon Bedrock will give you hands-on learning. Immersing yourself in real-world examples is the best way to understand how you can leverage these transformative technologies to solve problems and create innovative solutions. We also encourage you to learn and experiment with Amazon Bedrock and all the other AWS AI services you can use in your .NET code.