Amazon Bedrock Developer Experience

Amazon Bedrock makes it easy for developers to work with a broad range of high-performing foundation models

Choose from leading FMs

Amazon Bedrock makes building with a range of foundation models (FMs) as straightforward as an API call. Amazon Bedrock provides access to leading models including AI21 Labs' Jurassic, Anthropic's Claude, Cohere's Command and Embed, Meta's Llama 2, and Stability AI's Stable Diffusion, as well as our own Amazon Titan models. With Amazon Bedrock, you can select the FM that is best suited for your use case and application requirements.
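Discovering which FMs are available can itself be done programmatically. The sketch below uses boto3, the AWS SDK for Python; the region and the `TEXT` output-modality filter are assumptions for illustration.

```python
def make_bedrock_client(region_name="us-east-1"):
    """Create a Bedrock control-plane client (region is an assumption; use yours)."""
    import boto3  # AWS SDK for Python; requires AWS credentials to be configured
    return boto3.client("bedrock", region_name=region_name)

def list_text_models(client):
    """Return (modelId, providerName) pairs for models that output text."""
    resp = client.list_foundation_models(byOutputModality="TEXT")
    return [(m["modelId"], m["providerName"]) for m in resp["modelSummaries"]]
```

Calling `list_text_models(make_bedrock_client())` with valid credentials returns pairs such as a Claude or Titan model ID alongside its provider name, which you can then feed into your model-selection logic.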


Experiment with FMs for different tasks

Experiment with different FMs using interactive playgrounds for various modalities, including text, chat, and image. The playgrounds let you try out models on your own prompts so you can get a feel for each model's suitability for a given task.


Evaluate FMs to select the best one for your use case

Model Evaluation on Amazon Bedrock allows you to use automatic and human evaluations to select the best FM for a specific use case. Automatic model evaluation uses curated datasets and provides predefined metrics, including accuracy, robustness, and toxicity. For subjective metrics, you can use Amazon Bedrock to set up a human evaluation workflow in a few quick steps. With human evaluations, you can bring your own datasets and define custom metrics, such as relevance, style, and alignment to brand voice. Human evaluation workflows can use your own employees as reviewers, or you can engage a team managed by AWS to perform the evaluation, where AWS hires skilled evaluators and manages the complete workflow on your behalf. To learn more, read the blog.
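An automatic evaluation job can be created through the Bedrock API. The helper below assembles a parameter dictionary for a `create_evaluation_job` call; the exact field shapes, task type, and built-in dataset name are assumptions sketched from the API's general structure, so consult the current API reference before relying on them.

```python
def build_auto_eval_request(job_name, role_arn, model_id, output_s3_uri):
    """Assemble parameters for an automatic model evaluation job (a sketch;
    field shapes and built-in names are assumptions, not authoritative)."""
    return {
        "jobName": job_name,
        "roleArn": role_arn,  # IAM role Bedrock assumes to read/write your S3 data
        "evaluationConfig": {
            "automated": {
                "datasetMetricConfigs": [
                    {
                        "taskType": "Summarization",              # assumption: one of the built-in task types
                        "dataset": {"name": "Builtin.Gigaword"},  # assumption: a curated built-in dataset
                        "metricNames": [
                            "Builtin.Accuracy",
                            "Builtin.Robustness",
                            "Builtin.Toxicity",
                        ],
                    }
                ]
            }
        },
        "inferenceConfig": {
            "models": [{"bedrockModel": {"modelIdentifier": model_id}}]
        },
        "outputDataConfig": {"s3Uri": output_s3_uri},  # where Bedrock writes results
    }
```

You would pass this dictionary as keyword arguments to a `bedrock` client's `create_evaluation_job` method, then poll the job status until it completes.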


Privately customize FMs with your data

In a few quick steps, Amazon Bedrock lets you go from generic models to ones that are specialized and customized for your business and use case. To adapt an FM for a specific task, you can use a technique called fine-tuning. Point to a few labeled examples in Amazon Simple Storage Service (Amazon S3), and Amazon Bedrock makes a copy of the base model, trains it with your data, and creates a fine-tuned model accessible only to you, so you get customized responses. Fine-tuning is available for Command, Llama 2, Amazon Titan Text Lite and Express, Amazon Titan Image Generator, and Amazon Titan Multimodal Embeddings models.

A second way to adapt Amazon Titan Text Lite and Amazon Titan Express FMs in Amazon Bedrock is continued pretraining, a technique that uses your unlabeled datasets to customize the FM for your domain or industry.

With both fine-tuning and continued pretraining, Amazon Bedrock creates a private, customized copy of the base FM for you, and your data is not used to train the original base models. The data you use to customize models is securely transferred through your Amazon Virtual Private Cloud (Amazon VPC). To learn more, read the blog.
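A fine-tuning (or continued pretraining) run is started with a model customization job. The helper below builds the parameters for that call; the base model ID, hyperparameter keys, and S3 paths are placeholder assumptions, so check the API reference for the values your chosen base model supports.

```python
def build_fine_tuning_request(job_name, custom_model_name, role_arn,
                              base_model_id, training_s3_uri, output_s3_uri):
    """Parameters for a model customization job (a sketch; hyperparameter
    keys and supported values vary per base model and are assumptions here)."""
    return {
        "jobName": job_name,
        "customModelName": custom_model_name,
        "roleArn": role_arn,                    # IAM role with access to the S3 buckets
        "baseModelIdentifier": base_model_id,   # e.g. an Amazon Titan Text model ID
        "customizationType": "FINE_TUNING",     # "CONTINUED_PRE_TRAINING" for unlabeled data
        "trainingDataConfig": {"s3Uri": training_s3_uri},  # labeled examples in S3
        "outputDataConfig": {"s3Uri": output_s3_uri},
        "hyperParameters": {                    # assumption: illustrative keys/values only
            "epochCount": "2",
            "learningRate": "0.00001",
        },
    }
```

Passed to a `bedrock` client's `create_model_customization_job` method, this kicks off training on your private copy of the base model; switching `customizationType` to `CONTINUED_PRE_TRAINING` covers the unlabeled-data path described above.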


Single API

Use a single API to perform inference, regardless of the model you choose. A single API gives you the flexibility to use different models from different providers and to stay current with the latest model versions, all with minimal code changes.
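In practice, each provider expects a different JSON request body, but every call goes through the same InvokeModel operation on the `bedrock-runtime` client. The sketch below shows that pattern for two providers; the body field names follow the providers' documented formats for Claude 2-era and Titan Text models, but treat them as assumptions to verify against current docs.

```python
import json

def build_request_body(model_id, prompt, max_tokens=256):
    """Provider-specific request bodies; only the body differs, not the API call."""
    if model_id.startswith("anthropic."):
        # Anthropic text-completions format (Claude 2 era)
        return json.dumps({
            "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
            "max_tokens_to_sample": max_tokens,
        })
    if model_id.startswith("amazon.titan-text"):
        # Amazon Titan Text format
        return json.dumps({
            "inputText": prompt,
            "textGenerationConfig": {"maxTokenCount": max_tokens},
        })
    raise ValueError(f"no body builder for {model_id}")

def invoke(client, model_id, prompt):
    """One inference call for any model; client is a bedrock-runtime client."""
    resp = client.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=build_request_body(model_id, prompt),
    )
    return json.loads(resp["body"].read())
```

Swapping models then means changing only `model_id` (and, if the provider differs, adding a body builder), while the `invoke_model` call itself stays identical.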
