Amazon Bedrock Developer Experience

Amazon Bedrock makes it easy for developers to work with a broad range of high-performing foundation models (FMs).

Choose from leading FMs

Amazon Bedrock makes building with a range of FMs as easy as an API call. Amazon Bedrock provides access to leading models including AI21 Labs' Jurassic, Anthropic's Claude, Cohere's Command and Embed, Meta's Llama 2, and Stability AI's Stable Diffusion, as well as our own Amazon Titan models. With Amazon Bedrock, you can select the FM that is best suited for your use case and application requirements.
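As a rough illustration of that API-first experience, the sketch below uses the boto3 Bedrock control-plane client to list the FMs available in a Region. The Region name is an assumption, and response field names should be checked against the current API reference.

```python
# Minimal sketch (not an official example): list the foundation models
# available to your account with the boto3 Bedrock control-plane client.
# The Region is an illustrative assumption.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# List the foundation models Bedrock exposes in this Region.
response = bedrock.list_foundation_models()
for model in response["modelSummaries"]:
    print(model["modelId"], "-", model.get("providerName", ""))
```

From the returned model IDs you can pick the FM best suited to your use case and call it directly, as described in the sections that follow.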

Image: Titan Image Generator playground

Experiment with FMs for different tasks

You can easily experiment with different FMs using interactive playgrounds for various modalities, including text, chat, and image. The playgrounds let you try out models against your own prompts to get a feel for each model's suitability for a given task.

Image: automatic model evaluation setup

Evaluate FMs to select the best one for your use case

Model Evaluation on Amazon Bedrock allows you to use automatic and human evaluations to select FMs for a specific use case. Automatic model evaluation uses curated datasets and provides predefined metrics including accuracy, robustness, and toxicity. For subjective metrics, you can use Amazon Bedrock to set up a human evaluation workflow with a few clicks. With human evaluations, you can bring your own datasets and define custom metrics, such as relevance, style, and alignment to brand voice. Human evaluation workflows can use your own employees as reviewers, or you can engage an AWS-managed team, in which case AWS hires skilled evaluators and manages the end-to-end workflow on your behalf. To learn more, read the blog.
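Evaluation jobs can also be started programmatically. The following is a hedged sketch using the boto3 create_evaluation_job API; the job name, IAM role, built-in dataset and metric names, model ID, and S3 output location are illustrative placeholders and should be verified against the current API reference.

```python
# Hedged sketch, not a verified example: start an automatic model evaluation
# job with the boto3 Bedrock client. Role ARN, bucket, dataset, and metric
# names below are placeholders/assumptions.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_evaluation_job(
    jobName="my-summarization-eval",  # hypothetical job name
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # placeholder
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "Summarization",
                    # Assumed built-in dataset name; you can also point at your own S3 data.
                    "dataset": {"name": "Builtin.Gigaword"},
                    "metricNames": ["Builtin.Accuracy", "Builtin.Toxicity"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {"bedrockModel": {"modelIdentifier": "amazon.titan-text-express-v1"}}
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-results-bucket/"},  # placeholder
)
print(response["jobArn"])
```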

Image: configuration page for a fine-tuned model

Privately customize FMs with your data

With a few clicks, Amazon Bedrock lets you go from generic models to ones that are specialized and customized for your business and use case. To adapt an FM for a specific task, you can use a technique called fine-tuning. Simply point to a few labeled examples in Amazon S3, and Amazon Bedrock makes a copy of the base model, trains it with your data, and creates a fine-tuned model accessible only to you, so you get customized responses. Fine-tuning is available for Command, Llama 2, Titan Text Lite and Express, Titan Image Generator, and Titan Multimodal Embeddings models.

A second way to adapt Titan Text Lite and Express FMs in Amazon Bedrock is continued pre-training, a technique that uses your unlabeled datasets to customize the FM for your domain or industry. With both fine-tuning and continued pre-training, Amazon Bedrock creates a private, customized copy of the base FM for you, and your data is not used to train the original base models. Your data used to customize models is securely transferred through your Amazon Virtual Private Cloud (VPC). To learn more, read the blog.
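As a rough sketch of how this looks programmatically, the snippet below starts a fine-tuning job with the boto3 create_model_customization_job API. The role ARN, S3 paths, job and model names, and hyperparameter values are illustrative placeholders, not recommendations; continued pre-training uses the same call with a different customizationType.

```python
# Minimal sketch, assuming the boto3 Bedrock create_model_customization_job
# API. All names, ARNs, S3 URIs, and hyperparameter values are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

response = bedrock.create_model_customization_job(
    jobName="titan-text-finetune-demo",       # hypothetical job name
    customModelName="my-titan-text-custom",   # hypothetical model name
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",  # placeholder
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",  # or "CONTINUED_PRE_TRAINING" for unlabeled data
    trainingDataConfig={"s3Uri": "s3://my-training-bucket/labeled-examples.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-training-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
)
print(response["jobArn"])
```

Once the job completes, the resulting custom model is visible only to your account and can be invoked like any other Bedrock model (after provisioning throughput for it).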


Single API

Use a single API to perform inference, regardless of the model you choose. A single API gives you the flexibility to use different models from different model providers and to keep up to date with the latest model versions with minimal code changes.
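As a rough sketch, the snippet below wraps the InvokeModel API of the boto3 bedrock-runtime client: the call shape stays the same, and switching models means changing the model ID (plus the provider-specific request body). The Region, model ID, and prompt are illustrative.

```python
# Sketch of single-API inference with the boto3 bedrock-runtime client.
# Only the modelId and the provider-specific request body change when you
# swap models; the InvokeModel call itself stays the same.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def generate(model_id: str, body: dict) -> dict:
    """Invoke any Bedrock model through the same InvokeModel API."""
    response = runtime.invoke_model(
        modelId=model_id,
        contentType="application/json",
        accept="application/json",
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())

# Example call against Amazon Titan Text; other providers use their own
# request-body schemas but the same generate() wrapper.
result = generate(
    "amazon.titan-text-express-v1",
    {"inputText": "Summarize the benefits of a single inference API."},
)
print(result)
```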