
Build generative AI applications on Amazon Bedrock — the secure, compliant, and responsible foundation

Generative AI has revolutionized industries by creating content, from text and images to audio and code. Although it can unlock numerous possibilities, integrating generative AI into applications demands meticulous planning. Amazon Bedrock is a fully managed service that provides access to large language models (LLMs) and other foundation models (FMs) from leading AI companies through a single API, along with a broad set of tools and capabilities to help you build generative AI applications.

Starting today, I’ll be writing a blog series to highlight some of the key factors driving customers to choose Amazon Bedrock. One of the most important reasons is that Bedrock enables customers to build a secure, compliant, and responsible foundation for generative AI applications. In this post, I explore how Amazon Bedrock helps address security and privacy concerns, enables secure model customization, accelerates auditability and incident response, and fosters trust through transparency and responsible AI. I’ll also showcase real-world examples of companies building secure generative AI applications on Amazon Bedrock, demonstrating its practical applications across different industries.

Listening to what our customers are saying

During the past year, my colleague Jeff Barr, VP & Chief Evangelist at AWS, and I have had the opportunity to speak with numerous customers about generative AI. They mention compelling reasons for choosing Amazon Bedrock to build and scale their transformative generative AI applications. Jeff’s video highlights some of the key factors driving customers to choose Amazon Bedrock today.

As you build and operationalize generative AI, it’s important not to lose sight of critically important elements: security, compliance, and responsible AI, particularly for use cases involving sensitive data. The OWASP Top 10 for LLMs outlines the most common vulnerabilities, but addressing them may require additional effort, including stringent access controls, data encryption, prevention of prompt injection attacks, and compliance with your organization’s policies. You want to make sure your AI applications work reliably as well as securely.

Making data security and privacy a priority

For many organizations starting their generative AI journey, the first concern is making sure their data remains secure and private when it is used for model tuning or Retrieval Augmented Generation (RAG). Amazon Bedrock takes a multi-layered approach to this issue, helping you ensure that your data remains secure and private throughout the entire lifecycle of building generative AI applications:

  • Data isolation and encryption. Any customer content processed by Amazon Bedrock, such as customer inputs and model outputs, is not shared with any third-party model providers, and will not be used to train the underlying FMs. Furthermore, data is encrypted in-transit using TLS 1.2+ and at-rest through AWS Key Management Service (AWS KMS).
  • Secure connectivity options. You have flexibility in how you connect to the Amazon Bedrock API endpoints: you can use public internet gateways, use AWS PrivateLink (VPC endpoints) for private connectivity, or even backhaul traffic over AWS Direct Connect from your on-premises networks.
  • Model access controls. Amazon Bedrock provides robust access controls at multiple levels. Model access policies let you explicitly allow or deny enabling specific FMs for your account. AWS Identity and Access Management (IAM) policies let you further restrict which provisioned models your applications and roles can invoke, and which APIs can be called on those models, as illustrated in the sketch after this list.
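
To make the last two items concrete, here is a minimal sketch in Python (boto3) of an IAM policy that allows invoking only one approved FM, and only through a specific Amazon Bedrock VPC endpoint. The account, policy name, model ID, and VPC endpoint ID are placeholder assumptions; adjust them to your environment before attaching the policy to your application roles.

```python
import json
import boto3

iam = boto3.client("iam")

# Allow invoking a single approved model, and only via a specific
# Amazon Bedrock VPC endpoint (AWS PrivateLink). IDs below are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeOnlyApprovedModel",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
            "Condition": {
                "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
            },
        }
    ],
}

response = iam.create_policy(
    PolicyName="BedrockInvokeApprovedModelOnly",
    PolicyDocument=json.dumps(policy_document),
)
print(response["Policy"]["Arn"])
```

Attaching a scoped policy like this to an application role, rather than granting broad bedrock permissions, keeps model usage limited to the FMs and network paths you have explicitly approved.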

Druva provides a data security software-as-a-service (SaaS) solution to enable cyber, data, and operational resilience for all businesses. They used Amazon Bedrock to rapidly experiment, evaluate, and implement different LLM components tailored to solve specific customer needs around data protection without worrying about the underlying infrastructure management.

“We built our new service Dru — an AI co-pilot that both IT and business teams can use to access critical information about their protection environments and perform actions in natural language — in Amazon Bedrock because it provides fully managed and secure access to an array of foundation models,”

– David Gildea, Vice President of Product, Generative AI at Druva.

Ensuring secure customization

For many organizations, a critical aspect of generative AI adoption is the ability to securely customize models and applications to align with specific use cases and requirements, whether through RAG or by fine-tuning FMs. Amazon Bedrock offers a secure approach to model customization, so sensitive data remains protected throughout the entire process:

  • Model customization data security. When you fine-tune a model, Amazon Bedrock reads your encrypted training data from an Amazon Simple Storage Service (Amazon S3) bucket through a private VPC connection (see the sketch after this list). Amazon Bedrock doesn’t use model customization data for any other purpose. Your training data isn’t used to train the base Amazon Titan models or distributed to third parties. Nor is other usage data, such as usage timestamps, logged account IDs, and other information logged by the service, used to train the models. In fact, none of the training or validation data you provide for fine-tuning or continued pre-training is stored by Amazon Bedrock. When the model customization work is complete, the resulting custom model remains isolated and encrypted with your KMS keys.
  • Secure deployment of fine-tuned models. The pre-trained or fine-tuned models are deployed in isolated environments specifically for your account. You can further encrypt these models with your own KMS keys, preventing access without appropriate IAM permissions.
  • Centralized multi-account model access. AWS Organizations lets you centrally manage your environment across multiple accounts: you can create and organize accounts in an organization, consolidate costs, and apply policies for custom environments. For organizations with multiple AWS accounts or a distributed application architecture, Amazon Bedrock supports centralized governance of and access to FMs, so you can secure your environment, create and share resources, and centrally manage permissions. Using standard AWS cross-account IAM roles, administrators can grant secure access to models across different accounts, enabling controlled and auditable usage from a single point of control.
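
As a rough illustration of the customization flow described above, the following boto3 sketch starts a fine-tuning job that reads encrypted training data from Amazon S3, encrypts the resulting custom model with a customer managed AWS KMS key, and runs the job inside your VPC. The role, bucket, key, subnet, security group, and hyperparameter values are placeholder assumptions and vary by base model, so treat this as a starting point rather than a definitive implementation.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Placeholders: substitute your own role, bucket, KMS key, and VPC resources.
response = bedrock.create_model_customization_job(
    jobName="demo-fine-tuning-job",
    customModelName="demo-custom-model",
    roleArn="arn:aws:iam::111122223333:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    # The custom model artifacts are encrypted with your customer managed key.
    customModelKmsKeyId="arn:aws:kms:us-east-1:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
    trainingDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/train/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://amzn-s3-demo-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1", "learningRate": "0.00001"},
    # Keep training traffic on your private network via a VPC configuration.
    vpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)
print(response["jobArn"])
```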

With seamless access to LLMs in Amazon Bedrock—and with data encrypted in-transit and at-rest—BMW Group securely delivers high-quality connected mobility solutions to motorists around the world.

“Using Amazon Bedrock, we’ve been able to scale our cloud governance, reduce costs and time to market, and provide a better service for our customers. All of this is helping us deliver the secure, first-class digital experiences that people across the world expect from BMW.”

– Dr. Jens Kohl, Head of Offboard Architecture, BMW Group.

Enabling auditability and visibility

In addition to the security controls around data isolation, encryption, and access, Amazon Bedrock provides capabilities to enable auditability and accelerate incident response when needed:

  • Compliance certifications. For customers with stringent regulatory requirements, Amazon Bedrock can be used in compliance with the General Data Protection Regulation (GDPR), the Health Insurance Portability and Accountability Act (HIPAA), and more. In addition, AWS has successfully extended the registration status of Amazon Bedrock in the Cloud Infrastructure Service Providers in Europe Data Protection Code of Conduct (CISPE CODE) Public Register. This declaration provides independent verification and an added level of assurance that Amazon Bedrock can be used in compliance with the GDPR. For federal agencies and public sector organizations, Amazon Bedrock recently achieved FedRAMP Moderate authorization, approved for use in the US East and US West AWS Regions. Amazon Bedrock is also under JAB review for FedRAMP High authorization in AWS GovCloud (US).
  • Monitoring and logging. Native integrations with Amazon CloudWatch and AWS CloudTrail provide comprehensive monitoring, logging, and visibility into API activity, model usage metrics, token consumption, and other performance data. These capabilities enable continuous monitoring for improvement, optimization, and auditing as needed, something we know is critical from working with customers in the cloud for the last 18 years. Amazon Bedrock also lets you enable detailed logging of all model inputs and outputs, including the IAM invocation role and the metadata associated with all calls performed in your account; see the sketch after this list. These logs help you monitor whether model responses adhere to your organization’s AI policies and reputation guidelines. When you enable model invocation logging, you can use AWS KMS to encrypt your log data and use IAM policies to control who can access it. None of this data is stored within Amazon Bedrock; it is only available within your account.
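
For example, here is a minimal boto3 sketch of enabling model invocation logging to both CloudWatch Logs and Amazon S3. It assumes you have already created the log group, the S3 bucket, and an IAM role that Amazon Bedrock can assume to deliver logs; those names and ARNs are placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Send full request/response logs to CloudWatch Logs and Amazon S3.
# The log group, role, and bucket below are placeholders you must create first.
bedrock.put_model_invocation_logging_configuration(
    loggingConfig={
        "cloudWatchConfig": {
            "logGroupName": "/bedrock/model-invocations",
            "roleArn": "arn:aws:iam::111122223333:role/BedrockLoggingRole",
        },
        "s3Config": {
            "bucketName": "amzn-s3-demo-logging-bucket",
            "keyPrefix": "bedrock-invocation-logs/",
        },
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
)

# Confirm the configuration that is now in effect.
print(bedrock.get_model_invocation_logging_configuration()["loggingConfig"])
```

Because these logs can contain prompts and model outputs, scope access to the log group and bucket with IAM policies and encrypt them with your own KMS keys, as noted above.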

Implementing responsible AI practices

AWS is committed to developing generative AI responsibly, taking a people-centric approach that prioritizes education, science, and our customers, to integrate responsible AI across the full AI lifecycle. With AWS’s comprehensive approach to responsible AI development and governance, Amazon Bedrock empowers you to build trustworthy generative AI systems in line with your responsible AI principles.

We give our customers the tools, guidance, and resources they need to get started with purpose-built services and features, including several in Amazon Bedrock:

  • Safeguard generative AI applications. Guardrails for Amazon Bedrock is the only responsible AI capability offered by a major cloud provider that enables customers to customize and apply safety, privacy, and truthfulness checks for their generative AI applications. Guardrails helps customers block as much as 85% more harmful content than the protection natively provided by some FMs on Amazon Bedrock today. It works with all LLMs in Amazon Bedrock and with fine-tuned models, and it integrates with Agents and Knowledge Bases for Amazon Bedrock. Customers can define content filters with configurable thresholds to help filter harmful content across hate speech, insults, sexual language, violence, misconduct (including criminal activity), and prompt attacks (prompt injection and jailbreaks). Using a short natural language description, Guardrails for Amazon Bedrock lets you detect and block user inputs and FM responses that fall under restricted topics or contain sensitive content such as personally identifiable information (PII). You can combine multiple policy types to configure these safeguards for different scenarios and apply them across FMs on Amazon Bedrock (see the sketch after this list). This helps ensure that your generative AI applications adhere to your organization’s responsible AI policies and provide a consistent, safe user experience.
  • Model evaluation. Now available, Model Evaluation on Amazon Bedrock helps customers evaluate, compare, and select the best FMs for their specific use case based on custom metrics, such as accuracy and safety, using either automatic or human evaluations. For automatic evaluations, customers pick criteria such as accuracy or toxicity and use their own data or public datasets. For evaluations that need human judgment, customers can set up workflows for human review with a few clicks. After setup, Amazon Bedrock runs the evaluations and produces a report showing how well the model performed on important safety and accuracy measures. This report helps customers choose the best model for their needs, which is especially important when they are evaluating a migration from an existing model to a new model in Amazon Bedrock for an application.
  • Watermark detection. All Amazon Titan FMs are built with responsible AI in mind. Amazon Titan Image Generator, a foundation model that lets users create realistic, studio-quality images in large volumes and at low cost using natural language prompts, embeds imperceptible digital watermarks in the images it creates. Watermark detection lets you identify images generated by Amazon Titan Image Generator, including those produced by the base model and by any customized versions. With this feature, you can increase transparency around AI-generated content and help reduce the spread of misinformation. It also provides a confidence score, so you can assess the reliability of the detection even if the original image has been modified. Simply upload an image in the Amazon Bedrock console, and the watermark detection capability will identify watermarks embedded by Titan Image Generator.
  • AI Service Cards provide transparency and document the intended use cases and fairness considerations for our AWS AI services. Our latest service cards include Amazon Titan Text Premier, Amazon Titan Text Lite, and Amazon Titan Text Express, with more coming soon.
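
As a hedged, minimal illustration of Guardrails for Amazon Bedrock (referenced in the first item above), the following boto3 sketch creates a guardrail with a denied topic, content filters (including prompt-attack detection), and PII masking, then applies it to an InvokeModel call. The guardrail name, messages, model ID, and use of the DRAFT working version are placeholder assumptions, and the available filter types and strengths may differ as the service evolves.

```python
import json
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Create a guardrail with a denied topic, content filters, and PII masking.
guardrail = bedrock.create_guardrail(
    name="demo-support-guardrail",
    description="Blocks investment advice and harmful content, and masks emails.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "InvestmentAdvice",
                "definition": "Recommendations about specific financial investments.",
                "type": "DENY",
            }
        ]
    },
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't share that response.",
)

# Apply the guardrail to a model invocation (using the working draft version).
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    guardrailIdentifier=guardrail["guardrailId"],
    guardrailVersion="DRAFT",
    body=json.dumps(
        {
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": "Which stock should I buy?"}],
        }
    ),
)
print(json.loads(response["body"].read()))
```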

Aha! is a software company that helps more than 1 million people bring their product strategy to life.

“Our customers depend on us every day to set goals, collect customer feedback, and create visual roadmaps. That is why we use Amazon Bedrock to power many of our generative AI capabilities. Amazon Bedrock provides responsible AI features, which enable us to have full control over our information through its data protection and privacy policies, and block harmful content through Guardrails for Bedrock.”

– Dr. Chris Waters, co-founder and Chief Technology Officer at Aha!

Building trust through transparency

By addressing security, compliance, and responsible AI holistically, Amazon Bedrock helps customers unlock the transformative potential of generative AI. As generative AI capabilities continue to evolve rapidly, building trust through transparency is crucial. The Amazon Bedrock team works continuously to develop safe and secure capabilities and practices, helping you build generative AI applications responsibly.

The bottom line? Amazon Bedrock makes it easier for you to unlock sustained growth with generative AI and experience the power of LLMs. Get started today: build AI applications or customize models securely with your own data to begin your generative AI journey with confidence.

About the author

Vasi Philomin is VP of Generative AI at AWS. He leads generative AI efforts, including Amazon Bedrock and Amazon Titan.