AWS Cloud Operations Blog
AWS Audit Manager launches AWS Best Practices Framework for Generative AI
The rapid growth of generative AI brings promising new innovation, and at the same time raises new challenges. At AWS, we are committed to developing AI responsibly while enabling customers to provide assurance regarding the security of their environment to regulators and auditors. AWS Audit Manager announces the first version of the AWS Best Practices Framework for Generative AI, which automates evidence collection for Amazon Bedrock. With this framework, customers can harness the full potential of generative AI while addressing concerns around ethical and responsible usage.
AWS Audit Manager, launched in December 2020, is a service that automates evidence collection (resource configurations and usage activity) against pre-defined controls, monitoring whether a customer’s AWS usage matches their intended policy and regulatory requirements. Amazon Bedrock, launched in September 2023, is a fully managed service that makes foundation models (FMs) from Amazon and other leading AI companies available through an API, enabling you to privately tune existing large language models (LLMs) with your organization’s data. Amazon Bedrock customers can deploy this new best practices framework via AWS Audit Manager in the accounts where they run their generative AI models and applications, to collect evidence that helps monitor compliance with their intended policies.
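To make the Bedrock API concrete, here is a rough sketch of assembling an invocation request. The model ID and body schema below are illustrative (request formats vary by model family; consult the Bedrock documentation for the exact schema), and the boto3 call is shown only as a comment since it requires AWS credentials:

```python
import json

# Illustrative only: the model ID and body fields vary by model family.
model_id = "amazon.titan-text-express-v1"  # example model ID
request_body = json.dumps({
    "inputText": "Summarize our data-retention policy in one sentence.",
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

# With boto3 installed and credentials configured, the call would look like:
# import boto3
# bedrock = boto3.client("bedrock-runtime")
# response = bedrock.invoke_model(modelId=model_id, body=request_body)

# Locally, we can at least confirm the payload is well-formed JSON.
parsed = json.loads(request_body)
print(sorted(parsed))  # ['inputText', 'textGenerationConfig']
```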
AWS experts in AI, compliance, and security assurance developed this framework, with additional review by global audit and assurance firm Deloitte, an AWS Partner. A typical security or compliance framework builds a perimeter around known risks and entities based on a particular mission or industry objective. With this in mind, the goal of this framework is to help customers adopt a brand-new technology with discretion while regulations and compliance guidance mature.
“As we see more standardized use cases for Generative AI develop, it will become important to see standard controls and Generative AI guardrails implemented to address Generative AI-specific risks, such as toxicity and hallucination. AWS’s framework in Audit Manager for Generative AI provides organizations the ability to begin monitoring evolving AI risks and explore more Generative AI opportunities.”
– Christina DeJong, CPA – Partner, Deloitte
This new framework groups 110 controls into 32 objectives across 8 essential domains: Accuracy, Fair, Privacy, Resilience, Responsible, Safe, Secure, and Sustainable. This provides risk coverage at each technology layer, including the generative AI system, the underlying models, the data that customers input, and the data that is ultimately generated. For example, customers seeking to mitigate known biases before feeding data into their model can use the ‘Pre-processing Techniques’ control in this framework to require evidence of validation criteria, including documentation of data augmentation, re-weighting, or re-sampling.
Setting up an assessment with AWS Audit Manager
Using the best practices for generative AI v1 framework in Audit Manager is simple. Be sure the following prerequisites are in place before creating an Audit Manager assessment.
Prerequisites
- Active use of Amazon Bedrock in the account(s) where Audit Manager will run the assessment. If you have not set up Amazon Bedrock yet, follow these instructions to get started.
- Audit Manager is enabled in the account. If you have not previously enabled Audit Manager in your account(s), follow these instructions to get started.
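Programmatically, you can confirm the second prerequisite by checking the account status. A minimal sketch, assuming the response shape of boto3’s `auditmanager.get_account_status()` (which returns a `status` of `ACTIVE`, `INACTIVE`, or `PENDING_ACTIVATION`):

```python
def audit_manager_ready(status_response):
    """Decide whether Audit Manager is enabled in the account, given a
    response shaped like boto3 auditmanager.get_account_status():
    {"status": "ACTIVE" | "INACTIVE" | "PENDING_ACTIVATION"}."""
    return status_response.get("status") == "ACTIVE"

# With boto3 and credentials configured, you would obtain the response via:
# import boto3
# status = boto3.client("auditmanager").get_account_status()

print(audit_manager_ready({"status": "ACTIVE"}))  # True
```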
Start by navigating to the AWS console and selecting or searching for Audit Manager.
From the Audit Manager console select “Create assessment” in the top right:
Next, you will need to provide two pieces of required information: a name for the assessment (300 characters or fewer) and an S3 bucket where the assessment reports will be placed when generated. Optionally, you can provide an assessment description of up to 1,000 words.
The next step is to choose the “AWS Generative AI Best Practices Framework v1” framework from the list of standard frameworks as shown:
Choose Add new tag to associate a tag with your assessment. You can specify a key and a value for each tag. The tag key is mandatory and can be used as search criteria when you search for this assessment. For more information about tags in Audit Manager, see Tagging AWS Audit Manager resources. You can optionally add up to 50 tags to the assessment before choosing “Next”:
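The tag constraints above can also be sketched as a local check (the 50-tag limit and mandatory key come from this post):

```python
MAX_TAGS = 50  # limit stated above

def validate_tags(tags):
    """Check a {key: value} tag dict against the constraints described
    above: at most 50 tags, and every tag must have a non-empty key."""
    errors = []
    if len(tags) > MAX_TAGS:
        errors.append(f"at most {MAX_TAGS} tags allowed, got {len(tags)}")
    for key in tags:
        if not key:
            errors.append("tag keys must be non-empty")
    return errors

print(validate_tags({"environment": "production"}))  # []
```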
The next step is to specify the AWS accounts in scope for the assessment. In my demonstration environment, I am selecting the Development account, the Logging account, and a single production application account. Your own selection may vary, but if you have a logging-type account, include it to capture relevant events that could be routed there. After specifying the accounts, choose “Next” to continue.
Because this is a standard rather than a custom framework, Audit Manager pre-selects the AWS services to collect evidence from (this determines where the evidence data comes from, not which services are being tested). The only thing left to do on this screen is select “Next”.
You will need to select one or more “Audit Owners” by user name or role. These identities will be able to make changes to the assessment, so this role is generally reserved for those with that type of duty in the organization. After selecting the audit owners, choose “Next” to continue.
Finally, the next screen shows all of the choices you have made and allows you to go back and make changes or, if you are satisfied with the choices, select “Create assessment”.
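The console steps above map onto a single CreateAssessment API call. Here is a hedged sketch of that request payload: the account IDs, role ARN, and framework ID are placeholders, and the parameter names and the `PROCESS_OWNER` role type follow my reading of the boto3 `auditmanager.create_assessment` API, so verify them against the current documentation:

```python
# Placeholders: substitute your own account IDs, role ARN, and the
# framework ID returned by list_assessment_frameworks for
# "AWS Generative AI Best Practices Framework v1".
assessment_request = {
    "name": "GenAI best practices assessment",
    "description": "Monitors Amazon Bedrock usage against the GenAI framework.",
    "assessmentReportsDestination": {
        "destinationType": "S3",
        "destination": "s3://my-audit-reports",
    },
    "scope": {
        "awsAccounts": [
            {"id": "111111111111"},  # Development
            {"id": "222222222222"},  # Logging
            {"id": "333333333333"},  # Production application
        ],
    },
    "roles": [
        {"roleType": "PROCESS_OWNER",  # audit owner
         "roleArn": "arn:aws:iam::111111111111:role/AuditOwner"},
    ],
    "frameworkId": "<framework-id-from-list_assessment_frameworks>",
}

# With boto3 and credentials configured:
# import boto3
# response = boto3.client("auditmanager").create_assessment(**assessment_request)

print(sorted(assessment_request))
```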
As the banner above states, assuming Amazon Bedrock is actively being used in the account(s) selected during the assessment setup, you should begin seeing evidence within 24 hours.
If you wish to dive deeper, you can customize the framework to suit your specific needs. For more information, see the AWS Audit Manager documentation on customizing standard frameworks.
About the authors