AI Safety Summit - Enhancing Frontier AI Safety

We thank the government of the United Kingdom for organizing the AI Safety Summit, which comes at a crucial moment in the continuing development and public understanding of AI. The pace of AI innovation in just the past year has captivated public attention. While there is recognition that AI has the potential to reshape the economy and improve modern life, recent advances have also prompted understandable questions about whether and how organisations developing “frontier” AI systems are accounting for their unique risks. Because the opportunities and the risks presented by AI are fundamentally global in nature, the AI Safety Summit is an opportune moment for industry, governments, and civil society to align around a shared vision for ensuring that frontier AI is safe, secure, and trustworthy. 

Earlier this year we took an important first step towards this goal when we proudly endorsed the White House Voluntary AI Commitments. Recognizing the dynamic state-of-the-art, the White House Commitments are built on a forward-looking set of best practices that are flexible and durable enough to evolve along with the technology. The Commitments set out ambitious and concrete objectives for managing many of the unique risks of generative AI and for building trust with the public. While Amazon initially signed the Commitments in the US, our efforts to operationalize them are not territorially limited.

The AI Safety Summit is an opportunity to build on the White House Voluntary AI Commitments. To foster global confidence that the Commitments are meaningful, it is important that we provide information about how our existing AI development practices align with the underlying commitments and the additional areas of inquiry set out in the “AI Safety Policies” questionnaire. Our responses below are organised by topic and highlight the best practices and policies that guided our development and deployment of Amazon Titan, the family of large language models we recently launched in connection with Bedrock, our foundation model service. 

As our AI services and the science continue to evolve, so too will the policies and practices we use to operationalize the Commitments. We look forward to discussing our current practices and learning from the global community that will be in the United Kingdom for the Summit. 

Amazon and AI

Amazon’s perspectives on AI are informed by our dual role as a developer of AI technology and a deployer of AI tools and services. Artificial intelligence (AI) and Machine Learning (ML) have been a focus for Amazon for over 25 years, and many of the capabilities customers use with Amazon are driven by ML. Our e-commerce recommendations engine is driven by ML; the paths that optimize robotic picking routes in our fulfilment centers are driven by ML; and our supply chain, forecasting, and capacity planning are informed by ML. Prime Air (our drones) and the computer vision technology in Amazon Go (our retail experience that lets consumers select items off a shelf and leave the store without having to formally check out) use deep learning. Alexa, powered by more than 30 different machine learning systems, helps customers billions of times each week to manage smart homes, shop, get information and entertainment, and more. We have thousands of engineers at Amazon committed to ML, and it’s a big part of our heritage, current ethos, and future. 

Amazon Web Services (AWS) has played a key role in helping organisations across industry sectors leverage AI to improve their productivity, enhance their competitiveness, and better serve their customers. AWS offers the broadest and deepest portfolio of AI and ML services for cloud customers, empowering developers to build, train, and deploy their own ML models or easily incorporate pre-trained AI functionality. From the most performant, scalable infrastructure for ML training and inference to API-based capabilities like anomaly detection, forecasting, and intelligent search, we provide customers with a broad range of options to help them benefit from AI. AWS facilitates access to a range of generative AI and foundation model (FM) services, including:

  • Amazon Bedrock is a fully managed service that makes available a broad range of high-performing FMs from leading AI companies, including Anthropic's Claude, AI21 Labs' Jurassic-2, Stability AI's Stable Diffusion, Cohere's Command and Embed, Meta's Llama 2, and Amazon's own Titan language and embeddings models.
  • Amazon Titan Foundation Models are a family of large language models available through Amazon Bedrock that customers can use to power their generative AI applications. Titan Text generates text from a simple natural language command for a wide variety of applications like writing blogs, emails, summarizing documents, open-ended Q&A, and information extraction. Titan Embeddings generates text embeddings (numerical representation of text) for applications like search, anomaly detection, and personalization.
  • Amazon SageMaker JumpStart is a machine learning hub that provides SageMaker users with access to pretrained models, including foundation models, to perform tasks like article summarization and image generation. 
  • Amazon CodeWhisperer is an AI coding companion that increases developer productivity by generating code suggestions in real-time based on developers' comments in natural language and prior code in their Integrated Development Environment.
  • Amazon QuickSight incorporates generative AI capabilities that enable users to interact with datasets and easily create and customize visuals using natural-language commands. 
  • Please share any draft or enacted policy documents that describe the framework you use to make decisions about how to manage development and deployment of increasingly capable frontier AI systems, including current models. This should include the conditions under which you’d consider it unsafe to continue AI deployment (or, as appropriate, development), or disclose the risk (e.g. to governments). We expect this policy will likely reference policies described in other parts of this submission, for example, evaluations and red-teaming and security controls.

    • Reliable and objective measurement of frontier model capabilities is an evolving science. AWS currently uses risk-based approaches to evaluate our Titan FMs. Our risk evaluations rely on a combination of use case analysis, threat modeling, and red teaming to assess the overall level of risk a model may pose relative to a baseline (e.g., whether the model increases the likelihood or severity of potential harm that a malicious actor could cause compared to access to the internet alone). Based on our current evaluations, Titan FMs do not pose added safety risks compared to such a baseline.
    • Looking forward to future generations of models, AWS and the broader scientific community will need to address several fundamental questions as we work to establish objective measurements of model capabilities. For instance: we need (1) shared frameworks for defining and measuring what a “capability” is, (2) more nuanced understandings of the relationships between “capabilities” and “risk” (e.g., capability does not necessarily correlate with risk), and (3) methods of provably enforcing constraints on the exercise of an AI’s capability to mitigate potential risks.
  • To make informed decisions about model deployment and risk mitigation design, it is critical to understand a model’s capabilities and limitations, and its potential for harm under real world conditions (including misuse). This section addresses the systematic model evaluation and red teaming necessary to support the sort of safeguards covered above. Please describe the steps you take to gauge the full extent and limitations of your models’ capabilities, and their potential for harm under real world settings (including misuse).

    Safety Evaluation and Red Teaming

    • Risk management involves a careful assessment of the effectiveness of the risk mitigations we implement. Performing such an assessment typically requires multiple datasets, each focused on a different use case or risk. It also involves tracking multiple different development trajectories - the trajectory of the FM itself, and the trajectories of the test datasets, because each test result is a function of both the model and dataset used.
    • For a managed FM service such as Amazon Titan Text, we test end-to-end for a broad set of possible risks. Examples of risks include underperformance on overall utility (fitness for purpose), controllability, privacy, security, fairness, safety, robustness, explainability, veracity, and alignment. Testing occurs at the component, subsystem, and system levels. Each test is associated with a risk, and is characterized by who conducts it, whether the input is manual or automated, whether the assessment is manual or automated, and other factors, such as statistical power.
    • When assessing a risk (e.g., safety, privacy, security) requires more specialized knowledge than the development team possesses, we engage expertise external to the team (e.g., our responsible AI security experts, external red teaming vendors) to augment our internal testing capability. Our testing options include, but are not limited to: testing on internal benchmark datasets to determine progress against internal design objectives, testing on public benchmark datasets, testing against use case-specific datasets that represent anticipated customer uses, manual red teaming by independent parties to “jailbreak” design policies and surface unexpected uses, automated red-teaming, customer testing on their own use cases that they choose to disclose, and manual and automated security testing for cloud and AI-specific vulnerabilities.
    • To keep risk assessments current, we test during development, prior to release, and post-release. We integrate results from teams independent of the service team to continuously improve our risk coverage and our test suites.
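The test-suite bookkeeping described above (each test tied to a risk, a conductor, manual or automated input and assessment, and both a model version and a dataset version, since every result depends on both) can be sketched as a structured record. The schema below is purely illustrative and does not represent Amazon's internal tooling; all field and dataset names are hypothetical:

```python
# Illustrative schema for tracking risk-based evaluations of an FM.
# Field names, datasets, and model versions are hypothetical examples.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"
    AUTOMATED = "automated"

@dataclass
class RiskTest:
    """One evaluation in a risk test suite."""
    risk: str               # e.g. "safety", "privacy", "fairness", "veracity"
    conducted_by: str       # service team, internal experts, external vendor
    input_mode: Mode        # how test inputs are produced
    assessment_mode: Mode   # how results are judged
    dataset: str            # dataset name + version: results depend on it
    model_version: str      # model snapshot under test

suite = [
    RiskTest("safety", "external red team", Mode.MANUAL, Mode.MANUAL,
             "jailbreak-probes-v3", "titan-text-2023-09"),
    RiskTest("fairness", "service team", Mode.AUTOMATED, Mode.AUTOMATED,
             "internal-benchmark-v7", "titan-text-2023-09"),
]

# Coverage check: which tracked risks lack a test in the current suite?
tracked_risks = {"safety", "privacy", "fairness"}
missing = tracked_risks - {t.risk for t in suite}
```

Tracking the dataset version alongside the model version makes it explicit that a changed score can reflect a change in either trajectory, which is the point made above.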

    Security Evaluation and Red Teaming

    • Highly available and secure information systems are foundational to AWS’s mission and business functions. It is therefore critical that AWS’ Services and infrastructure operate in a manner consistent with the highest security standards in the industry. AWS integrates security review and testing activities into design, development, and deployment, and regularly performs security reviews throughout the entirety of an AWS Services’ lifecycle. In addition, the AWS Penetration Testing Program uses industry experts to continually validate and improve AWS Services’ ability to protect, detect, respond, and recover from emerging threats and vulnerabilities.
    • Our approach to AI security builds on our traditional security practices, and layers on additional testing to account for AI-specific risks, including risks specific to generative AI. We have built a list of known security risks with the support of our threat intelligence experts across Amazon Security. Once launched, we incorporate continuous feedback loops to evaluate the performance of our models and ensure that we can quickly react to new vectors of risk.
    • We continuously improve our mechanisms for patchability and safeguards in our AI services by testing at each layer, and then apply those learnings to improve mitigating controls in these systems. In some models, this means we fine-tune to optimize response patterns for particular types of gaps, while in others it means we enforce stronger isolation and controls on the operating environment. We test these systems by leveraging our red teams and penetration testing processes, and refine the controls where applicable across the stack. 
  • Model reporting and information sharing is a crucial tool to enable a range of actors (government, other AI labs, downstream model users, online platforms on which AI-generated content is likely to be published, etc.) to use models responsibly or mitigate risks associated with their use. Please describe your processes for information sharing, including what level of information detail you share with whom and for what purposes. Please also describe your processes for determining whether the information you share is accurate and maximally informative to its intended audience.

    • Titan Text models are currently offered to select customers as a limited preview. The preview period is an important opportunity to work with established customers, get feedback on Titan FM performance, and learn how we can best support the diverse range of use cases that may arise when the service is made generally available. We are incorporating lessons from the preview period to develop an AI Service Card for our Titan models that accounts for customer needs.
    • We recently launched AWS AI Service Cards to provide our customers and the broader AI community with greater transparency about our AI services. AI Service Cards are a form of responsible AI documentation that provide a single place to find information on the intended use cases and limitations, responsible AI design choices, and deployment and performance optimization best practices for our AI services. They are part of a comprehensive development process we undertake to build our services in a responsible way that addresses fairness and bias, explainability, robustness, governance, transparency, privacy, and security.
    • Each AI Service Card contains four sections:
    1. An overview of the key operational features of the AI service to help customers understand its functionality
    2. An analysis of intended use cases and limitations that provides information about common uses for a service, and helps customers assess whether a service is a good fit for their application.
    3. Information about the design of the service and the key responsible AI design considerations across important areas, such as our test-driven methodology, fairness and bias, explainability, and performance expectations. We provide example performance results on an evaluation dataset that is representative of a common use case.
    4. Best practices for deployment and performance optimization, including key levers that customers should consider to account for real-world deployment scenarios. 
  • Given there is no perfect method for identifying all risks associated with a model prior to its release, processes for responsible disclosure of model vulnerabilities, or of model features with misuse potential (discovered by users, developers, academics, etc.), are critical to ensuring timely remediation of newly discovered issues. Please describe any processes you have in place for enabling and incentivising the responsible disclosure of such issues, as well as your technical and governance processes (including response times) for remediating and sharing information about issues identified.

    • Vulnerability Reporting: Amazon has long recognised that the security of our products and services is enhanced through robust coordinated vulnerability disclosure processes that encourage customers and security researchers to report potential vulnerabilities so they can be quickly remediated. We maintain reporting mechanisms that span our services, devices, and the Amazon store to facilitate the disclosure of vulnerabilities, and we are committed to quickly investigating all reports upon receipt. Amazon is leveraging our existing security vulnerability reporting mechanisms to enhance AI safety by encouraging the submission of reports about our AI products and services. Extending these mechanisms to our AI products and services will be an important means of addressing the new categories of safety risk implicated by frontier models.
    • Notification: If applicable, AWS coordinates public notification of any validated vulnerability to all impacted customers, and for any significant issues makes public notifications in the form of our Security Bulletins.
    • Cross-Industry Information Sharing: We are likewise committed to working with industry peers and the broader community of AI stakeholders to enhance these capabilities and, where necessary, to develop new mechanisms for addressing risk. We intend to participate in cross-industry information sharing programs to share threat intelligence information about potential frontier safety risks.
    • Abuse Reporting: In addition to our vulnerability reporting mechanism, we maintain a related process for reporting abuse of AWS resources (such as suspicious activity related to an EC2 instance or S3 bucket). Any information shared with AWS is kept confidential and only shared with a third party if it affects a third-party product. In that case, we will share this information with the third-party product's author or manufacturer based on their reporting processes.
  • Because prompt controls and similar model safety features are imperfect and can be bypassed by knowledgeable users, it is important to have monitoring mechanisms in place to detect, investigate, and respond to patterns of vulnerabilities, accidents or misuse (e.g. by blocking certain queries, or malicious users, or alerting downstream parties of risks arising from model misuse). Please describe your processes for monitoring for such issues post-release (e.g. through API monitoring). Please also describe your technical and governance processes (including response times) for remediating or reporting any issues identified. Please also describe how you measure the efficacy of such monitoring processes.

    • Amazon takes a lifecycle-based approach to managing the risks of AI, including potential risks that can arise from misuse of our AI/ML services. Our approach to risk mitigation begins at the earliest stages of model development, including the data sources we use in the pre-training processes. We evaluate the sources of data used for pre-training and take steps to screen out data that could compromise the reliability or safety of our models. 
    • Following pre-training of our large language models, we use a range of fine-tuning methods to improve their performance and predictability. Among other modes of fine-tuning, we use reinforcement learning with human feedback (RLHF) to shape model behavior, with a particular focus on discouraging the model from producing outputs that create AI safety risks. 
    • After a large language model is fine-tuned, we evaluate model capabilities to assess residual AI safety risks and determine whether additional safeguards are necessary. For our Amazon Bedrock service, which offers our Titan FMs, we have applied filters on user inputs and model outputs to address a range of potential risks, including AI safety risks. These filters use a combination of rules-based and machine learning classifiers that can be applied both to the prompt and the model output. The classifier algorithm processes model inputs and outputs, and assigns a harm type and a confidence level. The classification process is automated and does not involve human review of user inputs or model outputs. These classifier guardrails enable us to:
              o    Filter Harmful Content: These filters work by evaluating both user inputs and model responses against a set of defined policies and blocking them if those policies are violated. We can therefore detect and filter problematic and irrelevant queries and outputs by defining a set of denied topics (including topics related to safety concerns).
              o    Identify Patterns of Misuse: We also use classifier metrics to identify patterns of potential violations and recurring behavior. We may compile and share aggregated and anonymized classifier metrics with third-party model providers whose models are available on Bedrock. Amazon Bedrock does not store user input or model output and does not share these with third-party model providers.
              o    Automated Abuse Detection: When customers use Bedrock, we may conduct automated abuse detection to detect harmful content (e.g., content that incites violence) in user inputs and model outputs. We have the right to suspend access to Bedrock in the event that a customer continues to use the service in a manner that may violate our or a third-party model provider’s policies.
              o    Investigate Compliance Concerns: We may request information about customers’ use of Bedrock and compliance with our policies. This includes our AWS Responsible AI Policy, which requires customers to provide information about their use of the service upon request. In the event that a customer continues to use the service in a manner that may violate AWS’s policies or a third-party model provider’s AUP, AWS may suspend access to Bedrock, taking into account severity, recurrence of the activity, customer cooperation, and whether customers have mechanisms in place to prevent misuse of the service. 
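The layered guardrail described above — classifiers that assign a harm type and confidence to both prompts and outputs, combined with a rules-based set of denied topics — can be sketched as follows. This is an illustrative sketch only: the topic names, threshold, and keyword-based `classify` stub are hypothetical stand-ins, not a description of the Bedrock implementation, which uses trained ML classifiers.

```python
# Illustrative guardrail sketch. The classifier assigns (harm_type,
# confidence); a rules-based policy blocks denied topics above a
# confidence threshold, applied to both the prompt and the output.
# All names, topics, and thresholds are hypothetical.
from dataclasses import dataclass
from typing import Callable, Optional

DENIED_TOPICS = {"weapons_synthesis", "self_harm"}  # hypothetical policy set
CONFIDENCE_THRESHOLD = 0.8

@dataclass
class Verdict:
    allowed: bool
    harm_type: Optional[str] = None
    confidence: float = 0.0

def classify(text: str) -> tuple[Optional[str], float]:
    """Stand-in for a trained harm classifier (keyword match for illustration)."""
    if "build a weapon" in text.lower():
        return "weapons_synthesis", 0.95
    return None, 0.0

def guardrail(text: str) -> Verdict:
    harm_type, confidence = classify(text)
    if harm_type in DENIED_TOPICS and confidence >= CONFIDENCE_THRESHOLD:
        return Verdict(allowed=False, harm_type=harm_type, confidence=confidence)
    return Verdict(allowed=True)

def moderated_generate(prompt: str, model: Callable[[str], str]) -> str:
    """Screen the prompt, generate, then screen the output before returning it."""
    if not guardrail(prompt).allowed:
        return "[request blocked by content policy]"
    output = model(prompt)
    if not guardrail(output).allowed:
        return "[response blocked by content policy]"
    return output
```

Screening the output as well as the prompt matters because a benign-looking prompt can still elicit a policy-violating response; the per-verdict harm type and confidence also provide the aggregated metrics used to identify patterns of misuse.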
    • Our safety efforts are enhanced by a dedicated AWS Trust & Safety team that investigates third party reports of potential policy violations. Our Trust & Safety team maintains robust triage and escalation processes to ensure the appropriate review, validation, and resolution of abuse reports. Trust & Safety subject matter experts provide support and guidance to cross-functional red-teaming needs, where abuse insights, intelligence, and emerging risks are shared. Our internal abuse data, engagement with customers, actionable intelligence, and collaborations with external Trust & Safety experts and organizations ensure that we understand the threat landscape, industry trends, and emerging risks. This in turn helps us improve our prevention toolkit and response strategies. 
    • AWS’ Trust and Safety team has a dedicated strategic engagement arm to ensure that our work is evidence-based, demonstrates best practice, and is future focused. This includes membership of, and participation in, global alliances and initiatives dedicated to Trust and Safety (e.g. World Economic Forum Global Coalition on Digital Safety).
  • Model access, model weights, and some of the artefacts used to create models (e.g. architecture and algorithms, training data, infrastructure details) could potentially be misused to cause harm, particularly as models become more capable.

    Please describe any policies and practices you have for protecting your systems and information from theft, damage or loss and ensuring artefacts remain secure and can only be accessed by authorised and trusted parties.

    • Amazon has long experience building ML-based applications using secure software engineering practices. Amazon’s approach to security is rooted in the principles of least privilege, need-to-know, separation of duties, and defense in depth. Amazon implements these principles – from concept development through deployment and operations – through stringent security controls that address both logical and physical security risks. We likewise foster a culture that makes security a shared responsibility for everyone who has a role in developing and deploying our services.
    • Our Titan FMs, including their weights, are protected by our logical and physical security.

    Logical Security

    • We apply many of our existing security techniques and methods, such as AWS tenant isolation and data access authorization, to generative AI products to provide foundational security controls. As generative AI introduces new and unique security issues such as prompt injection, data extraction, and training data poisoning attacks, we have implemented new generative AI-specific security mechanisms, including automated prompt testing, security guardrails during model training and at run time, and model fine-tuning that improves model resiliency to malicious prompts.
    • We integrate security practices into every stage of the software development lifecycle – making threat modeling, security practices, and security testing part of the software development process itself. Our software development practices are anchored in DevSecOps, an integrated engineering approach to development and operations that includes security throughout. Key components of our approach include code analysis, change management, compliance management, threat modeling, security testing, and security training. And we have developed bespoke tools and processes that encourage collaboration between our developers, security specialists, and operations teams.
    • Our software developers use tools that detect security flaws as they write code; they follow two-person rules that mean no one person can check in code without inspection by another engineer; they run automated security tests on every code check-in; and a centralized security team penetration-tests the pre-release service for any vulnerabilities. Any flaws are then fixed before releasing the final application to end users. We then monitor for potential issues, making changes and working with teams to release updated versions as needed.
    • We apply the principle of least privilege and we enforce separation of duties with appropriate authorization for each interaction. We centralize identity management and utilize phishing-resistant multi-factor authentication for all human identity verification, and dynamic and automatic provisioning of credentials for software to eliminate reliance on long-term static credentials.
    • We protect data based on data classification and use encryption, tokenization, and access controls where appropriate. We also use mechanisms and tools to reduce or eliminate the need for direct access or manual processing of data. This reduces the risk of mishandling, modification, or human error when processing sensitive data. We also protect against insider threats using a variety of monitoring and assessment tools and techniques, as well as a security team that includes former law enforcement personnel who investigate any evidence of possible misbehavior.
    • Underlying all of these measures are our conceptual models of least privilege and separation of duties, accompanied by Zero Trust mechanisms. These focus on providing security controls around digital assets that do not depend on traditional network controls or network perimeters. Accordingly, we employ identity-centric controls that build on network controls but provide finer-grained, more frequently evaluated access decisions. These techniques require users and systems to strongly prove their identities and the trustworthiness of their devices, and enforce fine-grained identity-based authorization rules before allowing them to access applications, data, and other systems.
    • These practices are backstopped by our incident response protocols. We continuously and rigorously prepare for security incidents that might occur to ensure that we can respond and recover quickly. This includes the development of playbooks to guide our efforts and regular engagement across our security teams to practice their implementation.

    Physical Security

    • Our data centers are secure by design and our controls make that possible. Before we build a data center, we comprehensively assess potential threats and then design, implement, and test controls to ensure the systems, technology, and people we deploy counteract those risks.
    • Physical access to our data centers is limited to screened personnel within specific job functions and with specific approvals. Any employee whose job does not involve daily work inside the data center who needs data center access must first apply and provide a valid business justification. Access requests are evaluated in accordance with the principle of least privilege: employees are permitted to enter a data center only if they have a valid business justification and their access is limited temporally and to the specific layer of the data center for which they require access.
    • Physical access to AWS data centers is logged, monitored, and retained. AWS correlates information gained from logical and physical monitoring systems to enhance security on an as-needed basis. The AWS Security Operations Center performs regular threat and vulnerability reviews of our data centers. Ongoing assessment and mitigation of potential vulnerabilities is performed through data center risk assessment activities. We monitor our data centers using our global Security Operations Centers, which are responsible for monitoring, triaging, and executing security programs. They provide 24/7 global support by managing and monitoring data center access activities, equipping local teams and other support teams to respond to security incidents by triaging, consulting, analyzing, and dispatching responses.

    Penetration Testing  

    • Amazon Web Services conducts a security review and penetration testing before new products or significant new features launch. All significant security risks identified must be mitigated before services are allowed to launch. We have developed a penetration testing methodology that is tailored to generative AI risks to help us proactively identify issues in our generative AI products. We have a large number of qualified penetration testing engineers on staff, but also hire outside firms that employ some of the industry’s top talent. We vet penetration testing vendors using a multi-step selection process in which each vendor and tester is reviewed for bar-raising technical expertise and experience. Every year, vendors are re-assessed and contracts are issued based on Requests for Information, Requests for Proposal, and interviews.
  • With the increased capabilities of generative AI, there are significant concerns over the misuse of such abilities for misinformation, propaganda, deception, and other online harms. Please describe any technical measures and policies you have in place to directly or indirectly address such concerns. For example, if you have policies on acceptable use for generative AI, if you have engaged in public literacy efforts around these risks, or if you have explored or implemented watermarking or other technical measures for detection and attribution of AI generated content.

    • We recently published an AWS Responsible AI Policy to supplement our existing AWS Acceptable Use Policy and Service Terms. The AWS Responsible AI Policy applies to the use of all AWS AI/ML services, including our foundation model service Bedrock. The AWS Responsible AI Policy explicitly prohibits the use of our AI/ML services: 
    1. for intentional disinformation or deception;
    2. to violate the privacy rights of others, including unlawful tracking, monitoring, and identification;
    3. to depict a person’s voice or likeness without their consent or other appropriate rights, including unauthorized impersonation and non-consensual sexual imagery;
    4. for harm or abuse of a minor, including grooming and child sexual exploitation;  
    5. to harass, harm, or encourage the harm of individuals or specific groups;              
    6. to intentionally circumvent safety filters and functionality or prompt models to act in a manner that violates our Policies;       
    7. to perform a lethal function in a weapon without human authorization or control.
    • As our generative AI service offerings evolve, so too will the governance and technical safeguards we use to address the risks that might be implicated by the content they generate. As recognized by the White House Voluntary AI Commitments, multi-modal models that facilitate the creation of audio or visual content present particularly acute risks. We are actively exploring mechanisms to facilitate the identification of content generated by multi-modal models, including through watermarking and watermark detection APIs. In evaluating such mechanisms, we will aim to ensure that they are robust enough to withstand manipulation while also minimizing the impact on the integrity of the content and avoiding latency.
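One family of text watermarking mechanisms discussed in the research literature partitions the vocabulary into a pseudorandom "green list" at each generation step and biases sampling toward it; detection then measures how far the observed green-token fraction exceeds the chance baseline. The sketch below illustrates only the detection-side counting with a hypothetical hash-based partition; it is loosely inspired by published statistical-watermarking research and is not a description of any AWS implementation.

```python
# Illustrative "green list" watermark detection sketch. Each token is
# hashed together with its predecessor to decide whether it is "green";
# a watermarked generator would bias sampling toward green tokens, so
# watermarked text shows a green fraction well above the ~0.5 baseline.
# The hash partition and threshold here are hypothetical.
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0  # pseudorandom 50/50 vocabulary split

def green_fraction(tokens: list[str]) -> float:
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def likely_watermarked(tokens: list[str], threshold: float = 0.75) -> bool:
    # Un-watermarked text should hover near 0.5. A real detector would
    # use a statistical test (e.g., a z-score over the token count)
    # rather than a fixed threshold.
    return green_fraction(tokens) >= threshold
```

A scheme like this illustrates the robustness trade-off noted above: because detection is statistical, it degrades gracefully under partial edits to the text, but heavier paraphrasing erodes the signal.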
  • As AI capabilities are rapidly advancing, much is unknown about their societal, safety and security risks. Please describe any efforts you undertake, either internally or through engagement with external parties (e.g. individuals, organisations, governments), to conduct research and better understand and mitigate such risks. Please also include research efforts that are supported by you through non-monetary means, such as compute or model access.

    Amazon and AWS are committed to responsible AI research and its practical application. The company has taken a proactive approach to addressing the societal implications of artificial intelligence, a commitment that is reflected in our support for a wide range of research and investment initiatives.

    • Academic Engagement: Amazon actively collaborates with leading academics and researchers through programs like Amazon Scholars, Amazon Visiting Academics, and Amazon Postdoctoral Scientists. This collaboration allows for the pursuit of responsible AI research while these scholars continue their academic roles.
    • Funding Research: 
    • Amazon partners with academic institutions to advance responsible AI research. In 2019, the National Science Foundation (NSF) and Amazon jointly funded a $20 million, three-year collaboration to support research focused on Fairness in AI. To date, the program has provided funding for 70 faculty at over 40 institutions, resulting in 283 publications.
    • Amazon will be hosting the Fairness in AI conference in January 2024.
    • Through Amazon Research Awards, which offers unrestricted funds and AWS Promotional Credits to support research at academic institutions and non-profit organisations, we have a ~$1M dedicated Fairness in AI track.
    • Amazon awards an annual cybersecurity research grant whose scope includes generative AI security.
    • Contributions to Responsible AI Science: Amazon actively contributes to the responsible AI field by producing research on topics like bias, fairness, and privacy in AI. Recent publications include:
    • “I’m fully who I am”: Towards centering transgender and non-binary voices to measure biases in open language generation focuses on the potential marginalization and harm experienced by transgender and non-binary (TGNB) individuals in the context of language generation technologies. It emphasizes the importance of centering TGNB voices in assessing gender biases and harm, introducing the TANGO dataset to evaluate misgendering and harmful responses to gender disclosure. The findings highlight the prevalence of binary gender norms in language models, especially when using singular “they” and neopronouns, underscoring the need for further research to make AI more inclusive, grounded in community voices and interdisciplinary literature.
    • Unsupervised and semi-supervised bias benchmarking in face recognition introduces SPE-FR, a statistical method for evaluating face verification systems when identity labels are missing or incomplete, focusing on unsupervised scenarios. SPE-FR accurately assesses performance and reveals demographic biases in system performance through experiments with various face embedding models.
    • Utility-preserving privacy-enabled speech embeddings for emotion detection addresses audio privacy concerns and presents a method that modifies embeddings generated through adversarial task training to protect speaker privacy without compromising the utility for related tasks. This approach helps meet privacy regulations concerning biometrics and voiceprints while maintaining the effectiveness of audio representation learning.
    • Efficient fair PCA for fair representation learning addresses fair principal component analysis (PCA) with the aim of concealing demographic information in data. The authors propose a straightforward method that offers an analytical solution similar to standard PCA, can be kernelized, and is computationally more efficient than existing fair PCA methods while achieving comparable results.
    • Multiaccurate proxies for downstream fairness addresses the challenge of training a model for demographic fairness when sensitive features like race are unavailable during training. The authors propose a fairness pipeline in which an “upstream” learner learns a proxy model for the sensitive features from other attributes, allowing a “downstream” learner to train a fair model with minimal assumptions on the prediction task by adhering to multiaccuracy constraints. Satisfying multiaccuracy is more feasible than achieving classification accuracy, even when the sensitive features are challenging to predict.
    • University Hubs Program: Through the University Hubs Program, Amazon promotes collaboration between academia and industry, ensuring that research artefacts are publicly accessible, papers are published, and research is presented at public events. This promotes transparency and the dissemination of responsible AI research. There are currently six Hubs: Columbia, UCLA, UW, MIT, UT Austin, and the Max Planck Institute (Germany). Since our first engagement with Columbia in 2021, the Hubs program has funded more than 120 Sponsored Research Projects and 65 PhD Fellowships.
    • Diversity and Education Initiatives: Amazon supports programs that empower underrepresented individuals in AI and machine learning and create a more equitable and inclusive AI ecosystem. The AWS AI & ML Scholarship program is a $10 million annual education and scholarship program designed to prepare underrepresented and underserved students globally for careers in machine learning. Since its launch in 2021, 3,000 students from over 85 countries, 47 of them in the global south, have been awarded $15 million in scholarships. In 2023, another 1,000 students will receive $5 million in scholarships to build foundational ML skills in support of careers in AI. The program provides hands-on learning and mentorship for students to apply ML skills to issues relevant to them.
    • Resources and Education: Amazon’s Machine Learning University (MLU) offers accessible resources and educational materials, including courses and tools that promote responsible AI development. MLU provides free access to the same machine learning courses used to train Amazon’s own developers.
    • Engagement with Multi-Stakeholder Organisations: Amazon actively participates in various international and multi-stakeholder organisations that focus on responsible AI, such as the OECD AI working groups, the Partnership on AI, and the Responsible AI Institute. Through engagements with the White House and the UN, among others, we are committed to sharing our knowledge and expertise to advance the responsible and secure use of AI. We are closely engaged with NIST, ISO, and CEN-CENELEC in informing international technical standards for AI. Effective standards help reduce confusion about what AI is and what responsible AI entails, and help focus the industry on reducing potential harms. AWS is working with a community of diverse international stakeholders to improve AI standards for risk management, including on data quality, bias, and transparency.
    • Development of Responsible AI Tools: Amazon has invested in creating tools, like Amazon SageMaker Clarify and Amazon Augmented AI (Amazon A2I), that help developers and organisations identify and mitigate bias, as well as incorporate human review into ML systems.
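As a concrete illustration of the kind of pre-training bias measurement such tools support, the sketch below computes a difference-in-positive-proportions metric over a toy labelled dataset. The implementation is our own simplified illustration, not SageMaker Clarify's code; Clarify reports a family of related metrics (e.g., Difference in Proportions of Labels) over customer-specified facets.

```python
def difference_in_positive_proportions(labels, facet):
    """Pre-training bias metric in the spirit of SageMaker Clarify's DPL:
    the gap between the favourable-outcome rates of two facet groups
    ("a" and "d"). Values near 0 suggest the labels are balanced."""
    labels_a = [l for l, f in zip(labels, facet) if f == "a"]
    labels_d = [l for l, f in zip(labels, facet) if f == "d"]
    return sum(labels_a) / len(labels_a) - sum(labels_d) / len(labels_d)

labels = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = favourable outcome
facet  = ["a", "a", "a", "d", "d", "d", "a", "d"]
print(difference_in_positive_proportions(labels, facet))  # 0.75 - 0.25 = 0.5
```

A value near zero suggests the favourable outcome is similarly distributed across the two groups; a large gap flags the dataset for further review before training.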
  • Some concerns regarding AI systems, in particular large-scale systems such as frontier models, relate to the inputs required to make them, in particular the data used to train and operate them. Data raises concerns of privacy, copyright, and bias, among others. Please describe the policies that govern how you collect, curate, and manage access to data used throughout the AI lifecycle, including pre-training, fine-tuning and post-deployment.

    Amazon’s practices around data collection, curation, and management are tailored to address the unique considerations of our individual businesses and products. We maintain robust policies and practices related to appropriate handling of data, driven by our priority of earning customer trust. Our models are trained on data from a variety of sources, such as licensed data, open-source datasets, and publicly available data where appropriate. We are thoughtful about important risks raised by data, and our systems are designed with data security and privacy in mind. We continually test and assess our products and services to define, measure, and mitigate concerns about accuracy, fairness, intellectual property, appropriate use, toxicity, and privacy.

    To illustrate, below are several key examples from Amazon Bedrock and our Titan Foundation Models:

    • Protecting Customer Content During Fine-Tuning. AWS customers trust us to protect their most critical and sensitive assets: their data. Accordingly, Amazon Bedrock allows customers to privately customize FMs. Because the data customers use for customization is valuable intellectual property, it must remain completely protected, secure, and private during that process, and customers want control over how the data is shared and used. With Amazon Bedrock, customers can make a separate copy of the base foundation model that is accessible only by the customer and use this private copy for further training. None of the customer’s content is used to train the original base model. The customer can configure Amazon VPC settings to access Amazon Bedrock APIs and provide model fine-tuning data in a secure manner. The customer’s data is encrypted in transit (TLS 1.2) and at rest through service-managed keys.
    • Training Data. For our Titan Foundation Models, AWS uses data from the following sources for training: (1) data licensed from third parties; (2) open-source datasets; and (3) publicly available data where appropriate. Before including datasets in the Titan Foundation Models’ training data, AWS reviews them for possible bias, toxicity, legal, and other quality considerations. We store training data in secure repositories that are subject to robust access controls consistent with AWS’s state-of-the-art data security policies. We treat training data as confidential AWS information and apply appropriate security and access controls. In particular, we encrypt all data in transit and at rest. We retain training data for as long as it is needed to keep the models accurate and up to date, in accordance with our data retention policies. How long we retain data depends on a number of factors, such as its nature, sensitivity, and utility for training the Titan Foundation Models, as well as any legal requirements. We delete data that is unnecessary or no longer useful between training cycles.
    • Bias Testing. As part of AWS’s commitment to responsible use of AI and efforts to design services that perform well across demographic groups, we have created fairness metrics to measure performance across different populations, and have tested and measured the progress of the Titan Text Models. For example, we carry out internal and external adversarial-style testing on the Titan Text Models to determine the best ways to mitigate the risks associated with problematic inputs. We have also put in place processes to remediate potential fairness and bias issues, and analyzed the potential for bias in different use cases to determine the level of attention needed. We continue to monitor for potential bias in the Titan Text Models, including assessing that our models perform as expected across different population segments. To help prevent problematic bias, we train and test the Titan Text Models with synthetic data (i.e., data that we create). For example, we prompt models with synthetic data referring to different demographic groups such as race, religion, gender, and political ideology and evaluate responses using bias metrics.
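The synthetic-prompt testing described above can be illustrated with a minimal counterfactual probe: the same template is instantiated for different demographic groups and the model's responses are scored by a classifier. Everything below is a hypothetical stand-in; the template, the model call, and the scoring function are placeholders, not the metrics or models AWS actually uses.

```python
# Toy counterfactual bias probe over synthetic prompts. The template is
# instantiated once per demographic group and responses are compared.
TEMPLATE = "The {group} engineer asked a question in the meeting."
GROUPS = ["male", "female", "non-binary"]

def model(prompt):
    """Stand-in for a real LLM call."""
    return f"Response to: {prompt}"

def score(response):
    """Stand-in for a trained toxicity/sentiment classifier (returns 0-0.6)."""
    return len(response) % 7 / 10.0

scores = {g: score(model(TEMPLATE.format(group=g))) for g in GROUPS}
gap = max(scores.values()) - min(scores.values())
print(f"max score gap across groups: {gap:.2f}")
# A gap above a chosen threshold would flag the template for remediation.
```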

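Operationally, the private fine-tuning setup described under “Protecting Customer Content During Fine-Tuning” maps onto the Bedrock model-customization API. The sketch below assembles an illustrative request using the boto3 parameter names for CreateModelCustomizationJob at the time of writing; all ARNs, bucket names, and network identifiers are hypothetical placeholders and should be verified against current AWS documentation.

```python
# Sketch of a private fine-tuning request against Amazon Bedrock via boto3.
# All identifiers (ARNs, bucket names, subnet IDs) are hypothetical placeholders.
fine_tune_request = {
    "jobName": "titan-finetune-demo",
    "customModelName": "my-private-titan",
    "roleArn": "arn:aws:iam::123456789012:role/BedrockFineTuneRole",
    "baseModelIdentifier": "amazon.titan-text-express-v1",
    # Training data is read from the customer's own bucket over TLS...
    "trainingDataConfig": {"s3Uri": "s3://my-bucket/train.jsonl"},
    "outputDataConfig": {"s3Uri": "s3://my-bucket/output/"},
    # ...and the private model copy can be encrypted at rest with a
    # customer-managed KMS key instead of a service-managed one.
    "customModelKmsKeyId": "arn:aws:kms:us-east-1:123456789012:key/example-key-id",
    # Keep fine-tuning traffic inside the customer's VPC.
    "vpcConfig": {
        "subnetIds": ["subnet-0abc"],
        "securityGroupIds": ["sg-0def"],
    },
}

# Submitting the job (requires AWS credentials, so shown here only as a comment):
# import boto3
# bedrock = boto3.client("bedrock")
# bedrock.create_model_customization_job(**fine_tune_request)
```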
    More generally AWS makes core commitments regarding data security and privacy, which apply to customers’ use of Amazon Bedrock and our Titan Foundation Models:

    • Access. AWS customers maintain full control of the content they upload to the AWS services under their AWS accounts, and responsibility for configuring access to AWS services and resources. AWS provides an advanced set of access, encryption, and logging features to help customers do this effectively (e.g., AWS Identity and Access Management, AWS Organisations, and AWS CloudTrail). AWS provides APIs for customers to configure access control permissions for any of the services they develop or deploy in an AWS environment. AWS does not access or use customer content for any purpose without their agreement, and never uses customer content or derives information from it for marketing or advertising purposes.
    • Storage. Customers choose the AWS Region(s) in which their content is stored. Customers can replicate and back up their content in more than one AWS Region. AWS will not move or replicate customer content outside of their chosen AWS Region(s) except as agreed to by a customer.
    • Security. Customers choose how their content is secured. AWS offers industry-leading encryption features to protect customer content in transit and at rest, and we provide customers with the option to manage their own encryption keys. These data protection features include:
    • Data encryption capabilities available in over 100 AWS services.
    • Flexible key management options using AWS Key Management Service (KMS), allowing customers to choose whether to have AWS manage their encryption keys or enabling customers to keep complete control over their keys.
    • Disclosure of customer content. AWS will not disclose customer content unless we are required to do so to comply with the law or a binding order of a government body. If a governmental body sends AWS a demand for a customer’s content, we will attempt to redirect the governmental body to request that data directly from the relevant customer. If compelled to disclose customer content to a government body, we will give the customer reasonable notice of the demand to allow the customer to seek a protective order or other appropriate remedy, unless AWS is legally prohibited from doing so.
    • Security Assurance. We have developed a security assurance program that uses best practices for global privacy and data protection to help customers operate securely within AWS and make the best use of our security control environment. These security protections and control processes are validated by multiple independent third-party assessments.
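In practice, the access controls described above are expressed as IAM policies. The fragment below is an illustrative, non-prescriptive example that grants invoke-only access to a single Bedrock foundation model; the Region and model identifier are placeholders chosen for the example.

```python
import json

# Illustrative IAM policy: invoke-only access to one Bedrock foundation model.
# The Region and model identifier are hypothetical placeholders.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowInvokeSingleModel",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            # Foundation-model ARNs have an empty account field.
            "Resource": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-express-v1",
        }
    ],
}
print(json.dumps(policy, indent=2))
```

Attaching such a policy to a role or user, combined with CloudTrail logging of the resulting API calls, gives customers the fine-grained, auditable control described in the Access commitment above.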