National framework for AI assurance in Australian government: Guidance when building with AWS AI/ML solutions

As Australia moves forward with its National framework for the assurance of artificial intelligence in government, Amazon Web Services (AWS) is committed to helping our customers implement AI solutions that align with Australia’s AI Ethics Principles.

Mechanisms to help businesses and organisations identify and manage risks in the design and implementation of AI solutions are being introduced globally, including international standards (ISO 42001 and ISO 23894), the NIST AI Risk Management Framework, legislation such as the EU AI Act, and localised standards and frameworks such as the NSW AI Assurance Framework.

This post outlines how AWS tools and services can support government agencies in adhering to Australia’s AI Ethics Principles when developing AI and machine learning (ML) solutions, with a focus on practical implementation to help Australian governments innovate responsibly whilst maintaining cloud-based agility.

The national framework for the assurance of AI in government

The framework, released June 21, 2024, is a joint approach by the Australian federal, state, and territory governments to ensure the safe and responsible use of AI in the public sector.

It maps implementation guidelines to the country’s eight AI ethics principles:

  • Human, societal and environmental wellbeing
  • Human-centred values
  • Fairness
  • Privacy protection and security
  • Reliability and safety
  • Transparency and explainability
  • Contestability
  • Accountability

At AWS, we focus on developing responsible AI through eight dimensions: fairness, explainability, controllability, safety, privacy and security, governance, transparency, and veracity and robustness. Each dimension is explored in depth in our responsible use of machine learning guide.

Let’s dive into how AWS supports organisations in implementing the national framework for the assurance of AI in government.

Human, societal and environmental wellbeing

To maximise wellbeing and minimise impacts, we encourage you to focus on defining high-value use cases that address specific business needs. AWS offers a Working Backwards workshop to systematically vet ideas and create new products by defining the customer experience and iteratively working backwards from it. This targeted approach offers clearer accountability, enhanced transparency, simplified risk assessment, and focused testing procedures.

Cloud solutions can offer major environmental benefits through specialised data centres. Amazon has invested heavily in green energy to power its operations globally, including two solar farms in regional New South Wales (NSW), with the mission to power operations with 100 percent renewable energy by 2025. AWS is also investing in custom processors (Graviton) and accelerators (Trainium), which use up to 60 percent and 25 percent less energy, respectively, for the same performance as comparable Amazon Elastic Compute Cloud (Amazon EC2) instances.

Infrastructure choices and their forecasted impact on carbon emissions can be tracked, measured, and reviewed using the AWS customer carbon footprint tool, which is based on Greenhouse Gas (GHG) Protocol standards. Solutions created on the cloud can also focus on environmental wellbeing. As an example, Qantas is enabling more than 50,000 tons of carbon emissions reductions each year by using AI/ML for real-time flight path optimisation.

Human-centred values

At Amazon, we prioritise equity, privacy, fairness, and respect for human rights and the rule of law. Our AWS Responsible AI Policy describes the responsible AI requirements for the use of our AI/ML services, as well as prohibited uses.

In our Responsible use of machine learning guide, we advise that diverse and multidisciplinary teams are crucial for developing responsible ML systems and shaping AI policies. These teams should encompass a wide range of backgrounds, perspectives, skills, and experiences, including various genders, races, ethnicities, abilities, ages, religions, sexual orientations, military statuses, and political views.

Cross-functional expertise from technologists, ethicists, lawyers, and domain experts ensures holistic understanding and consideration of ethical, legal, and domain-specific factors. Organisations should also consider leveraging external resources such as user testing, focus groups, third-party advocacy groups, and public resources like the EU Assessment List for Trustworthy Artificial Intelligence.

Fairness

Imbalances in training data may result in bias that impacts model decisions. Measuring bias is crucial for mitigating it, and each bias measure corresponds to a different fairness notion. Follow best practices to consider fairness throughout your AI project’s lifecycle.

AWS provides resources to understand different types of bias and the corresponding bias metrics, guide you through available bias metrics and their use in Amazon SageMaker Clarify, and understand AI fairness and explainability.
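
As a minimal sketch of what this looks like in practice, the following runs a pre-training bias analysis with SageMaker Clarify using the SageMaker Python SDK. The IAM role, S3 paths, and column names (including the "gender" facet) are illustrative assumptions, not a prescribed schema:

```python
from sagemaker import Session, clarify

session = Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

processor = clarify.SageMakerClarifyProcessor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Describe the training dataset; paths and columns are illustrative.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://example-bucket/train.csv",
    s3_output_path="s3://example-bucket/clarify-output",
    label="approved",
    headers=["age", "gender", "income", "approved"],
    dataset_type="text/csv",
)

# Measure bias with respect to a facet (protected attribute), here "gender".
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],    # the favourable outcome value
    facet_name="gender",
    facet_values_or_threshold=[0],    # encoding of the group to check
)

# Runs all pre-training bias metrics and writes a report to the output path.
processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods="all",
)
```

The resulting report includes metrics such as class imbalance (CI) and difference in positive proportions in labels (DPL), which can feed directly into your fairness review.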

Privacy protection and security

When working on a new use case, consider doing a privacy impact assessment (PIA) at the design phase. Privacy by design ensures that good privacy practices are built into your information systems, business processes, products, and services. AWS provides many security tools to maintain data privacy and is accredited to operate at the IRAP PROTECTED classification level.

To assist with planning a strong security foundation for ML workloads, you can refer to the AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI. For Australian and New Zealand customers, AWS also provides detailed local guidelines for security and compliance.

Under the AWS Service Terms, customer content processed by an “AI Service” on AWS may be used and stored by AWS to develop and improve AWS and affiliate technologies. Agencies must opt out of this by configuring an AI Services Opt-Out Policy in AWS Organizations for any content that needs to stay private.
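
As an illustrative sketch, the following creates and attaches an organisation-wide opt-out policy with the AWS Organizations API via boto3. The policy syntax follows the documented opt-out format; running this requires the organisation's management account, and the policy name is a placeholder:

```python
import json
import boto3

org = boto3.client("organizations")  # call from the management account
root_id = org.list_roots()["Roots"][0]["Id"]

# Enable the policy type on the root once; this raises an error if it is
# already enabled.
org.enable_policy_type(RootId=root_id, PolicyType="AISERVICES_OPT_OUT_POLICY")

# "default" covers every AI service; "optOut" stops content from being
# used for service improvement (documented opt-out policy syntax).
policy_content = {
    "services": {"default": {"opt_out_policy": {"@@assign": "optOut"}}}
}

policy = org.create_policy(
    Name="ai-services-opt-out",  # hypothetical name
    Description="Opt all accounts out of AI service content use",
    Type="AISERVICES_OPT_OUT_POLICY",
    Content=json.dumps(policy_content),
)

# Attaching at the root applies the opt-out to every account in the organisation.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId=root_id,
)
```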

AWS also publishes detailed guidance on securing generative AI workloads.

Reliability and safety

We believe that ISO 42001 certification will be one important mechanism for demonstrating excellence in responsibly developing and deploying AI systems and applications. Amazon is pursuing ISO 42001 conformity, and in our Safe and Responsible AI submission, AWS recommended that the Australian Government recognise this standard as a conformance mechanism.

Customers should consider potential inaccuracies in ML system results and prepare a plan to address them, such as narrowing scope, introducing human oversight, or altering dependencies on the AI system. For example, the Titan Image Generator foundation model on Amazon Bedrock adds an invisible watermark to generated images to help reduce the spread of deceptive content and disinformation.

To assess if an AI system operates as intended, it is important to use accurate and representative training data. AWS encourages specific policies and provides safeguards such as Guardrails for Amazon Bedrock to block harmful user inputs.
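
A minimal sketch of how such a guardrail might be created and applied at inference time follows, using boto3. The guardrail name, filter selection, and model ID are illustrative assumptions to adapt to your agency's policy:

```python
import boto3

bedrock = boto3.client("bedrock")          # control plane
runtime = boto3.client("bedrock-runtime")  # inference

# A guardrail that blocks hateful or violent content and prompt attacks.
guardrail = bedrock.create_guardrail(
    name="agency-content-safety",  # hypothetical name
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
        ]
    },
    blockedInputMessaging="This request can't be processed.",
    blockedOutputsMessaging="A safe response could not be generated.",
)

# Apply the guardrail on a Converse API call; the model ID is an example.
response = runtime.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",
    messages=[{"role": "user", "content": [{"text": "Hello"}]}],
    guardrailConfig={
        "guardrailIdentifier": guardrail["guardrailId"],
        "guardrailVersion": "DRAFT",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```

Blocked requests return the configured messages instead of model output, which keeps harmful content out of citizen-facing responses.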

Evaluate models thoroughly on safety characteristics such as prompt stereotyping (encoded biases for gender, socioeconomic status, and so on), factual knowledge, and toxicity. FMEval, an open source library available in Amazon SageMaker, supports developing these insights. Model evaluation is also available in Amazon Bedrock for large language models (LLMs).
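
As a small illustration, the sketch below uses FMEval to score a Bedrock-hosted model for toxicity against the library's built-in datasets. The model ID and request/response templates are assumptions, and interfaces may differ between library versions:

```python
# pip install fmeval
from fmeval.eval_algorithms.toxicity import Toxicity, ToxicityConfig
from fmeval.model_runners.bedrock_model_runner import BedrockModelRunner

# Wrap the model so FMEval can send prompts and parse completions.
model = BedrockModelRunner(
    model_id="amazon.titan-text-express-v1",   # example model
    content_template='{"inputText": $prompt}',
    output="results[0].outputText",            # JMESPath to the completion
)

toxicity = Toxicity(ToxicityConfig())
# With no dataset_config, FMEval runs its built-in toxicity datasets;
# num_records keeps this illustration small.
for result in toxicity.evaluate(model=model, num_records=20, save=True):
    print(result.dataset_name, [(s.name, s.value) for s in result.dataset_scores])
```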

Model outputs can be evaluated by a pool of human evaluators, or through an automated process. Test performance through techniques like red teaming and reinforcement learning from human feedback (RLHF).

Continually evaluate performance and responsibility metrics before deployment. Use tools like Amazon SageMaker Model Monitor to detect data drift and prompt retraining if needed.
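
A hedged sketch of what this looks like with the SageMaker Python SDK follows. The endpoint name, IAM role, and S3 locations are placeholders; a baseline is computed from training data, and the schedule then checks captured endpoint traffic against it:

```python
from sagemaker.model_monitor import CronExpressionGenerator, DefaultModelMonitor
from sagemaker.model_monitor.dataset_format import DatasetFormat

role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

monitor = DefaultModelMonitor(
    role=role,
    instance_count=1,
    instance_type="ml.m5.xlarge",
    volume_size_in_gb=20,
    max_runtime_in_seconds=3600,
)

# Compute baseline statistics and constraints from the training data;
# later violations against them indicate drift.
monitor.suggest_baseline(
    baseline_dataset="s3://example-bucket/train.csv",  # hypothetical path
    dataset_format=DatasetFormat.csv(header=True),
    output_s3_uri="s3://example-bucket/monitor/baseline",
)

# Check captured endpoint traffic against the baseline every hour.
monitor.create_monitoring_schedule(
    monitor_schedule_name="citizen-model-drift",  # hypothetical name
    endpoint_input="my-production-endpoint",      # hypothetical endpoint
    output_s3_uri="s3://example-bucket/monitor/reports",
    statistics=monitor.baseline_statistics(),
    constraints=monitor.suggested_constraints(),
    schedule_cron_expression=CronExpressionGenerator.hourly(),
)
```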

Transparency and explainability

AWS AI Service Cards provide transparency on the intended use cases, limitations, and responsible AI design choices for AWS AI services. For generative AI services, the cards provide key transparency information about how the underlying LLMs have been evaluated for veracity (that is, the likelihood of hallucination), safety (including our efforts to red team the model), and controllability.

Amazon SageMaker Clarify provides greater visibility into model behaviour, so you can provide transparency to stakeholders, inform humans making decisions, and track whether a model is performing as intended. ML Governance from Amazon SageMaker provides purpose-built tools for improving governance of your ML projects, letting you capture and share model information and stay informed on model behaviour. You can leverage Amazon SageMaker Model Cards to document critical details about your ML models for streamlined governance and reporting.
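
As an example of the documentation workflow, a model card might be created programmatically as sketched below. The card name and content are illustrative, and the Content payload must follow the SageMaker model card JSON schema:

```python
import json
import boto3

sm = boto3.client("sagemaker")

# Minimal card content; only a couple of illustrative sections are filled in.
content = {
    "model_overview": {
        "model_description": "Classifier that triages citizen service requests.",
    },
    "intended_uses": {
        "purpose_of_model": "Route requests to the correct agency team.",
        "risk_rating": "Medium",
    },
}

sm.create_model_card(
    ModelCardName="service-triage-model-card",  # hypothetical name
    ModelCardStatus="Draft",
    Content=json.dumps(content),
)
```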

Contestability

For AI/ML projects where there is a risk of adverse outcomes for a person, community, group or environment, a timely contestability process should be included in the management of the solution. In our Responsible use of machine learning guide, we provide three key guidelines supporting the contestability pillar:

  1. During design and development, implement robust tracking and review mechanisms and maintain comprehensive documentation of design decisions and inputs. This traceable record will be invaluable for both internal audits and external reviews, ensuring transparency and facilitating continuous improvement.
  2. During deployment, clearly inform citizens when they are interacting with an AI system, provide alternatives for those who may not wish to engage with AI, and ensure your AI systems are accessible to all members of the public, including those with disabilities, to maintain equitable service delivery.
  3. During the operational phase, establish strong feedback loops to continuously improve your AI systems. Actively seek input from citizens and stakeholders through various channels. Develop clear policies on how feedback will be evaluated and addressed. For AI systems that impact public services or decision-making processes, create robust mechanisms for citizens to request information or appeal decisions.

Following these approaches not only improves system performance but also builds public trust in government AI initiatives.

Accountability

Successful AI adoption requires significant cultural and organisational changes, including defining the roles and responsibilities required for accountability. You can leverage the stakeholder map defined in ISO/IEC 22989 and described in our blog post Learn how to assess the risk of AI systems.

Additionally, it is critical to ensure your AI maintenance team is well-resourced, trained, and empowered to understand, operate, and critically evaluate the system’s performance and decision-making processes. Amazon is investing heavily in Australia’s digital future and has provided free AI and cloud skills training to more than 400,000 people across the country since 2017. In November 2023, Amazon launched the “AI Ready” initiative, which offers a suite of free AI and generative AI training courses, aligned to both technical and non-technical roles, so that anyone can build AI skills.

Summary

This post explored how AWS supports Australian government agencies in implementing AI solutions aligned with Australia’s AI Ethics Principles. We outlined the AWS responsible AI approach, which encompasses fairness, explainability, safety, and privacy. For each of the eight AI ethics principles, we detailed AWS tools, services, and best practices, such as Amazon SageMaker Clarify for bias detection.

The AWS Public Sector team in Australia and New Zealand is committed to helping customers build responsible AI/ML solutions with citizens’ best interests in mind. If your organisation wishes to discuss responsible AI/ML solutions, please contact us.

Natacha Fort

Natacha is the government data science lead solutions architect for public sector in Australia and New Zealand for Amazon Web Services (AWS). She has a passion for helping organisations navigate their machine learning (ML) journey. Natacha collaborates with organisations to establish a foundation for success with MLOps, is an active contributor in the responsible artificial intelligence (AI) space at AWS, and has recently started exploring the potential of generative AI to deliver outcomes in government.

Pauline Kelly

Pauline is a solutions architect for public sector in Australia and New Zealand for Amazon Web Services (AWS). She is passionate about helping healthcare organisations realise their visions of improving patient outcomes and experiences, and bridging the chasm between medical technology research and practice. Pauline also assists customers with realising novel applications of generative artificial intelligence (AI) and machine learning (ML) in a cost-effective, sustainable, and responsible manner.