AWS Security Blog
Preparing for take-off: Regulatory perspectives on generative AI adoption within Australian financial services
The Australian financial services prudential regulator, the Australian Prudential Regulation Authority (APRA), has provided its most substantial guidance on generative AI to date in Member Therese McCarthy Hockey’s remarks to the AFIA Risk Summit 2024. The guidance gives a green light for banks, insurance companies, and superannuation funds to accelerate their adoption of this transformative technology, while reminding the financial services industry that adequate guardrails are needed so the benefits of generative AI don’t come at an unacceptable cost to the community.
Amazon Web Services (AWS) is committed to the responsible development and use of AI, and strongly supports APRA’s message to proceed with generative AI adoption once appropriate guardrails are in place. AWS is at the forefront of generative AI research and innovation, and many of our financial services customers are already harnessing the benefits of our artificial intelligence (AI), machine learning (ML), and generative AI services. We help our customers achieve their business goals while meeting, and aiming to exceed, their regulators’ expectations.
A green light for AI, ML, and generative AI
APRA’s guidance offers a clear pathway for the adoption of AI, ML, and generative AI technologies by APRA-regulated entities. Ms. McCarthy Hockey notes that there is “keen support” within APRA and across government for companies to realize the benefits of technology-led innovation, and she highlights the significant advantages that effective use of generative AI can deliver, such as improved productivity, cost efficiencies, more personalized customer experiences, and the ability to redirect valuable resources to areas of greater need.
“Within APRA and across governments and regulators there is keen support for the realisation of tangible improvements through innovation.” — APRA Member Therese McCarthy Hockey’s remarks to AFIA Risk Summit May 2024
AWS financial services customers are starting to use more advanced AI for a variety of purposes, such as customer service, marketing, application development, fraud detection, and regulatory compliance. Specific use cases cited by APRA include using generative AI to rapidly review long documents against criteria such as policy requirements, using generative AI-powered coding tools to produce better code faster, and creating generative AI bots that simulate customer testing of products and services. These build on less sophisticated forms of AI that have been in operation for some time; APRA cites internet chatbots and natural language processing as examples where businesses have already realized efficiencies by automating and speeding up manual or time-consuming processes.
APRA and other financial services regulators are also experimenting with AI themselves. In her speech, Ms. McCarthy Hockey noted that APRA uses text analysis tools on an ongoing basis to review responses to its risk culture surveys, with the results helping APRA risk specialists direct their focus where it’s most required. APRA is also experimenting with natural language processing tools to review incident reporting data from regulated entities and to highlight incidents that warrant further investigation, reducing the human effort required of APRA staff and increasing regulatory efficiency. Finally, APRA is collaborating with the Australian Securities and Investments Commission (ASIC) and the Reserve Bank of Australia (RBA) on a proof of concept to reduce the effort required to compare, analyze, and summarize the reams of documentation the three agencies must review as part of their regular entity supervision duties.
Risks must be understood and managed
APRA advocates for a prudent approach to experimentation with these technologies. As was the case with cloud adoption, organizations with more mature risk and data management capabilities will be able to move faster than those without.
“APRA’s message to the entities we regulate is that firm board oversight, robust technology platforms and strong risk management are essential for companies that want to begin experimenting with new ways of harnessing AI.” — APRA Member Therese McCarthy Hockey’s remarks to AFIA Risk Summit May 2024
APRA’s current regulatory framework is fit-for-purpose
APRA also made the specific point that its existing prudential framework remains fit-for-purpose for the increased uptake of AI, ML, and generative AI.
APRA’s primary focus is on governance, citing three key areas:
- Do boards have sufficient capability to determine an appropriate AI strategy and make sound risk management decisions? Are they able to effectively challenge management? What sort of learning and development programs are in train, and do the boards have access to external skills and advice if required?
- How mature is the risk culture? Is a risk management mindset embedded and functioning effectively across all three lines of defense? What controls and monitoring are in place to help prevent employees making unauthorized use of AI, ML, and generative AI tools?
- Is there adequate data quality and reliability? AI outputs depend directly on the quality of the inputs. APRA states that data management is an area where many regulated entities have a long way to go.
APRA also focuses on accountability, reminding regulated entities that as with any form of outsourcing or use of third-party services, the regulated entity retains accountability for the outputs of the AI, ML, and generative AI programs they deploy. There must always be a human in the loop: a person accountable for verifying that AI operates as intended. The level of human involvement can vary—for example, APRA does not suggest that a human should be involved in every AI decision made by a fraud detection service, but there should be a human who is accountable for the algorithm it runs, its operations, and the outcomes it drives.
How AWS is helping customers locally and globally use AI responsibly
From the outset, AWS has prioritized responsible AI innovation by embedding safety, fairness, robustness, security, and privacy into our development processes, and continuously educating our employees. We extend this commitment through to our customers by designing services that help customers derive business value from AI in a safe and responsible way.
AWS collaborates with organizations such as the OECD AI working groups, the Partnership on AI, and the Responsible AI Institute, and maintains strategic partnerships with universities worldwide. In Australia, AWS works with key institutions such as the National AI Centre, CSIRO, the Australian Information Industry Association, and the Tech Council of Australia to provide insights on responsible AI adoption and to maximize the benefits of AI technology for the country. The recent Voluntary AI Safety Standard developed by the National AI Centre is the start of clear guidance for Australian organizations to follow, and AWS is engaging with the Australian and other governments on the responsible adoption and use of generative AI.
Recently, AWS has supported global financial services customers in critical areas such as risk management, financial crime prevention, and cybersecurity by using generative AI to analyze and respond to large data volumes in real time. Verafin (a Nasdaq company) uses Amazon Bedrock to improve its anti-money laundering and fraud prevention processes. This application of AI enhances the effectiveness of financial crime management programs. Mastercard employs AWS AI and machine learning services to detect and prevent fraud while providing the most seamless customer experience possible.
Generative AI’s role in modernizing legacy systems is increasingly recognized, especially among Australian financial services customers who are undertaking transformation programs to reduce technology debt and enhance process resilience. CommBank, PEXA, and National Australia Bank (NAB) employ generative AI technology to improve speed, quality, and security when building and modifying applications.
How to implement responsible AI within your organization
The core dimensions of responsible AI at AWS align to the key regulatory considerations of both APRA and regulators globally:
- Fairness – Considering impacts on different groups of stakeholders
- Explainability – Understanding and evaluating system outputs
- Privacy and security – Appropriately obtaining, using, and protecting data and models
- Safety – Working to prevent harmful system output and misuse
- Controllability – Having mechanisms to monitor and steer AI system behavior
- Veracity and robustness – Achieving correct system outputs, even with unexpected or adversarial inputs
- Governance – Incorporating best practices into the AI supply chain, including providers and deployers
- Transparency – Enabling stakeholders to make informed choices about their engagement with an AI system
Note that responsible AI is a continually evolving field. Customers can stay up to date with developments in this area on our Responsible AI webpage.
The Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and Generative AI provides extensive guidance and serves as a starting point to help customers meet, and in many cases exceed, regulatory expectations.
We have integrated features into our generative AI services to facilitate the application of responsible AI policies for organizations. For example, Amazon Bedrock Guardrails can help financial services organizations comply with APRA guidance on AI use in several key ways:
- Content filtering – Guardrails allows organizations to configure content filters that block harmful or inappropriate content in AI model inputs and outputs. This helps AI applications adhere to APRA’s expectations for responsible AI use.
- Topic restrictions – Organizations can define specific topics to be avoided in AI interactions. For example, a banking chatbot could be configured so it won’t provide investment advice, aligning with regulatory restrictions.
- Sensitive information protection – Guardrails can detect and redact personally identifiable information (PII) in AI inputs and outputs. This helps protect customer privacy and aids in compliance with data protection requirements.
- Custom word filters – Companies can set up lists of words or phrases to block, helping maintain appropriate communication.
- Contextual grounding checks – This feature helps detect and filter AI hallucinations in model responses where a reference source and a user query are provided, improving the accuracy and reliability of AI-generated responses. This aligns with APRA’s focus on making sure that AI systems provide accurate and trustworthy information.
- Customizable policies – Guardrails allows organizations to tailor AI safeguards to their specific needs and regulatory requirements, helping them align with APRA’s principles-based approach.
- Consistent safeguards – Guardrails can be applied across multiple AI models and applications, enabling a standardized approach to responsible AI use across the organization.
- Transparency and testing – The ability to test guardrails and iterate on configurations supports APRA’s expectations for due diligence and appropriate monitoring of AI systems.
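As an illustration, several of the controls above can be expressed in a single Amazon Bedrock CreateGuardrail request. The sketch below assembles a hypothetical configuration for a retail-banking chatbot using the boto3 request shape; the guardrail name, blocked-message text, topic definition, and filter choices are all illustrative assumptions, not APRA-mandated settings, and the actual API call (commented out) requires AWS credentials with Bedrock permissions.

```python
def build_guardrail_request(name: str) -> dict:
    """Assemble an illustrative CreateGuardrail request covering the
    content-filtering, topic-restriction, PII, and word-filter controls."""
    return {
        "name": name,  # hypothetical guardrail name
        "description": "Baseline guardrail for a retail-banking chatbot",
        # Topic restrictions: deny investment advice, per the chatbot example
        "topicPolicyConfig": {
            "topicsConfig": [{
                "name": "InvestmentAdvice",
                "definition": "Recommendations about buying or selling "
                              "financial products or securities.",
                "examples": ["Which shares should I buy?"],
                "type": "DENY",
            }]
        },
        # Content filtering: block harmful content in inputs and outputs
        "contentPolicyConfig": {
            "filtersConfig": [
                {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
                {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            ]
        },
        # Sensitive information protection: redact PII such as email addresses
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [{"type": "EMAIL", "action": "ANONYMIZE"}]
        },
        # Custom word filters: block specific phrases
        "wordPolicyConfig": {
            "wordsConfig": [{"text": "guaranteed returns"}]
        },
        # Messages returned when the guardrail intervenes
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't provide that response.",
    }

# With AWS credentials in place, the request can be submitted via boto3:
# import boto3
# bedrock = boto3.client("bedrock", region_name="ap-southeast-2")
# response = bedrock.create_guardrail(**build_guardrail_request("banking-chatbot"))
```

Keeping the configuration in a plain function like this also makes it easy to review and version-control the guardrail policy independently of the deployment step.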
We have a comprehensive user guide detailing how to implement, configure, and test Amazon Bedrock Guardrails.
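For instance, a guardrail can be exercised in isolation with the ApplyGuardrail API before it is wired into an application, which supports the testing and iteration expectations noted above. The sketch below is a hedged example: the guardrail identifier is a placeholder, the live call (commented out) assumes boto3 and an existing guardrail in your account, and the small helper simply interprets the response action.

```python
def guardrail_intervened(response: dict) -> bool:
    """Return True when the guardrail blocked or modified the evaluated text."""
    return response.get("action") == "GUARDRAIL_INTERVENED"

# With AWS credentials, the working (DRAFT) version of a guardrail can be
# tested directly against sample prompts, without invoking a model:
# import boto3
# runtime = boto3.client("bedrock-runtime", region_name="ap-southeast-2")
# response = runtime.apply_guardrail(
#     guardrailIdentifier="<guardrail-id>",  # placeholder for your guardrail
#     guardrailVersion="DRAFT",
#     source="INPUT",                        # evaluate a user prompt
#     content=[{"text": {"text": "Which shares should I buy?"}}],
# )
# if guardrail_intervened(response):
#     print("Blocked or redacted:", response["outputs"])
```

Running a suite of such sample prompts against each guardrail revision is one practical way to document the due diligence and monitoring that regulators expect.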
AWS AI Service Cards also provide detailed information on AWS AI services, including intended use cases, limitations, and responsible AI design choices. This transparency helps financial institutions understand and responsibly use AI technologies.
APRA’s existing prudential standards do not set specific rules for managing AI/ML and generative AI risks. Instead, APRA outlines desired risk management outcomes, leaving it to each regulated entity to assess AI deployment risks and implement appropriate controls. AWS offers the User Guide to Financial Services Regulations and Guidelines in Australia to help customers meet APRA’s requirements.
Ultimately, the rate of AI, ML, and generative AI adoption among APRA-regulated entities will be determined by the risk appetite and risk management capability of individual entities. APRA openly encourages regulated entities (our financial services customers) that are considering AI, ML, and generative AI experimentation and adoption to reach out to APRA directly and initiate dialogue. As a highly experienced, knowledgeable, and approachable regulator, APRA can provide valuable insights and guidance to regulated entities.
Conclusion and next steps
APRA’s messaging to industry is a significant milestone for AI, ML, and generative AI adoption in the Australian financial services industry. Boards, executives, and technology decision-makers should review APRA’s Risk Summit speech and consider APRA’s support for the adoption of these technologies when refining their strategies and plans.
AWS, and our AWS Partner Network, are experienced in working with financial services customers, and there are already a number of examples both internationally and locally where generative AI has been implemented to create value for our customers. AWS is ready to help our customers meet and exceed APRA’s risk management expectations.
Contact your AWS representative to discuss how the AWS solution architects, AWS Professional Services teams, AWS Training and Certification, and the AWS Partner Network can assist with your AI, ML, and generative AI adoption journey. If you don’t have an AWS representative, please contact us at https://aws.amazon.com/contact-us.